Tesla's Autopilot needs to be shut down and NHTSA needs to do their due diligence

When you have to pay more attention to the road using the car's AP than with your own brain's AP, it is not an autopilot.
The car's AP should relieve driving stress, not add to it. I know it's in beta, but TACC should not be.
 
... as Lane-Keeping Assistance. If Tesla wants to claim superiority, they could have called it Super Lane-Keeping Assistance.

This, sir, shows why Elon is an entrepreneurial genius of the first order and you are not.

NHTSA report regarding AP1.

Some people die because they are strangled by their seat belt, or their airbag hits them the wrong way, or their vehicle stability control prevents them from doing a certain evasive maneuver. But on average those technologies save many more lives than they take. People who have passed their freshman-year philosophy/ethics course have no problem understanding that. Keep studying up on the trolley car problem. This isn't one of those harder problems.

Apparently YOU didn't do well in Ethics 101, because the trolley problem's conclusion is entirely debatable. The conclusion one takes from a survey-level ethics course, if one is paying attention, is that ethics as a field is full of professionals who completely disagree with each other. Deontology, consequentialism, noncognitivism, etc. etc. etc.

Having said that, I agree with you. If autopilot saves lives on the whole, then damnit, let's use it and stop debating.

And what makes you think that you can base the safety of EAP today on historical data from AP1? Are you saying EAP today is just as safe as or safer than AP1? If not, don't you think it would have been safer to keep shipping AP1 until EAP was at least on par with it? Anyone who could have been saved by AP1 Emergency Braking is out of luck driving an AP2 car today.

The logic behind seat belts and such is different: there is a clear "benefit in most situations" argument. What Tesla is doing here is akin to a drug company that has a successful drug coming up with a brand-new formulation, using the success rate of the old drug to justify selling the new one to people, then watching the results and tweaking the formula. Would you advocate for such drug trials on the public and justify them with the greater good (which I'm sure you could get numbers for, since it definitely speeds up the drug development process to be able to test on the general public)?

Yes I would advocate for drug trials on the public as long as people can sign disclaimers and make free decisions with full knowledge of the consequences.
 
Yes I would advocate for drug trials on the public as long as people can sign disclaimers and make free decisions with full knowledge of the consequences.
First, you are obviously in disagreement with most of the civilized world here. Second, I hope you mean after the drug manufacturer takes the time to explain the risks (or the fact that the product is so new that nobody knows the risks), so that you agree you understand and assume full responsibility should it kill you or maim you for life, not some fine print they have you sign along with the credit card receipt during the purchase. Tesla doesn't explain any risks, nor make you sign any waivers. The most they do is put up a quick click-through saying you must be ready to take over at any time and are solely responsible for anything that happens (though nothing about the risks of using the system). Unless, of course, you're OK with a "you must stop taking this drug if you think it will harm you, and are solely responsible if it does" click-through on all drug trials, with no additional information.
 
And that's exactly why it's dangerous. It's so freaking awesome that eventually you are going to be lulled into a moment of inattention, and if something bad happens at that moment, it's game over. As more and more people begin to use it, the probability of a bad event will increase and eventually will happen. At that point the system will be shut down.
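A rough back-of-the-envelope sketch of that compounding-exposure argument; every number here is made up purely for illustration (the one-in-a-million coincidence rate is an assumption, not a measured figure):

```python
def p_at_least_one_event(p_per_drive: float, total_drives: int) -> float:
    """Probability of at least one bad event, assuming independent
    drives with an identical per-drive risk."""
    return 1.0 - (1.0 - p_per_drive) ** total_drives

# Suppose a lapse of attention coincides with an AP failure once in a
# million drives (a made-up figure):
p = 1e-6
for fleet_drives in (100_000, 1_000_000, 10_000_000):
    print(f"{fleet_drives:>10,} drives -> "
          f"{p_at_least_one_event(p, fleet_drives):.1%}")
# Roughly 9.5%, 63.2%, and ~100%: rare per drive, near-certain at scale.
```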
Did you say the same when the radio in your car came out? It's so dangerous!!

You could get distracted listening to a podcast about autonomous cars, and actually believe that your car is autonomous. The government must shut down all radios!
 
Uh oh, looks like someone just violated their NDA/security clearance... The people I knew/know on Aegis, Phalanx, and Iron Dome don't exactly go around telling people.
"The people" that you're talking about, is yourself right? ;)

Also, that comment is not exactly true, and doesn't apply in this context.

There are projects which are classified, and there are parts of projects that are classified. Aegis falls under the latter (go google Aegis; there's a ton of [what I assume is outdated] stuff on Wikipedia).
 
Apparently YOU didn't do well in Ethics 101, because the trolley problem's conclusion is entirely debatable. The conclusion one takes from a survey-level ethics course, if one is paying attention, is that ethics as a field is full of professionals who completely disagree with each other. Deontology, consequentialism, noncognitivism, etc. etc. etc.

Having said that, I agree with you. If autopilot saves lives on the whole, then damnit, let's use it and stop debating.

I said this wasn't the trolley car problem; it's easier. So you do agree with me, because we are saying the same thing.
 
"The people" that you're talking about, is yourself right? ;)

Also, that comment is not exactly true, and doesn't apply in this context.

There are projects which are classified, and there are parts of projects that are classified. Aegis falls under the latter (go google Aegis; there's a ton of [what I assume is outdated] stuff on Wikipedia).

Classified info is a separate issue. When I worked at General Atomics, we were told strictly and regularly not to publicize that we worked there, including guidance to hide our badges anytime we were out in public.
 
The Solution

  • Create a Scenario Matrix that cars will be officially tested against. Ensure this matrix covers a minimum set of scenarios that ensures driver and public safety. Gather folks from these companies, automakers, the insurance industry, traffic engineering, NHTSA, and academia, plus people who actually know how to create, design, and test to a massive exception-handling matrix like this, most likely from DoD, NASA, or Boeing. Ensure these standards are met before releasing any updates (a minimal sketch of such a release gate follows this list).
  • Bring that systems engineering experience into these companies. Commercial IT has never adopted most best engineering practices. Yeah, I know they make tons of money and really cool apps, games, and websites. The fact is that commercial IT rarely even looks into exception handling (cases where things do not go as planned), let alone a massive effort like this. That includes identifying the exceptions, designing to them, and testing them. These companies lack the experience in doing this, and their tools don't support it.
  • Stop this massively avoidable process of using customers and the public as guinea pigs. Musk says he needs 6 BILLION miles of driving to collect the data he needs. Look at what that means: innocent and trusting people being used not only to gather the first sets of data, most of which is for ACCIDENTS, but then to regression-test after every system change. The reason for the 6 BILLION miles is that most of the data collected is repeat; they have to drive billions of miles because they are randomly stumbling onto the scenarios. The solution here is to use the matrix described above with simulation and simulators to do most of the discovery and testing, augmented with test tracks and controlled public driving. (Note: by guinea pigs I mean the folks driving cars with autopilot engaged. Gathering data when they are in control is prudent.)
  • Ensure the black-box data is written out often enough to capture all the data for any event (many times a second), or make sure the black box can withstand any crash (a sketch of such a recorder appears at the end of this post). In the McCarthy/Speckman tragedy, Tesla said they have no data on the crash. That is inexcusable. Also pass regulations that give the proper government organizations access to that data while ensuring it cannot be tampered with beforehand.
  • Investigate the McCarthy/Speckman crash. Determine whether that car contributed to the accident, including any autopilot use, as well as why the battery exploded and caused so much damage so fast. https://www.linkedin.com/pulse/how-much-responsibility-does-tesla-have-tragedy-michael-dekort
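A minimal sketch of the release gate described in the first bullet. The scenario names, the Scenario class, and run_scenario() are hypothetical stand-ins, not any real Tesla or NHTSA interface:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    name: str
    description: str

# A real matrix would hold thousands of entries covering exception cases.
SCENARIO_MATRIX = [
    Scenario("faded-lane-lines", "Lane markings degraded below detection"),
    Scenario("stopped-vehicle-ahead", "Stationary car in travel lane at highway speed"),
    Scenario("exit-ramp-gore", "Diverging lanes with a gore-point barrier"),
]

def run_scenario(build: str, scenario: Scenario) -> bool:
    """Placeholder: a real harness would drive the candidate build through
    a physics/sensor simulation of this scenario. Pretends every scenario
    passes so the sketch executes."""
    return True

def gate_release(build: str) -> bool:
    """Block the release unless the build passes the entire matrix."""
    failures = [s.name for s in SCENARIO_MATRIX if not run_scenario(build, s)]
    if failures:
        print(f"Build {build} REJECTED; failed: {', '.join(failures)}")
        return False
    print(f"Build {build} cleared the scenario matrix")
    return True

gate_release("ap2-candidate")
```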
I am a former systems engineer and program and engineering manager for Lockheed Martin. There I worked on aircraft simulation and the Aegis Weapon System, and was software engineering manager for all of NORAD. I was also the whistleblower who raised the Deepwater Program issues (IEEE Xplore full-text PDF).
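And a sketch of the frequent-write black box from the fourth bullet; the field names, file format, and 100 ms flush interval are assumptions for illustration, not Tesla's actual design:

```python
import json
import time
from collections import deque

class EventRecorder:
    """Buffers telemetry at a high rate and persists it frequently enough
    that a crash loses at most a fraction of a second of data."""

    def __init__(self, path: str, flush_interval_s: float = 0.1):
        self.path = path
        self.flush_interval_s = flush_interval_s
        self.buffer: deque = deque()
        self._last_flush = time.monotonic()

    def record(self, frame: dict) -> None:
        # Called many times a second with the latest vehicle state.
        frame["t"] = time.monotonic()
        self.buffer.append(frame)
        if time.monotonic() - self._last_flush >= self.flush_interval_s:
            self.flush()

    def flush(self) -> None:
        # Append buffered frames to durable storage (here, a local file).
        with open(self.path, "a") as f:
            while self.buffer:
                f.write(json.dumps(self.buffer.popleft()) + "\n")
        self._last_flush = time.monotonic()

rec = EventRecorder("blackbox.jsonl")
rec.record({"speed_mps": 28.1, "ap_engaged": True, "steer_deg": -1.2})
rec.flush()
```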

I haven't read the entire thread, so apologies if this has already been said. But I do want to point out that this proposed "solution" is exactly the opposite of where the entire computing industry is going. This is not getting adopted, period. Nor is it a better approach to what Tesla and other companies are using for driver assists and autonomous driving. In fact, in the long-run, it's significantly worse.

Using scenarios created by experts got us to Deep Blue defeating Kasparov, but that approach "maxed out" somewhere around there. Most complex systems today are moving to machine learning: using massive amounts of data to train systems with very little human guidance. Yes, this approach requires many guinea pigs to generate the data, but there's compelling evidence that, in the long run, it works much, much better than scenarios created by experts.

While the data is being generated to train the system, we do need to keep the guinea pigs (i.e., us Tesla drivers) safe. However, AP is not the death trap that some people make it out to be. If it were, we should be seeing AP crashes every day. We are not. We hear of one every several months or so. That suggests either that the system works quite well or that the people who claim the average Joe can't handle the system aren't giving the aforementioned Joe enough credit. In either case, this is an Everest being made of a molehill. Not saying AP is perfect. But it's far from the menace some people make it out to be.
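A toy illustration of that data-driven approach, with synthetic data and feature names that are purely my own assumptions (nothing here reflects how Tesla actually trains AP):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend fleet logs: [lane_offset_m, closing_speed_mps] per frame,
# labeled with whether the human driver intervened (synthetic labels).
X = rng.normal(size=(10_000, 2))
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)
print("training accuracy:", model.score(X, y))

# The point: the "rules" live in the learned weights and improve as more
# miles of labeled data arrive; no expert ever enumerates the scenarios.
```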
 
I haven't read the entire thread, so apologies if this has already been said. But I do want to point out that this proposed "solution" is exactly the opposite of where the entire computing industry is going. This is not getting adopted, period. Nor is it a better approach to what Tesla and other companies are using for driver assists and autonomous driving. In fact, in the long-run, it's significantly worse.

Using scenarios created by experts got us to Deep Blue defeating Kasparov, but that approach "maxed out" somewhere around there. Most complex systems today are moving to machine learning: using massive amounts of data to train systems with very little human guidance. Yes, this approach requires many guinea pigs to generate the data, but there's compelling evidence that, in the long run, it works much, much better than scenarios created by experts.

While the data is being generated to train the system, we do need to keep the guinea pigs (i.e., us Tesla drivers) safe. However, AP is not the death trap that some people make it out to be. If it were, we should be seeing AP crashes every day. We are not. We hear of one every several months or so. That suggests either that the system works quite well or that the people who claim the average Joe can't handle the system aren't giving the aforementioned Joe enough credit. In either case, this is an Everest being made of a molehill. Not saying AP is perfect. But it's far from the menace some people make it out to be.

Do keep in mind that some "Joes" are idiots... Not any reading this, mind you, but they are out there.
 
I've been using AP1 for the last week. It's not perfect by any means. I can't believe people were seen on YouTube getting out of the driver's seat and jumping into the back seat. There's no question that you have to pay attention and remain alert and ready to take over. Tesla should probably make people take a short online tutorial before enabling their AP.
 
Do keep in mind that some "Joes" are idiots... Not any reading this, mind you, but they are out there.

Well, see, my point is that even with those "idiots," we only hear of an AP crash once every several months or so. To me, that means AP is either fairly safe, even in the hands of an idiot, or those "idiots" are somehow managing to deal with AP's limitations. I may change my thoughts on this when the Model 3 gets into the hands of even more average Joes but, for now, the argument that AP is unsafe in the hands of the average Joe doesn't make sense to me based on the evidence.

Or maybe it's just dumb luck that's keeping them safe or the patron saint of AP is particularly powerful or the stars are aligned for AP or whatever. But I don't put much stock in that sort of thing, especially over millions of miles and long periods of time.
 
Did you say the same when the radio in your car came out? It's so dangerous!!

You could get distracted listening to a podcast about autonomous cars, and actually believe that your car is autonomous. The government must shut down all radios!
Except that a radio won't crash your car if it loses signal. Autopilot will if it loses its bearings.

I've been using AP1 for the last week. It's not perfect by any means. I can't believe people were seen on YouTube getting out of the driver's seat and jumping into the back seat. There's no question that you have to pay attention and remain alert and ready to take over. Tesla should probably make people take a short online tutorial before enabling their AP.

^^ this

Why not play an instructional video on the 17" screen and not allow the driver to accept conditions of use until after the video is watched from beginning to end? Tesla should do this. How easy.
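A minimal sketch of that gating idea, assuming a hypothetical player progress callback; none of these class or method names are Tesla's software:

```python
class TutorialGate:
    """The "accept" control stays disabled until the tutorial video
    reports that it has actually played through to the end."""

    def __init__(self, video_length_s: float):
        self.video_length_s = video_length_s
        self.watched_s = 0.0

    def on_playback_progress(self, position_s: float) -> None:
        # Track the furthest point played. A stricter version would only
        # advance this during real playback, so seeking ahead can't cheat.
        self.watched_s = max(self.watched_s, position_s)

    def can_accept_terms(self) -> bool:
        return self.watched_s >= self.video_length_s

gate = TutorialGate(video_length_s=180.0)
gate.on_playback_progress(90.0)
assert not gate.can_accept_terms()   # halfway through: still locked
gate.on_playback_progress(180.0)
assert gate.can_accept_terms()       # video finished: accept unlocks
```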
 
I wrote something up earlier and it was all lost due to a careless keystroke and a browser setting that somehow became disabled. :(
Very long thread, probably already mentioned, but this is not a regression failure, as the software was completely rewritten for the new hardware.
If Apple released an iPhone 8 w/totally different hardware, rewrote iOS from the ground up, and called it iOS 11 (10.2.1 is the latest right now), but iOS 11 shipped with a whole bunch of major basic features broken or totally missing and was extremely unreliable vs. previous iOS versions, would that be excusable? Is there no regression?

While I sort of see the OP's sentiment about "using Tesla owners as test subjects," I don't think it's as bad as you make it out to be. I do think learning from the real world is A LOT better than testing in a lab. As with any software, there are staged/phased releases that come out with limitations and caveats, and the users report customer-found defects and issues. Yes, lives are the cost here, but only if the system is used recklessly and irresponsibly....
I sort of agree with the notion that AP2 auto-steer is terrible at this point and should've been tested more internally before using live subjects. There is a fine line to draw here between releasing a beta product and releasing something really not stable, expecting to learn from "incidents" as opposed to safe miles. Good discussion overall.
I disagree. AP1 on day 1 was far more reliable and predictable than AP2 is months into its releases. Day-1 AP1's major problem was diving for exit ramps and turn lanes, a predictable and avoidable behavior. AP2 does all of this, plus it loses the lanes completely, drifts into oncoming traffic, locks onto a lane and drives into oncoming traffic, dives for barriers and other vehicles, etc. There's really no comparison. AP1 two months into its releases was infinitely more usable than AP2 two months into its releases, too. AP1 today, IMO, is less usable than previous versions due to the arbitrary speed restrictions, excessive nagging, and purely punitive "features," but that's another discussion entirely.
I finally found a quote I was looking for, from "Google's self-driving cars rack up 3 million simulated miles every day":
Before it rolls out any code changes to its production cars (22 Lexus SUVs and 33 of its prototype cars, split between fleets in Mountain View and Austin), it "re-drives" its entire driving history of more than 2 million miles to make sure everything works as expected. Then, once it finally goes live, Google continues to test its code with 10-15,000 miles of autonomous driving each week.
Sure, Tesla definitely had internal testing of AP2 before release. But if they had had an extensive regression test suite (one that some baseline like AP1 passed), done something like the above, and NOT allowed AP2 code to reach and be activated on customer cars until such a suite passed, then we probably wouldn't have what appears to be very broken functionality (see the sketch at the end of this post).

Did Tesla do something like the above? Probably not.

I can kinda see where the OP's coming from re: his mentions of NHTSA. Given that Joshua Brown died in May 2016 and AP2 hardware, I believe, rolled out in Oct 2016 w/no working AP software, virtually all of the NHTSA investigation would've had to focus on AP1 hardware and software, which is obviously not the same as AP2.
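A sketch of that "re-drive the logged history" regression gate; the log format and both functions are invented for illustration and are not Google's or Tesla's actual tooling:

```python
from typing import Iterable

def replay(build: str, drive_log: dict) -> list:
    """Placeholder: a real replayer would feed the logged sensor frames to
    the candidate build and diff its outputs against the safe baseline,
    returning divergences (e.g., unplanned lane departures). Reports none
    here so the sketch executes."""
    return []

def regression_gate(build: str, history: Iterable) -> bool:
    """Refuse to ship until the candidate re-drives every logged mile
    without diverging from known-safe behavior."""
    for drive in history:
        divergences = replay(build, drive)
        if divergences:
            print(f"{build} diverged on drive {drive['id']}: {divergences}")
            return False
    print(f"{build} re-drove the full history cleanly")
    return True

regression_gate("ap2-candidate", [{"id": 1}, {"id": 2}])
```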
 
The fact that Mobileye was purchased for $15 billion suggests that Intel walked away with most of the goodness of "Tesla AP".

It also would seem that there was a bidding war for Mobileye, considering the huge valuation. Was Tesla in that bidding war?

Musk is the only tech CEO who could get away with suggesting that the short-term problem is that Mobileye would not let their system be used to train the Tesla/Nvidia system. That is some terrible planning and execution of a critical Tesla feature.

"I'm hiring my inexperienced son to replace you. I'm planning on you sticking around for a year to train my son to do your job". What could go wrong?
 
The fact that Mobileye was purchased for $15 billion suggests that Intel walked away with most of the goodness of "Tesla AP".

It also would seem that there was a bidding war for Mobileye, considering the huge valuation. Was Tesla in that bidding war?

Musk is the only tech CEO who could get away with suggesting that the short-term problem is that Mobileye would not let their system be used to train the Tesla/Nvidia system. That is some terrible planning and execution of a critical Tesla feature.

"I'm hiring my inexperienced son to replace you. I'm planning on you sticking around for a year to train my son to do your job". What could go wrong?

Microsoft has a Qualcomm ARM-based version of Windows Server in internal testing. Consider what's happening in personal computing, high-performance computing, and cloud-based business computing, and then consider what would happen if a lot of Windows servers moved to ARM.

I think the purchase of Mobileye says a lot about its strength, but it also says a lot about Intel's cash pile, what's happening to Intel's market, and the potential value of the autonomous driving and advanced driver-assistance systems market.
 
The fact that Mobileye was purchased for $15 billion suggests that Intel walked away with most of the goodness of "Tesla AP".

Disagree.

A lot of manufacturers use Mobileye, but not one of them is even close to where Tesla was/is with AP1. And there were cases where Mobileye said their system couldn't handle certain situations, yet there were YouTube videos of Teslas handling them correctly (e.g., cross traffic). Also keep in mind that Tesla may have wanted tighter integration of video and radar than Mobileye was going to deliver. So, while it's true that Tesla built on the Mobileye platform, Tesla also made significant improvements to it. I wouldn't blame Tesla for thinking they could do better (and I expect them to do better over time), given that Mobileye wasn't moving fast enough for Tesla's FSD goals.

Intel paying $15B for Mobileye is simply a sign of how desperate Intel is not to completely miss the boat on autonomous driving, having previously completely missed the boat on cell-phone processors, with every sign indicating that Nvidia, and to a lesser extent AMD, are better positioned in the autonomous-vehicle market.
 
Why not play an instructional video on the 17" screen and not allow the driver to accept conditions of use until after the video is watched from beginning to end? Tesla should do this. How easy.
Easy but, unfortunately, too easy: I fear you've assumed that exactly one driver will ever be associated with a specific vehicle. I'm not sure how to get around that.