This is why Tesla made it a point to remind its drivers to always keep their hands on the wheel when AP is engaged. They have gone out of their way to make the car give warnings, and in some instances it will stop the vehicle if it doesn't detect engagement from the driver. This is also why Level 3 systems monitor the driver's eyes. The Tesla is not at Level 5 self-driving yet, so hands on the wheel are imperative.
We all know (fear) how the media will play this one out. Sad.
An Update on Last Week’s Accident
[snip]
Autopilot had navigated this stretch of road tens of thousands of times, so the failure this time must have been due to some unusual circumstance, and I wonder what that might have been.
I agree. As much as I appreciate Tesla, I feel this is a play on words. I would have been a lot more assured if the blog post said 'AP was showing warnings moments prior to the collision' but instead they say AP warnings were shown 'earlier in the drive'.
If AP was not actually showing visual or audible warnings moments prior to the collision, then I'm not sure why providing this 'earlier in the drive' information is relevant to the accident. AP could have shown a warning 5 km before the accident spot, but that doesn't help Tesla's case. Also, saying that the driver had 5 seconds of no hands on the wheel may not help if AP never prompted the driver to put their hands on the wheel in those 5 seconds. Just my thoughts after reading that update.
Edit: However, if the intent of the Tesla post was to convey that AP warnings were shown and that for a full 5 seconds the driver didn't respond to those warnings before the accident, then this is a non-issue for the company. If that is the case, then the post may have to be re-written to make it clear.
Agree with you on this. But I think Tesla knew quite quickly the data they are reporting now. They seemed to have reams of info on all the "safe" miles logged on this stretch. Friday, after close of business, is their preferred time for bad news.
The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.
I agree that Tesla's word choice here is strange. I see two distinct ways of reading it. Were those 6 warnings immediately before the crash? Or minutes earlier? I'm admittedly biased to think the latter, but I can see the former option too. I hope we get clarification, because we shouldn't be wondering what "earlier in the drive" means.
On the whole "the driver should have been driving" bull case, I get it. I acknowledge that Tesla has been hyper-communicative in stressing that drivers still need to be alert with Autopilot on. I feel bad for the victim, but I also think Tesla does a pretty good job of setting expectations for users.
The problem is that these events do impact public perception (push notification from the Wall Street Journal). In the long run, that's damaging to the industry. In the short run, it serves as a reminder to shareholders that "autopilot" does not mean "autonomous."
The optics are extremely bad. The victim had complained about autopilot malfunctions at that exact barrier. A big autopilot update had been pushed a few days prior. A guy called in to the official Tesla podcast and complained about a very similar issue with autopilot. All of this in the shadow of Elaine's passing.
Meanwhile, Google and GM have been aggressively pushing LIDAR as a standard. They've both had coordinated media pushes that are trying to brand LIDAR as a safe option. Both have been making moves to get level 4 autonomous vehicles on the road before 2020. They are going to fight hard for the perception that those spinning things on the top of a car are for safety.
I don't think there need to be legal, civil, or regulatory consequences for this to be damaging to TSLA.
Why did he continue to use Autopilot at the exact location he knew it was malfunctioning? I just don't understand. What am I missing? If you know Autopilot isn't working in a particular location, then why use it at that location? Shouldn't you at least be extra vigilant?
I'm not sure what Tesla is trying to say, but according to earlier reports, the driver had complained about Autopilot trying to steer him towards that barrier on several previous occasions. If that's true, why would the driver trust Autopilot to work properly at the time the crash occurred? Maybe Tesla was waiting to analyze the control module that was shown being recovered by CHP investigators yesterday.
It's not clear what Tesla is trying to say. Are they suggesting the driver should have taken control because there was a barrier ahead and visibility was clear for 150 m? Last week, Tesla said thousands of cars traveled the same route on Autopilot. Now it's trying to paint the driver negatively for getting nag warnings in the past? Six seconds is well within the nag limit, so what's the issue? How would the driver know that AP would not handle that situation? If drivers have to be 200% alert to take control at all instants, then what's the point?
Some late-night thoughts before I go to sleep. The accident was already rumoured to have been caused by Autopilot; this confirms those rumours. My understanding is that some of the damage to the share price was already done. I see it dropping further, but recalls happen, accidents happen. This is the nature of the auto business.
Now the more important point. We know Autopilot can't be perfect; a fatal accident was going to come sooner or later. A serious discussion needs to be had about choosing to use this technology to decrease accident and death rates. Hopefully people realize that is what this software does.
The software is 1.4x safer than human drivers.
Everyone takes this statement as gospel, but no one has ever questioned the methodology or data used to reach this conclusion. I think it's far past time we were actually open about the math used to get to this calculation.
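For what it's worth, here's a rough sketch of how a "1.4x safer" multiple is typically derived from crash-rate data. Every number below is a made-up placeholder, not Tesla's or NHTSA's actual figures; the point is that the result depends entirely on which miles and which crashes get counted on each side.

```python
def crashes_per_million_miles(crashes, miles):
    """Crash rate normalized to one million miles driven."""
    return crashes / (miles / 1_000_000)

# Hypothetical inputs. The methodology question is whether the comparison
# pairs like with like: highway-only AP miles vs. all human miles, airbag
# deployments vs. police-reported crashes, new luxury cars vs. the whole fleet.
human_rate = crashes_per_million_miles(crashes=2.0, miles=1_000_000)
ap_rate = crashes_per_million_miles(crashes=1.0, miles=700_000)

safety_multiple = human_rate / ap_rate
print(f"{safety_multiple:.1f}x safer")  # 1.4x with these made-up inputs
```

If the denominator miles for Autopilot are mostly easy highway driving while the human baseline includes city streets and bad weather, the same arithmetic produces a flattering multiple without the software actually being safer in comparable conditions.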
Like I said, a big update to Autopilot had happened days prior. Maybe he'd used it going to work a few days and thought the update fixed it? I don't know. But again, in the big picture, it's awful. How many drivers are having issues like Walter's? ABC News is airing a video of a guy demonstrating Autopilot glitches. People are calling in to the official podcast to complain. It's going to make some people ask, "Is Autopilot safe?"