Time to trade ap1...elon says yes

Why did you not read the article I suggested earlier (Robot Cars Can’t Count on Us in an Emergency)? It mentions, for example, data collected by Nauto and Stanford research on how long it takes a human driver to regain control of the car, which is at the heart of the debate over whether Level 2 can ever be safely implemented. Level 2, by the way, requires the driver to regain control of the car at any time with little or even no warning at all (if AP is driving straight into a median divider, even if it doesn't warn you, you have to take over). This is the biggest problem with AP: it requires the driver to babysit it at all times (and of those who don't, some get away with it and some pay for it).
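Just to put numbers on the takeover problem (a back-of-the-envelope sketch of my own, not from the article; the 2-6 second reaction window is an assumption, roughly in line with published takeover studies):

```python
# Rough illustration (assumed numbers): how far the car travels while a
# distracted driver regains control of a Level 2 system.

MPH_TO_MPS = 0.44704  # miles per hour -> meters per second

def takeover_distance_m(speed_mph: float, reaction_s: float) -> float:
    """Distance traveled, in meters, during the driver's takeover delay."""
    return speed_mph * MPH_TO_MPS * reaction_s

for reaction_s in (2.0, 4.0, 6.0):  # assumed takeover latencies in seconds
    d = takeover_distance_m(70, reaction_s)
    print(f"at 70 mph, a {reaction_s:.0f}-second takeover covers ~{d:.0f} m")
# at 70 mph, a 2-second takeover covers ~63 m
# at 70 mph, a 4-second takeover covers ~125 m
# at 70 mph, a 6-second takeover covers ~188 m
```

At highway speed there is not much road left between "the system gives up" and the median divider.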

That NY Times article focuses primarily on Google's efforts to test Level 5 cars, and there is no data at all on Autopilot. Level 5 testing is a very different situation from Autopilot, where the driver is instructed to keep hands on the wheel and maintain control of the vehicle at all times, and there is a nag system set up to help ensure compliance.

In fact, the NHTSA investigated this very issue and concluded in its Autopilot report that Tesla's instructions to drivers and nag system had addressed the issue of driver distraction to its satisfaction:

The potential for driver misuse was evaluated as part of Tesla’s design process and solutions were tested, validated, and incorporated into the wide release of the product. It appears that Tesla’s evaluation of driver misuse and its resulting actions addressed the unreasonable risk to safety that may be presented by such misuse. https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF (page 10, section 5.3)
According to the article, the Nauto researchers you mention point out that driver distraction is a huge issue with normal driving, and their data suggested that "there was evidence that the inattention of human drivers was a factor in half of the approximately 40,000 traffic fatalities in the United States last year." So the driver distraction issue is present whether or not Autopilot is in operation, as others have mentioned.
In any case, the article presents no data on Autopilot, and nothing that contradicts the NHTSA's finding that airbag-deployment crashes decreased by 40% once Autopilot was enabled.
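For what it's worth, the arithmetic behind that headline number is simple. A quick sketch using the before-and-after rates NHTSA reported (about 1.3 airbag-deployment crashes per million miles before Autosteer installation, 0.8 after):

```python
# Rates reported in NHTSA's PE16-007 Autopilot report: airbag-deployment
# crashes per million miles driven, before and after Autosteer installation.
before_autosteer = 1.3
after_autosteer = 0.8

reduction = (before_autosteer - after_autosteer) / before_autosteer
print(f"crash-rate reduction: {reduction:.0%}")  # 38%, the "almost 40%" figure
```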

Bottom line: you seem to be arguing that it is impossible for Tesla or any other manufacturer developing Level 2 systems (GM, Audi, Nissan, Mercedes, to name a few) to build systems as safe as or safer than average drivers.

But that's just a hypothesis that needs to be proven with data. The data analyzed and reported in the NHTSA study seems to be saying exactly the opposite for Autopilot. As more data comes out on all of these systems we will learn more, but so far Autopilot's ability to avoid accidents and save lives looks very promising IMO.
 


Nope, you are misreading the IIHS study you quote. The study actually concludes that FCW/AEB resulted in only a 2% decrease in overall injury accidents.

Take a second look at the portion I bolded above from the passage you cited -- the 42% reduction with AEB/FCW is only for one small subset of accidents (rear-end collisions), and actually only half of those -- where the car with FCW/AEB rear-ended another car, not the other way around.

The 42% number excludes all other types of accidents, including head-on collisions, side swipes, single-car crashes, and collisions in which the car with AEB/FCW was rear-ended (including as a result of unnecessary or excessive braking caused by AEB or FCW).

In fact, the IIHS study found that there was only a 2% decrease in overall injury accidents with AEB plus forward collision warning, and only a 6% decrease in overall accidents. Neither one even reached the level of statistical significance. http://orfe.princeton.edu/~alaink/S...s/IIHS-CicchinoEffectivenessOfCWS-Jan2016.pdf (see pages 1 and 15 and also Table 3 on page 12)
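For anyone unclear on what "failed to reach statistical significance" means in practice, here is a quick sketch (counts and exposure invented for illustration, not taken from the IIHS paper): if the 95% confidence interval on the rate ratio includes 1.0, the data are consistent with no effect at all.

```python
import math

# Hypothetical counts, NOT from the IIHS paper: crashes and exposure
# (insured vehicle years) with and without FCW/AEB.
crashes_with, exposure_with = 940, 100_000
crashes_without, exposure_without = 1_000, 100_000

rate_ratio = (crashes_with / exposure_with) / (crashes_without / exposure_without)

# Approximate 95% confidence interval for a Poisson rate ratio (log scale).
se_log = math.sqrt(1 / crashes_with + 1 / crashes_without)
ci_low = rate_ratio * math.exp(-1.96 * se_log)
ci_high = rate_ratio * math.exp(1.96 * se_log)

print(f"rate ratio {rate_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
# rate ratio 0.94, 95% CI [0.86, 1.03]: a 6% reduction whose interval
# spans 1.0 is not statistically significant.
```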

The 2% reduction in injury accidents from FCW/AEB is paltry, nothing compared to the 40% reduction in serious accidents the NHTSA found after AP was enabled.

It would be very helpful if you would acknowledge your error, because many people seem to make this same mistake and it causes a lot of confusion.

The NHTSA's reported 40% reduction in accidents once AP was enabled is enormous when compared to the relatively modest gains the IIHS found from AEB and FCW technology.

The only other automotive safety technology that has achieved safety gains of this magnitude is the seat belt. After 70 years of refinements since seat belts were introduced (1949), the CDC estimates that seat belts reduce serious injuries and deaths by about half when used (see Seat belt - Wikipedia).

If the NHTSA's numbers are confirmed, the first generation of Autopilot may have achieved roughly the same safety gains as 70 years of seat-belt technology. That is really remarkable, particularly given that the gains took place in a period when overall US traffic fatalities increased from 32,744 (2014) to 37,461 (2016). If the fatality rate per vehicle mile had instead dropped 40%, there would have been only 20,938 fatalities in the U.S. in 2016, roughly 16,500 lives saved in one year alone.
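Here is that counterfactual worked out (the vehicle-miles-traveled figures are approximate FHWA values I am supplying myself, so the output lands near, not exactly on, the numbers above):

```python
# NHTSA fatality counts; FHWA vehicle miles traveled (VMT) in miles.
# The VMT values are rounded assumptions, not from the post.
fatalities_2014, vmt_2014 = 32_744, 3.026e12
fatalities_2016, vmt_2016 = 37_461, 3.174e12

rate_2014 = fatalities_2014 / vmt_2014      # deaths per vehicle mile, 2014
counterfactual_rate = rate_2014 * 0.6       # that rate, reduced by 40%
counterfactual_2016 = counterfactual_rate * vmt_2016

print(f"counterfactual 2016 fatalities: {counterfactual_2016:,.0f}")
print(f"lives saved vs. actual: {fatalities_2016 - counterfactual_2016:,.0f}")
# Roughly 20,600 fatalities and 16,900 lives saved, in the same ballpark
# as the 20,938 and 16,500 figures above.
```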

While the NHTSA analysis is not conclusive (no study is), it certainly is promising and suggests there is very good reason to support the use of Autopilot (and similar systems), not just for convenience but to save lives.




I disagree. The data so far is very promising and with the significantly improved hardware in AP2 and more fleet learning, IMO safety gains are likely to continue to grow until the systems are reliable enough to become fully autonomous.

Just for completeness, even NHTSA doesn't agree with your interpretation of their report. From the article sourced below:
"NHTSA's safety defect investigation of MY2014-2016 Tesla Model S and Model X did not assess the effectiveness of this technology," the agency said in an email to Ars on Wednesday afternoon. "NHTSA performed this cursory comparison of the rates before and after installation of the feature to determine whether models equipped with Autosteer were associated with higher crash rates, which could have indicated that further investigation was necessary."

Tesla has also claimed that its cars have a crash rate 3.7 times lower than average, but as we'll see there's little reason to think that has anything to do with Autopilot.
Source: Sorry Elon Musk, there’s no clear evidence Autopilot saves lives
 
This is an old thread. But I just stumbled across this article that throws some shade at Tesla's and NHTSA's stats:

NHTSA's analysis of Tesla Autopilot safety was bad, but its coverup was worse.

I didn’t go through the arguments in detail, so take it with a grain of salt. But as I’ve said before, it’s pretty easy to lie with statistics. Small changes in methodology or perspective can produce completely different conclusions.
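To illustrate how one methodology choice can flip a conclusion (numbers invented for the example, not taken from the article): aggregate crash rates can favor Autopilot simply because Autopilot miles are mostly highway miles, and highway miles are safer for everyone.

```python
# Invented numbers: millions of miles driven and crash counts, split by
# road type. Highway driving is safer than city driving for everyone.
miles = {
    ("autopilot", "highway"): 90, ("autopilot", "city"): 10,
    ("manual",    "highway"): 30, ("manual",    "city"): 70,
}
crashes = {
    ("autopilot", "highway"): 72, ("autopilot", "city"): 30,
    ("manual",    "highway"): 21, ("manual",    "city"): 175,
}

for mode in ("autopilot", "manual"):
    total = sum(crashes[mode, r] for r in ("highway", "city"))
    exposure = sum(miles[mode, r] for r in ("highway", "city"))
    print(f"{mode}: overall {total / exposure:.2f} crashes per million miles")
    for road in ("highway", "city"):
        rate = crashes[mode, road] / miles[mode, road]
        print(f"  {road}: {rate:.2f} crashes per million miles")
# Overall, autopilot looks better (1.02 vs 1.96), yet it is worse on BOTH
# road types (0.80 vs 0.70 highway, 3.00 vs 2.50 city). The aggregate
# comparison is confounded by where each mode gets driven.
```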
 
I get the focus on agency competence and possible corruption. It is sexy.

I cannot, however, ignore the fact that a guy was using a feature that required his involvement and, while doing so, drove under a semi turning in front of him. In the absence of the feature, would he have been paying sufficient attention to still be with us? Was the accident timing such that, even with no feature and the driver paying full attention, there was no way it was going to be avoided?

I do not mind examining contributory elements but I believe we do ourselves a disservice when we discount the core elements of a discussion.

What happens to the advancement of Autopilot and full self-driving if we clamp down on these incidents and slow the progress of this technology? How many people die during the delay to fully robust and demonstrably (by 90% of most standards) safer autonomous systems? From an overall risk analysis, I suspect we are better off on the path we are currently on. It's kind of like vaccines; I really do not want to be the one that dies from a reaction, but I have no choice if I am committed to herd immunity.
 
With all due respect, I think Tesla is taking a lot of risks it shouldn't. This article discusses Tesla's risky approach:
Dashcam video shows Tesla steering toward lane divider—again
Actually, by taking such risks they may end up setting the whole field back when society has a knee-jerk reaction to some particularly gruesome accident involving children or other sympathetic victims.

Will taking huge risks get us there faster? Of course. But the same is true of medicine: if we decide to experiment on prisoners, or even on desperate people who willingly sign up for the money, does that make it right? How far are we going to take the good of the herd? Would randomly picking a healthy person for parts to save a few people be OK? One person's death can save a dozen people; sounds good for the herd, no? What if that person wants to sell their body for parts? You are still saving a lot of people for one life, so you're only killing a person who is willing to die in exchange for cash from the body-part recipients who are willing to give that cash, and the net result is that more of "the herd" is saved, so the greater good is achieved?

So just as lengthy, controlled medical studies, which explore many other avenues of verification before going to human trials instead of simply experimenting on live patients, may slow down some progress on medications, society has decided that is the better approach for the herd. Autonomous driving should do the same, as Tesla's competitors are doing: heavy on simulation, which may be slower, but with less risk to lives.