Welcome to Tesla Motors Club

NHTSA Probe into Tesla Self Driving moving “fast”

Some selection bias? Some? There's nothing but selection bias in AP choice, not "some."
I use AP for about 80% of my driving. Last fall a young deer ran in front of my car and AP slammed on the brakes before I saw it. We missed the deer by inches. If it wasn't for AP I would have certainly hit it. By the time I saw the deer it was less than 20 feet away. The car could not have physically stopped in time. So for me, at least, it's not all selection bias.
 
I thought the investigation was about Autopilot causing Teslas to hit stationary vehicles. In that case, crashes like the one in the SF Bay Bridge tunnel wouldn't factor in.

I think we can believe that autopilot can save lives and drive safer than humans while still hoping to end the many examples of the robot going haywire. I have seen way too many videos of NoA or FSDb nearly crashing the car. If the NHTSA wasn’t investigating then I’d wonder about the organization, frankly. Just my opinion.
 
I use AP for about 80% of my driving. Last fall a young deer ran in front of my car and AP slammed on the brakes before I saw it. We missed the deer by inches. If it wasn't for AP I would have certainly hit it. By the time I saw the deer it was less than 20 feet away. The car could not have physically stopped in time. So for me, at least, it's not all selection bias.
First, I do not deny that AP, and related Tesla safety features (like automatic braking) are beneficial from a safety point of view in situations like those you describe. There are cases, as in your anecdote, where these features can prevent an accident, or at least reduce its severity. There are, however, also cases in which AP can cause an accident. Just in the past month, I've experienced erratic lane changes and the car appearing to point itself head-on into fixed objects. I do not know whether the car would have crashed or not, since I took over from it. Those anecdotes, like yours, go into the analysis. It's only by combining all of this that you get anything resembling scientifically-admissible data, rather than mere anecdotes.

The term "selection bias" has a specific scientific definition, relating to how subjects are selected for inclusion in a study or assigned to experimental conditions. In most cases, a scientific study in medicine or the social sciences should ideally have subjects randomly assigned to experimental vs. control conditions. This isn't always possible, and scientists have ways to try to work around the problem when it's not, but those methods involve a lot of attention to detail, additional measurements, long-term study, etc. Selection bias is made worse if there's a reason to believe that the selection may influence the outcome of a study. For instance, if you wanted to study the effects of taking a vitamin on mental acuity, you would not let the subjects choose which group they were in, since those choices might reflect characteristics that affect the study's measures (motivation to perform well, say), and since knowledge of whether they were taking the vitamin could affect the outcome.

Your last sentence quoted above is meaningless in light of this definition, since you're explicitly speaking about your personal (anecdotal) experience, not a scientific study. Applied to the Tesla safety report referenced earlier, the "experimental" conditions are non-Tesla cars, Teslas driving manually, and Teslas driving on AP. The last two of those conditions are defined largely by driver actions; they are not assigned randomly. As I and others (including you) have said earlier in this thread, AP is more likely to be used in easy driving conditions, which will make it seem safer than manual driving. The Tesla safety report makes no attempt to disentangle the huge amount of selection bias at play in defining those two conditions. The non-Tesla vs. Tesla distinction is also rife with selection bias, but it's probably not as bad as the AP vs. manual driving selection bias, since it's not done on a moment-by-moment basis in response to the sort of road conditions that will themselves affect the measure under study. There are also many variables between the non-Tesla and Tesla conditions (cars' driving dynamics, prevalence of non-AP safety features like forward collision warning, demographics of people who buy non-Teslas vs. Teslas, etc.). These differences make it impossible, based on the data in the report, to do more than speculate about what might cause the differences shown in the report between the non-Tesla and Tesla vehicles.

Put another way, selection bias enables the variable under study (such as Autopilot use vs. manual driving) to be mixed up with other variables (such as road conditions, weather, speed, highway vs. non-highway driving, etc.) in such a way that it's impossible to determine which variable(s) are responsible for the observed differences in number of crashes. We cannot conclude that AP is safer than manual driving because the observed differences could be due, in whole or in part, to these confounding variables. That said, we also cannot conclude that manual driving is safer than AP. We just can't conclude anything. We simply cannot draw any conclusions about the differences observed in Tesla's safety report because of the selection bias.
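To make the confounding concrete, here is a toy simulation. Every number in it is invented for illustration (these are not Tesla figures): crash risk per mile depends only on road conditions and AP itself changes nothing, yet because drivers engage AP mostly on easy miles, AP's raw crash rate comes out well below manual driving's.

```python
import random

random.seed(0)

# Invented numbers for illustration only -- not real crash data.
# Crash risk depends solely on conditions; AP itself changes nothing.
P_CRASH = {"easy": 1e-4, "hard": 1e-3}   # assumed crash risk per mile
P_ENGAGE = {"easy": 0.8, "hard": 0.1}    # assumed chance the driver uses AP

miles = {"AP": 0, "manual": 0}
crashes = {"AP": 0, "manual": 0}
for _ in range(1_000_000):
    cond = random.choice(["easy", "hard"])
    # The driver, not a coin flip over all miles, decides when to engage AP.
    mode = "AP" if random.random() < P_ENGAGE[cond] else "manual"
    miles[mode] += 1
    if random.random() < P_CRASH[cond]:
        crashes[mode] += 1

for mode in ("AP", "manual"):
    print(f"{mode}: {crashes[mode] / miles[mode] * 1e6:.0f} crashes per million miles")
```

Despite identical per-condition risk, the AP bin prints a severalfold lower crash rate than the manual bin, purely because of who chose to engage it and when. That is the confounding a naive AP-vs.-manual comparison cannot escape.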

Unfortunately, most people have a limited understanding of this, and may misinterpret Tesla's safety report to mean that AP is safer than manual driving. I can't say definitively that AP is not safer than manual driving, but I can say that Tesla's safety report does not address that question. That is, AFAIK there exists inadequate data on this question. Considered as a scientific study, Tesla's report is poor at best and completely worthless at worst -- probably closer to the latter than the former, although it might have some value if combined with other data and subjected to clever statistical analyses. Considered as propaganda, Tesla's safety report exploits people's limited understanding of science to make them draw an unjustified conclusion and make them think it's backed up by science. Thus, it may well be effective propaganda, but it's not good science.
 
There's absolutely no way human highway driving can be safer than AP... if I had a dollar for every human that has drifted into my lane, I'd be a millionaire.
You're probably right that AP is safer than a human in a certain limited set of conditions. Those conditions are largely, but not entirely, what a human would consider "easy" driving conditions. AP works well on a typical highway, but it has certain known problems, like phantom braking, that don't cause humans problems. AP doesn't work as well on city streets. Of course, FSD is designed for city driving, but IMHO it still does a poor job of it, compared to a typical human. Once outside the range of conditions that are good for AP, it falls down in a big way. There are a couple of big issues here.

First, to the best of my knowledge, there is no good way to measure AP vs. human driving. Tesla's safety report is, as I've said, worthless for this; there are too many confounding variables caused by selection bias. Ideally, you'd need a way to judge AP vs. human driving in a variety of well-defined conditions. This might be achievable in a study in which cars are outfitted with cameras and drivers are told to engage vs. not engage AP, and researchers rate the conditions (weather, traffic, etc.). Absent that, it might be possible to combine data sources (car telematics plus weather reports plus Waze reports about road hazards, etc.), but if the driver is selecting whether or not to use AP, there's always the possibility of the driver giving AP the easier conditions, even if the reason for the driver considering something easier vs. harder isn't obvious from the data. Without such studies, we have no way of knowing precisely when AP works best and when it doesn't. This would be very useful information for developing AP and for writing laws about when such systems may legally be used. Maybe Tesla has such data in-house, but if they do, then AFAIK they haven't shared the data publicly.

Second, people tend to become complacent about technologies that are good, but not 100% perfect. There have been numerous well-publicized crashes in which AP has been in control, and in which AP proponents have blamed the driver because the driver should have taken over control of the car well before the accident happened. This criticism of the driver is well-founded; given current technology, the human driver is legally responsible for what the car does, whether or not AP is engaged. The trouble is that human psychology doesn't work that way. To a human brain, if something happens a certain way 100 times, we expect it to happen the same way for the next 100 times. When that something is safety-related, we become complacent and don't pay as much attention as we should. As a matter of public policy, you can either blame the complacent human (which is 100% guaranteed to result in deaths, since you're not addressing the reality of human psychology) or you can design the technologies with human psychology in mind (to monitor for signs of inattentiveness, to reduce the likelihood of inattentiveness developing, etc.). These two approaches aren't mutually exclusive. Tesla fans sometimes lean heavily on the first, though; and Tesla as a company is being dragged kicking and screaming into doing anything more than the bare minimum for the second. Some non-Tesla AP-like systems have better attention-monitoring technology and/or they limit where their systems operate.
 
First, to the best of my knowledge, there is no good way to measure AP vs. human driving. Tesla's safety report is, as I've said, worthless for this; there are too many confounding variables caused by selection bias. Ideally, you'd need a way to judge AP vs. human driving in a variety of well-defined conditions. [...]

I think you're getting lost in the weeds of scientific perfection a bit. If you zoom out to the big picture, all that really matters is whether the existence and use of AP on a car makes it safer than it not existing or never being used, in the aggregate across a large population of comparable drivers. This is pretty easy to measure, and Tesla's existing safety reports come close to giving us that data (but not quite!).

Ideally, you'd take the whole Tesla fleet as data, because otherwise there's too much bias in the differences between Tesla and non-Tesla drivers in general (income, region, various other personal attributes). You'd separate the Tesla fleet into two bins: cars which have AP and meet some minimum bar of "using it", vs. everyone else (non-AP cars, cars that never use it, or cars that use it so rarely it's not worth counting; maybe make the cutoff something like <3% of total miles driven this year). Then you'd compare the total accident rate (or injury/death rates, or financial losses; all are useful metrics) per miles driven between the two bins, ignoring whether AP was engaged or not at the time of the incident. This neatly sidesteps a lot of the issues about selecting when to use it, complacency, over-reliance making you terrible when you don't use it, etc. It gets us straight down to the brass tacks: is having and using the software a net benefit to end users' overall accident rates in the aggregate?
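As a sketch of that binning idea (the record fields, the sample numbers, and the 3% cutoff are all placeholders standing in for the proposal above, not any real Tesla dataset):

```python
from dataclasses import dataclass

# Hypothetical per-car-per-year records; not a real Tesla data schema.
@dataclass
class CarYear:
    miles: float     # total miles driven this year
    ap_miles: float  # miles with Autopilot engaged
    accidents: int   # all accidents, regardless of AP engagement at the time

AP_USE_CUTOFF = 0.03  # <3% of miles on AP => treat the car as a non-user

def bin_rates(fleet):
    """Accidents per million miles for AP users vs. everyone else,
    ignoring whether AP was engaged at the moment of each accident."""
    totals = {"users": [0.0, 0], "non_users": [0.0, 0]}
    for car in fleet:
        is_user = car.miles and car.ap_miles / car.miles >= AP_USE_CUTOFF
        key = "users" if is_user else "non_users"
        totals[key][0] += car.miles
        totals[key][1] += car.accidents
    return {k: (acc / m * 1e6 if m else 0.0) for k, (m, acc) in totals.items()}

fleet = [
    CarYear(12_000, 4_000, 0),  # regular AP user
    CarYear(9_000, 100, 1),     # ~1% AP use -> non-user bin
    CarYear(15_000, 0, 1),      # never uses AP
]
print(bin_rates(fleet))
```

The point of the design is in the last field: accidents are counted against the car's bin whether or not AP was on at the time, which is exactly what keeps moment-by-moment engagement choices from contaminating the comparison.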

Unfortunately Tesla's own safety report doesn't quite give us this data. Instead they break up the incidents based on whether AP was engaged at the time or not, as opposed to breaking up the cars into bins based on whether AP is being used at least part time on this car. I think Tesla's data at least hints that an analysis done my way would come out in AP's favor, but someone would have to publish more/better data to know.
 
One area where driver assist is safer than a human is distracted driving. NHTSA stats show 3,142 deaths in 2020 from distracted driving. And here are some more stats:
  • The National Safety Council reports that cell phone use while driving leads to 1.6 million crashes each year.
  • Nearly 390,000 injuries occur each year from accidents caused by texting while driving.
  • Texting while driving is 6x more likely to cause an accident than driving drunk.
  • Answering a text takes away your attention for about five seconds. Traveling at 55 mph, that's enough time to travel the length of a football field.
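For what it's worth, that last figure checks out arithmetically (a US football field is 360 ft including the end zones):

```python
# Distance covered in 5 seconds at 55 mph, in feet.
MPH_TO_FPS = 5280 / 3600           # one mph expressed in feet per second
distance_ft = 55 * MPH_TO_FPS * 5
print(f"{distance_ft:.0f} ft")     # prints "403 ft" -- more than a 360 ft field
```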
Remember people, it's not a perfect system, and will never be accident free. The goal is to reduce accidents and deaths. Save lives. Yes, it's tragic when someone is hurt or killed while using ADAS features. But if fewer people are hurt or killed overall because of it, it's worth it in my book.
 
Elon mentioned at AI Day 2 that rather than wait for the technology to be perfect, we should do what's morally right, which is reduce major injuries and deaths in car accidents. He has done that already, and everyone should be praising him for it instead of waiting for NHTSA approval. No one counts lives saved. For me personally, it has saved three people's lives: one guy on a bike and two pedestrians.
 
Are you telling us that without AP or FSD, you would have KILLED THREE PEOPLE? Like, FSD/AP was literally those persons' only saving grace??
 
Ideally, you'd take the whole Tesla fleet as data, because otherwise there's too much bias in the differences between Tesla and non-Tesla drivers in general (income, region, various other personal attributes). You'd separate the Tesla fleet into two bins: cars which have AP and meet some minimum bar of "using it", vs. everyone else (non-AP cars, cars that never use it, or cars that use it so rarely it's not worth counting; maybe make the cutoff something like <3% of total miles driven this year). Then you'd compare the total accident rate (or injury/death rates, or financial losses; all are useful metrics) per miles driven between the two bins, ignoring whether AP was engaged or not at the time of the incident. This neatly sidesteps a lot of the issues about selecting when to use it, complacency, over-reliance making you terrible when you don't use it, etc. It gets us straight down to the brass tacks: is having and using the software a net benefit to end users' overall accident rates in the aggregate?
The problem with this idea is that there's still massive selection bias. Somebody who seldom or never engages AP might do so because they seldom or never drive on highways, because they're elderly and don't understand or trust the technology, because they love driving fast and weaving between cars, etc. Any of those conditions would tend to increase the accident rate for those drivers.
Elon mentioned at AI Day 2 that rather than wait for the technology to be perfect, we should do what's morally right, which is reduce major injuries and deaths in car accidents. He has done that already, and everyone should be praising him for it instead of waiting for NHTSA approval.
The problem is that, AFAIK, we don't really know that AP, or other technologies like it, really are saving lives. We have anecdotes, and reasonable arguments that AP should improve safety, but I for one haven't seen much solid data. The data I have seen are weak at best.

That said, I am not arguing against the deployment of AP and similar technologies. As a practical matter, we can't know how they'll work until they are deployed. Requiring solid data prior to deployment would therefore be counterproductive. As with so many things in the history of society, progress is made by those who try things first and look at the results later. My objections have to do with the false claims that AP has been proven to improve safety (it hasn't, AFAIK), and to over-reliance on such safety systems, which results in complacency or people doing Just Plain Stupid things (like intentionally taking naps while on AP). We as end users have to understand the limits of the technology and remain vigilant; and Tesla and others who are developing such technologies must give a lot of thought to how they're likely to be both used and abused, and build in safeguards to improve the ratio of use to abuse. I don't think Tesla is doing enough on this last point, and their propaganda about the benefits of AP and FSD is encouraging a lack of vigilance, legal CYA notices notwithstanding.
 
We have NHTSA and IIHS in the US and NCAP in Europe. They are working on new testing protocols for ADAS functions, but regulatory bodies tend to move slow in comparison to the technology they are testing. We'll just have to wait and see.
 
A major injury is no laughing matter... it's basically hell on earth for the rest of your life. Ask anyone who survives being hit by a car.
but you are implying that without AP, you would have absolutely mowed down three pedestrians.

If that's the case, AP/FSD wasn't the issue or non-issue... the issue would have been you not paying FULL AND COMPLETE ATTENTION to the road ahead of you and your surroundings while you were driving and in complete control of the vehicle.