
FUD I believe in. There is an enormous leap from a safe Level 2 system to a safe Level 3-5 system.

The funny, ironic thing about that would be if Tesla can legally abscond with...I mean take...I mean recognize all the revenue that has been paid by the thousands of FSD buyers ¯\_(ツ)_/¯

You mean when they claim "L5 feature complete" or when they deploy the first robotaxis? Certainly, if Tesla actually did deploy robotaxis, I think they would legally be able to claim the FSD money that they have received.

"L5 feature complete" would be a different matter. It would probably depend on what "L5 feature complete" actually is. If "L5 feature complete" means that the car can actually self-drive on highways and city streets, albeit with some driver supervision, then I think Tesla could probably claim the FSD money. If "L5 feature complete" is just the current AP with a couple extra features like stop light detection, then no, it would be questionable to claim the money.

Although, we might need to actually get more technical. If Tesla delivers the current FSD features on the website, then I don't see why they could not claim the FSD money from people who purchased the "new" FSD when those FSD features went on the website. But Tesla could not claim the FSD money from FSD buyers who purchased the "old" FSD (back when Tesla was offering Enhanced Autopilot and FSD).
 
If Tesla delivers the current FSD features on the website, then I don't see why they could not claim the FSD money from people who purchased the "new" FSD when those FSD features went on the website. But Tesla could not claim the FSD money from FSD buyers who purchased the "old" FSD (back when Tesla was offering Enhanced Autopilot and FSD).

This. Once they have rolled out the promised L2 features in the city, they can recognize the revenue from FSD sales in 2019 and onwards. It’s not required that it be safe enough to go beyond L2, according to the very clear verbiage on the website.

People buying FSD today are clearly not buying a product that makes the vehicle autonomous - it says so right on the website. I assume the people buying FSD realize this. It is pretty obvious, but who knows, people don't read things I guess.
 
The whole "L5 feature complete, no geofence by end of 2019" assertion by Elon may also refer to what will be in developer builds (that he can run on his own car). I don't recall him saying that it would be released to customers by the end of the year, but maybe I'm misremembering.

In general, I think some of the disconnect between what Elon says and what we as customers see released to our cars is that he has the latest software and hardware in his car, and knows what they're working on and how it seems to be progressing at the moment. That, plus his natural over-optimism on timeframes, where he assumes everyone can work as hard as he is willing to. (I don't believe he is a huckster or charlatan.)

My assumption, ever since we heard that Tesla had hired a chip designer, is that the old hardware would get orphaned at some point, and the new hardware (FSD computer/HW3) would start getting most of the cool features. That is why I bought FSD during the $2K sale (already had EAP). Whether or not we ever get to actual FSD (by some definition), I'm pretty sure that most of the EAP/FSD-type features will run a lot better on the FSD hardware. This difference will evolve over time. I expect in a year, it will be pretty obvious.

I'm also pretty sure that the FSD software has been a separate branch of code for a while now, designed to take full advantage of NNs running on HW3. For people with EAP on HW 2.x, there will probably be dumbed-down versions (simpler NN), and they will strongly encourage people to upgrade. Not sure if the initial Smart Summon is mostly software 1.0 (more heuristics and less NN), but wouldn't be surprised. I sure hope Andrej is the real deal.
 
Consumer Reports: Tesla Must Prove Safety Before Claiming “Self-Driving” Ability

“We’ve heard promises of self-driving vehicles being just around the corner from Tesla before. Claims about the company’s driving automation systems and safety are not backed up by the data, and it seems today’s presentations had more to do with investors than consumers’ safety. We agree that Tesla, and every other car company, has a moral imperative to make transportation safer, and all companies should embrace the most important principle: preventing harm and saving lives.

“But instead of treating the public like guinea pigs, Tesla must clearly demonstrate a driving automation system that is substantially safer than what is available today, based on rigorous evidence that is transparently shared with regulators and consumers, and validated by independent third-parties. In the meantime, the company should focus on making sure that proven crash avoidance technologies on Tesla vehicles, such as automatic emergency braking with pedestrian detection, are as effective as possible.”
 
Good stuff. Generally I dislike Consumer Reports - particularly their car stuff (though I keep a membership since they have a lot of useful content on other things as well).

In this case though, they do seem to hit the mark with this assessment.

“fails to keep the driver engaged exactly when it is needed most.”

Definitely the core issue I worry about with Autopilot.

I just don’t understand why Tesla is not more forthcoming with ALL their data. There is nothing to be afraid of - publishing the data will make any deficiencies that exist more clear, but they’ll already be clear from the press reports on such accidents. And it will have the benefit of educating consumers on the current limitations. It’s win-win to just lay out the full datasets and let people pick them apart and criticize. Free data analysis.

Personally, I really would like to see data on whether (and where) Autopilot is safer - it would inform my use patterns, perhaps. It certainly would result in more vigilance in some situations, if I was aware of where I need to be extra careful. For the most part I feel like I can figure it out (be vigilant all the time is the obvious solution), but data is still nice to see and sometimes my own anecdata could trick me...
 
Well, Elon is promising robotaxis which are by definition Level 4-5 and I think those will be safe. My concern is with Level 2 systems that are advanced enough to cause driver complacency (like what Google tested and abandoned). I think the current implementation of Autopilot is right at the limit of what a Level 2 system can be. It's just bad enough to keep you on your toes and keep sane people from texting, falling asleep, etc.

In some ways I see it as a road we have to travel to get to full autonomy. We could try skipping it, but with the current number of deaths from distracted driving, I would rather have some variant of driver assist than nothing.

The current generation of drivers will only get more distracted. While Tesla's current iteration is good enough to lull drivers but also bad enough to keep most logical people awake and paying attention, it will get better in a short period of time.
 
Yep. Most other companies could easily implement the type of lane keeping that Tesla has (after all AP1 used Mobileye tech which is what most other manufacturers are using). Instead they choose to have a drunk driver mode where the car will bounce between lane lines. Tesla's statistics on safety are misleading since they don't take into account when and where autopilot is being used. For example, maybe users aren't using autopilot during snow storms and maybe accident rates go up during snow storms. It seems like that could skew the results.
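To make that skew concrete, here is a minimal sketch with entirely made-up numbers (not real Tesla or NHTSA data) of how the aggregate accidents-per-mile comparison can favor Autopilot even if it is no safer in any single condition, simply because Autopilot miles are concentrated in the easy conditions:

```python
# Toy illustration (all numbers invented) of how usage mix can skew an
# aggregate "accidents per mile" comparison -- the snow-storm confounder
# described above, i.e. Simpson's paradox.

# hypothetical crash rates per million miles, broken out by condition
rates = {
    "clear": {"autopilot": 0.25, "manual": 0.30},
    "snow":  {"autopilot": 2.00, "manual": 2.20},
}

# hypothetical share of miles driven in each condition
# (Autopilot is rarely engaged in snow; manual driving covers more of it)
mile_share = {
    "autopilot": {"clear": 0.98, "snow": 0.02},
    "manual":    {"clear": 0.80, "snow": 0.20},
}

def aggregate_rate(mode: str) -> float:
    """Mileage-weighted crash rate per million miles for one driving mode."""
    return sum(mile_share[mode][cond] * rates[cond][mode] for cond in rates)

for mode in ("autopilot", "manual"):
    print(f"{mode:9s}: {aggregate_rate(mode):.2f} crashes per million miles")

# The aggregate gap comes out at more than 2x even though, condition for
# condition, the two modes are nearly identical -- the mix of miles does
# the work, not the system.
```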
My real concern is about what happens when the systems become advanced enough to require very infrequent user intervention, say once every 10,000 miles? Will people still be alert if they haven't had to touch the steering wheel in 10,000 miles? I'm skeptical.
I appreciate your concern and I think it is bona fide, but I believe that the statistics show an excellent safety track record. Your theoretical concerns seem to make sense but do not reflect what actually happens on the road.
 
I don’t understand the disagreement with wanting statistics (in an earlier post)? I don’t see the downside of getting some from Tesla (we don’t have that info thus far).

@MaryAnning3 - which statistics?

As I said: entirely plausible to me that routine use of AP could be safer on average, at a fleet level, vs. not using it. But it would be nice to know what the data show.
 
Despite misuses and abuses, it's still safer than driving without it, as Tesla's statistics have shown quarterly.

I entirely disagree, and we're starting to see evidence of this. Tesla's statistics were on an extremely limited set of data, and didn't account for many variables. Now we're seeing pedestrians being struck and killed because of collisions caused by people misusing autopilot. Cherry picking data is just as dangerous as instilling false confidence in users of a system that is absolutely not ready to be trusted to drive itself.
 
Now we're seeing pedestrians being struck and killed because of collisions caused by people misusing autopilot.

Guess I missed this story... “Tesla pedestrian killed” did not show me an obvious hit on Google, though there are investigations.

So, could you link to the source for what you are specifically referencing here?

Details would be interesting in any case.
 
...Cherry picking data..

Cherry-picking means citing the 3 confirmed Autopilot deaths in the USA and then extrapolating that as proof that more use of Autopilot would only mean less safety.

The right way is to look beyond those 3 deaths and ask how many deaths were avoided, which is exactly what Tesla's quarterly report is about.

Same with a pedestrian killed. I am sure pedestrians get killed whether there is Autopilot or not. What matters more is: even if there is an Autopilot pedestrian death, are fewer pedestrians killed with Autopilot than without it?

So far, there's only one reported pedestrian death related to a vehicle with onboard automation hardware, and in that case the system was purposely turned off:

She was run over because Uber wanted to impress their CEO with how smoothly the car ran, so they disconnected the automatic braking system and relied on the human operator to brake, without telling the operator that fact.

Report: Uber self-driving team was preparing for CEO demo before fatal crash

So the pedestrian would likely still be alive if the automation system had been used instead of being turned off to please the CEO!

Autopilot is different. It clearly tells the driver that a human needs to be in charge at all times, through numerous reminders:

1) Owner's Manual
2) Onscreen consent/refusal when first activated for each driver profile
3) On the instrument cluster, each and every time that Autopilot is activated.
 
What matters more is: even if there is an Autopilot pedestrian death, are fewer pedestrians killed with Autopilot than without it?

This is, indeed, the question. Right now, there is no way to know. (When measured using an appropriate metric, controlling for confounding factors, of course - not measuring just the absolute numbers or numbers per mile.) Hopefully Tesla will provide transparent data at some point which allows experts to draw conclusions based on the evidence.
 
...Hopefully Tesla will provide transparent data at some point which allows experts to draw conclusions based on the evidence.

No, there's no way to know, but the pedestrian death rate keeps rising and has reached a 28-year high:

Pedestrian Deaths Reach Highest Level In Decades, Report Says

Autopilot has not been programmed for pedestrians yet, so using today's Autopilot should have no effect on the rate.

The next progression for Autopilot is paying more for the FSD feature, which will have:

1) Traffic light and traffic sign automatic compliance, coming by the end of this year.

But that says nothing about pedestrian avoidance technology.

Then

2) Automatic driving in the city: this should address pedestrian avoidance technology, but who knows when!

Thus, pedestrian avoidance technology statistics are still a long way off.
 
Autopilot has not been programmed for pedestrians yet, so using today's Autopilot should have no effect on the rate.


Not quite true, skip to 2:13 or so. Not that this is a guarantee of avoidance. But you’d expect to see some response in the data to this sort of feature...unless there are other factors which concurrently increase exposure. It’s not really a feature of Autopilot of course - and likely will not be a feature of FSD either - it’ll be included on every Model 3 with HW3 and as much as possible on HW2.5, even without FSD.

Safety features will always be included on all Model 3s up to the limits of the hardware and software, regardless of whether someone has bought FSD. It's going to be interesting to see how they do that - the speculation is a lot of beeps and noises, and eventually disabling AP, for those who did not buy FSD.

You'd also have to compare against vehicles of similar age without the feature, apples-to-apples on all other factors, to see if it increases safety.

Unfortunately we have no data. :(
 
Guess I missed this story... “Tesla pedestrian killed” did not show me an obvious hit on Google, though there are investigations.

The specific one I was thinking of was the San Francisco crash that killed the husband and left the wife in critical condition. Initial reports were that the driver thought AP was enabled, but data shows it was not. To me this story meant she was clearly misusing or intending to misuse the system, and when it came out that it was not enabled that she was probably confused about whether it was even on or not.

Waymo's point here is that it shouldn't be possible to even make this mistake. Either the car is clearly driving or the occupant is clearly driving, and the car will only ask for intervention when it can not drive.

citing the 3 confirmed Autopilot deaths in the USA and then extrapolating that as proof that more use of Autopilot would only mean less safety.

NOBODY IS DOING THAT HERE. You're inferring something that I'm not saying. What I am saying is that in this case Google (Waymo) was exactly right and Tesla isn't. Offering partial autonomy lulls users into a false sense of security, and that sense of security is dangerous in itself. See: China AP crash, January 16, 2016. Florida crash, May 7, 2016. California crash, March 23, 2018. Florida crash, March 1, 2019. And the countless videos of fender benders and collisions where the driver trusted the system to do the "right" thing but it ended up crashing, curb rashing, or otherwise damaging something.

how many deaths were avoided

You can't prove a negative; that's not how the universe we live in works.


What matters more is: even if there is an Autopilot pedestrian death, are fewer pedestrians killed with Autopilot than without it?

It clearly tells the driver that a human needs to be in charge at all times, through numerous reminders:

So you didn't watch the video linked to at all. You should have started off with that.
 
...she was clearly misusing or intending to misuse the system, and when it came out that it was not enabled that she was probably confused about whether it was even on or not...

It must be this report:

San Francisco police investigate if rented Tesla in deadly crash was in 'Autopilot'

None of the reports that I read on this case included any statement from the driver.

The driver's statement was missing. What was reported came from authorities and others, but none of it mentioned what the driver said.

The driver had just used an app to rent a Tesla, so it is unclear whether the driver was familiar with Autopilot.

People assumed she must have thought the Tesla could stop for a red light, but she never actually said that.

Even if the misuse was the case, we have to look at the whole picture.

Should we extrapolate from one misuse and conclude that it would multiply until Autopilot accounts for more than 30,000 annual USA traffic deaths?

That's why Tesla has issued the quarterly report despite misuses.
 
Initial reports were that the driver thought AP was enabled, but data shows it was not. To me this story meant she was clearly misusing or intending to misuse the system, and when it came out that it was not enabled that she was probably confused about whether it was even on or not.

Yeah, I think you're a bit too far ahead with this one. There are (unfortunately) going to be genuine instances of people very familiar with a Tesla using AP hitting pedestrians, eventually, and I think we should wait for those occurrences to come to light (and compare to unaided driving accident rates, appropriately adjusted for all factors, and rates in Teslas with all such features but without AP in use) before drawing firm conclusions that L2 is definitely bad.

I understand (and agree) we don't want unnecessary death and destruction, and clearly unsafe systems should not be deployed. And I don't see anything good about what was happening in the Google videos. The question though, in the end, is when you look at the appropriately adjusted and statistically valid accident rate - is it safer to have an L2 system with driver monitoring, or not?

I can definitely see the reasoning for why it might be very dangerous to have a very capable L2 system. But I don't see how we can answer it without data...and since L3-L5 may take so long, it may not be ok to "just wait" - maybe we can be saving lives in the meantime? I'm not saying we will - I'm just posing the argument for why you might want to allow it - with appropriate driver attentiveness systems, of course. Unfortunately gathering that data will cost lives - as does driving in general. I'm not suggesting gathering data irresponsibly - and there's probably more Tesla could be doing to properly educate their customers...but there's always going to be irresponsible people out there. I'm all for aggressive Autopilot jail, etc., and other methods to prevent abuse.

To be clear, I don't think we should be sacrificing lives in the service of an irresponsible experiment. How to balance that with actually having all the improved safety and realizing lower accident rates...I will leave to the experts...

That's why Tesla has issued the quarterly report despite misuses.

I'm genuinely curious about what sort of information you are able to extract from that quarterly report, and how you draw that conclusion. I have not been able to draw any conclusions with that data. One good question would be to answer whether it is safer to use Autopilot or not.
 
I'm also curious how they define when a car has Autopilot enabled. If I'm driving using Autopilot and I hit the brakes one second before slamming into a gore point, did that crash occur while I was using Autopilot?

You got the point there. There's no question that there are biases and that the report is too crude and not detailed enough for a statistician.

Tesla does not give out the definition, so here's my guess:

"Autopilot enabled" is exactly what you mentioned above. It's enabled until the driver turns it off by any means, including manual braking or manual steering.

That seems unfair: Autopilot was engaged the whole time until the last seconds, and because of the manual intervention, should the death be classified as Autopilot-engaged or not?

That's why, in Q3 2018, Tesla started including "crash-like events": cases where there was no impact while Autopilot was engaged, but Autopilot was manually turned off in the last seconds and the actual collision happened right after.

The unfairness could also cut the other way: a driver could drive manually until a collision was imminent and then turn on Autopilot in the last seconds before the crash. Unfair, but "Autopilot enabled" is exactly what it says, even for a last-seconds activation.
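For what it's worth, one way a reporting rule could resolve both of those edge cases is a look-back window: count a crash against Autopilot if it was engaged at impact or had been disengaged only moments before. A rough sketch of that idea, purely illustrative (Tesla has not published its exact rule, and the 5-second window below is an assumption, not a known parameter):

```python
from typing import Optional

# Hypothetical attribution rule (NOT Tesla's published methodology): a crash
# counts as "Autopilot engaged" if AP was active at impact, or was disengaged
# (e.g. by manual braking/steering) within a short look-back window.

LOOKBACK_SECONDS = 5.0  # assumed window length, chosen only for illustration

def counts_as_autopilot(crash_time: float,
                        disengage_time: Optional[float],
                        engaged_at_impact: bool) -> bool:
    """Return True if the crash should be attributed to Autopilot miles."""
    if engaged_at_impact:
        return True
    if disengage_time is None:          # Autopilot never used on this drive
        return False
    return (crash_time - disengage_time) <= LOOKBACK_SECONDS

# Driver brakes (disengaging AP) one second before hitting the gore point:
print(counts_as_autopilot(crash_time=100.0, disengage_time=99.0,
                          engaged_at_impact=False))   # True -- still counted

# Driver engages AP only in the last second before an unavoidable collision:
print(counts_as_autopilot(crash_time=100.0, disengage_time=None,
                          engaged_at_impact=True))    # True -- also counted
```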


...what sort of information you are able to extract from that quarterly report, and how you draw that conclusion. I have not been able to draw any conclusions with that data. One good question would be to answer whether it is safer to use Autopilot or not.

one accident for every 3.27 million Autopilot miles
one accident for every 2.19 million Tesla non-Autopilot but with active safety features miles
one accident for every 1.41 million Tesla non-Autopilot and non-active safety features miles

VS.

one accident for every 498,000 NHTSA miles.

I have no doubt that there are problems and noise in these mileage numbers (do they include those who sleep behind the wheel, or the San Francisco pedestrian accident where people think the driver assumed Autopilot could stop for a red light...).

There is no question that the details are not there, but the overwhelmingly safer numbers are.
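Just to spell out the arithmetic behind those figures, here is the raw relative-rate comparison. It is a sketch only: it inverts miles-per-accident into accidents-per-million-miles and compares against the NHTSA baseline, and it does nothing to control for road type, weather, fleet age, or driver demographics, which is exactly the confounding problem discussed upthread.

```python
# Relative-rate arithmetic for the quarterly figures quoted above.
# This compares raw rates only; it does not adjust for where or when
# Autopilot is used, so it cannot by itself answer the safety question.

miles_per_accident = {
    "Autopilot engaged":              3.27e6,
    "Tesla, no AP, active safety on": 2.19e6,
    "Tesla, no AP, no active safety": 1.41e6,
    "NHTSA overall fleet":            0.498e6,
}

baseline = miles_per_accident["NHTSA overall fleet"]
for label, miles in miles_per_accident.items():
    accidents_per_million = 1e6 / miles   # invert miles-per-accident
    vs_nhtsa = miles / baseline           # times more miles per accident
    print(f"{label:32s} {accidents_per_million:.2f}/M mi  ({vs_nhtsa:.1f}x NHTSA)")
```

Taken at face value the gap is large, but as noted above the raw ratios don't separate out where, when, and by whom those miles were driven.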