
What are the chances Tesla cars will be self-driving in 3 years? Why do you think that way?

The results are consistent with my (and others') hypothesis.

Data showing that people maintain functional vigilance is not "consistent with" (even the weakest kind of support) your claim that people lose functional vigilance. Your claim is just a hypothesis unsupported by any real life data.

And the anti-Tesla/anti-autopilot jihadists like Missy Cummings at Duke and Michael DeKort pray to that hypothesis like a religion with zero real life data in support of it.

Most troubling is that your comments, like the jihadists Cummings and DeKort, utterly ignore the very real safety provided by AP -- the ability to detect abrupt braking by the car in front and even two cars ahead, or keeping the car safe when a driver is distracted or has a medical event. It is an empirical question whether there are fewer collisions and injuries with or without AP. And the accidents while under AP are never contrasted with all the accidents avoided while under AP -- that is the overall worthwhile safety issue which the jihadists never deal with.

without good attention management.

Lots of technology can have bad results if it is used "without good attention management." But the analysis shouldn't stop there. What is the overall assessment of actual safety results -- how many lives and cars saved compared to lost?

One place I differ with Musk is that he continually uses language comparing humans to driving autonomy. Until we get to L4/L5 that is the wrong comparison. The comparison should simply be:

1) humans without any assistance, and
2) humans with L2/L3 assistance.

Even when some will misuse L2 and L3 and not pay attention and have accidents, my hypothesis is that many more accidents will be avoided and humans with L2 will be safer than mere humans without L2.

This is simply an empirical question that will eventually be answered. Tesla, MIT, and insurance companies will provide the answers eventually. But no intelligent answers are likely to come from mouse-model academic jihadists like Missy Cummings and Kaczynski-esque nutjobs like DeKort.
 
Data showing that people maintain functional vigilance is not "consistent with" (even the weakest kind of support) your claim that people lose functional vigilance. Your claim is just a hypothesis unsupported by any real life data.
I think the claim is that as the system becomes more capable, functional vigilance will go down, as seen in automation studies. Obviously no one has deployed a highly capable level 2 driving system, so you are correct that there is no data, but this is a forum where people talk about the future of autonomous vehicles. You're acting like it's a completely crazy hypothesis, but to me it makes intuitive sense.
There is anecdotal data from Uber's initial foray into level 2 self-driving in California, where their drivers were running red lights because they failed to monitor the system, and of course the behavior of the driver in the Arizona crash.
It would be great if Tesla would release their autopilot data to a third party researcher like Lex Fridman to analyze. The way he talks about autopilot, I'm sure he'd love to get his hands on it.
 
Data showing that people maintain functional vigilance is not "consistent with" (even the weakest kind of support) your claim that people lose functional vigilance.

It is consistent, because the paper explains that they may well maintain functional vigilance only because they are using a limited-capability system where the drivers expect it to have problems. I mean, just read what the authors say. It's not that complicated. You are absolutely correct that there is no real data yet, because no system capable enough to exhibit the issue exists yet.

anti-Tesla/anti-autopilot jihadists like Missy Cummings at Duke and Michael DeKort

I don't know these people and I don't care about them. My perspective is from a driver who drives to work on a multi-lane surface street LOADED with Model 3s. I think I see about 20 every day on my way to work, so there are hundreds of them around every day. It's reasonable to assume that some of these drivers may eventually decide to get FSD. I want FSD to work really well, and I don't want these people with FSD driving around looking at their phones! They'll crash into me! I want them paying full attention to the road at all times as the car drives itself - because that's what FSD will need to be to be safe, at least for the next several years. It's bad enough already with Autopilot, with a bunch of people driving around with a casual single hand at the bottom of the wheel - but that's much less of a danger to me.

My fear is that this is potentially not what we will get. I don't know how having a hand on the wheel while staring at your phone while FSD does its magic is going to be safe at all, and I'm not aware that Tesla can do more than that to check driver awareness. That's what motivates my concern. Admittedly, right now, people are already staring at their phones while they drive, which is really bad, but the key difference is that they are still trying to drive (with mixed results). With FSD people might stop trying.

And the accidents while under AP are never contrasted with all the accidents avoided while under AP -- that is the overall worthwhile safety issue which the jihadists never deal with.

It's hard to measure the accidents avoided. As you say, we really just need the overall data, presented fairly and impartially, clearly indicating all relevant variables (freeway miles, surface street miles, time of day, etc., etc.), and compare comparable vehicles to Teslas. They may well be safer with AP. I have no idea. I haven't seen any data that allows me to compare.
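To make concrete what "presented fairly" means, here's a back-of-the-envelope sketch in Python. Every number in it is invented purely for illustration; the point is that pooled crash rates mislead when AP miles skew toward freeways, so you have to stratify by road type (and ideally time of day, weather, etc.) before comparing:

```python
# All numbers are invented, purely to illustrate the point.
# (crashes, millions of miles) broken out by fleet and road type.
data = {
    ("Tesla w/ AP", "freeway"): (4, 10.0),
    ("Tesla w/ AP", "surface"): (2, 1.0),
    ("comparable",  "freeway"): (5, 10.0),
    ("comparable",  "surface"): (20, 10.0),
}

# Stratified rates: compare like with like.
for (fleet, road), (crashes, miles) in data.items():
    print(f"{fleet:>12} / {road}: {crashes / miles:.2f} crashes per million miles")

# Pooled rates: these hide the road-type mix of each fleet's miles.
for fleet in ("Tesla w/ AP", "comparable"):
    crashes = sum(c for (f, _), (c, _) in data.items() if f == fleet)
    miles = sum(m for (f, _), (_, m) in data.items() if f == fleet)
    print(f"{fleet:>12} pooled: {crashes / miles:.2f} crashes per million miles")
```

With these made-up numbers the AP fleet looks far better pooled (6/11 ≈ 0.55 vs 25/20 = 1.25) mostly because its miles are freeway-heavy, not because it is much safer on any given road type.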

But the analysis shouldn't stop there. What is the overall assessment of actual safety results -- how many lives and cars saved compared to lost?

I agree that that is the metric. We just need really good data.

Even when some will misuse L2 and L3 and not pay attention and have accidents, my hypothesis is that many more accidents will be avoided and humans with L2 will be safer than mere humans without L2.

That is the question - what will the balance be? As I said earlier, I think the net safety level is a strong function of the success rate of a highly capable L2 ("L2+") system. At some point the systems will be good enough that it will definitely be safer with an L2 system and lives will be saved vs. no L2. I just don't know when that will be, and the path from the current level of safety (which may well be net beneficial, but I don't know) to that extremely capable system (which will be net beneficial) is potentially perilous.
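Here's a toy model of that "perilous path" idea, again with assumed numbers rather than data. The disputed hypothesis is baked in as an explicit assumption: the driver's chance of catching a system failure falls as the system's per-mile success rate rises:

```python
# Toy model, not data: net crashes per million miles under a supervised L2 system.
# Assumption (the disputed hypothesis): driver catch rate falls as trust grows.
BASELINE = 1.0  # normalized human-only crash rate

def net_crash_rate(success: float) -> float:
    catch = 1.0 - success ** 20  # assumed vigilance-decay curve (invented)
    # Crashes happen when the system fails AND the driver fails to catch it.
    return BASELINE * (1.0 - success) * (1.0 - catch)

for s in (0.90, 0.95, 0.99, 0.999):
    print(f"system success {s:.3f} -> net rate {net_crash_rate(s):.4f}")
```

With this particular (invented) decay curve the net rate actually rises between roughly 90% and 95% success before falling again. Whether that hump ever climbs above the human-only baseline depends entirely on the vigilance curve you assume, which is exactly the thing nobody has data on yet.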

This is simply an empirical question that will eventually be answered. Tesla, MIT, and insurance companies will provide the answers eventually.

I will believe the insurance companies for sure. I have 4 cars currently and my Tesla makes up half of my insurance premium. The Highlander has similar value and is 1/3 the cost to insure. It has primitive L2 features as well. So I suspect the reason for the high premium in this case is the high cost and long duration of Tesla repairs. These rates won't be a reflection of L2 safety for a while.
 
I will believe the insurance companies for sure. I have 4 cars currently and my Tesla makes up half of my insurance premium. The Highlander has similar value and is 1/3 the cost to insure.
Root insurance company offers a discount for Teslas with Autopilot.

Root Announces Car Insurance Discount for Tesla Autopilot Drivers

My insurance costs for two Tesla cars were the same as for non-Tesla cars, even though they are more expensive cars with costly aluminum repairs. Don't consider your overpaying for insurance a relevant data point.
 
Root insurance company offers a discount for Teslas with Autopilot.

Root Announces Car Insurance Discount for Tesla Autopilot Drivers

My insurance costs for two Tesla cars were the same as for non-Tesla cars, even though they are more expensive cars with costly aluminum repairs. Don't consider your overpaying for insurance a relevant data point.

I don’t consider it a relevant datapoint. Thanks for the tip though. I know I am overpaying. Might fix it this summer. Are you using Root?

Hopefully you understand my perspective on this now. You seem resistant even to the idea that this could work out unfavorably. Personally, I think you could end up being right, but it depends on exactly what choices Tesla and other manufacturers make to ensure ongoing appropriate use through the (alleged) transitional “danger zone”.

I hope we get good data too - concerned about that aspect as well - though I agree that overall insurance rates will eventually reflect any significant safety advantage, and might be the best indicator. There are a lot of variables determining rates though so you’d have to have access to the insurance companies’ actuarial data or see a specific discount for having the FSD feature (similar to what you linked to).
 
Most troubling is that your comments, like the jihadists Cummings and DeKort, utterly ignore the very real safety provided by AP -- the ability to detect abrupt braking by the car in front and even two cars ahead, or keeping the car safe when a driver is distracted or has a medical event. It is an empirical question whether there are fewer collisions and injuries with or without AP. And the accidents while under AP are never contrasted with all the accidents avoided while under AP -- that is the overall worthwhile safety issue which the jihadists never deal with.


Perhaps the improvement in accident rate can simply be attributed to automatic emergency braking, which is offered by every major automaker. It's an empirical fact that there are fewer collisions with this tech, and I think Tesla is conflating the two, without isolating AP's effect from simple AEB.
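A tiny worked example of why the attribution matters (all three rates below are invented, not real figures): if you only ever compare "AP on" against a no-assistance baseline, AEB's contribution gets silently credited to Autopilot.

```python
# Hypothetical crash rates per million miles, invented for illustration.
no_assist = 1.00
aeb_only = 0.60      # assumed: AEB alone prevents 40% of crashes
aeb_plus_ap = 0.55   # assumed: observed rate with both AEB and Autopilot

naive_ap_credit = no_assist - aeb_plus_ap     # conflates AEB with AP
marginal_ap_credit = aeb_only - aeb_plus_ap   # AP's isolated effect
print(f"naive AP credit: {naive_ap_credit:.2f}, marginal: {marginal_ap_credit:.2f}")
```

The right comparison is AEB-only versus AEB-plus-AP, which isolates Autopilot's marginal effect instead of handing it AEB's.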
 
This presentation proves why 80% of the people on here are dead wrong. Level 4/5 FSD is going to be here with ease in 3 years.

This poll should be over

It was another fake demo, just like the last one. No way to tell how many takes they needed to get it to work, no real info on what the car could see or how much of the environment it was actually accounting for etc.

Apart from some lane changing on highways they didn't even demonstrate anything new that they weren't claiming back in 2016. So according to Tesla they have made barely any progress in 3 years.
 
@banned-66611

Two things are different from 2016:

1) There is more reason to believe that what we see now is actually something Tesla made itself and can run in current cars. In hindsight that was not true in 2016, but today I think there is less reason to believe it is a lie.

2) Tesla doubled down on the Level 5 no geofence robotaxi story with such precision and timelines that it is all or nothing now. Either they have legitimate faith in achieving Level 5 with this suite or it is Theranos. Let’s hope the faith is legitimate.

I will certainly welcome Level 5 for my AP2 car.
 
@banned-66611

Two things are different from 2016:

1) There is more reason to believe that what we see now is actually something Tesla made itself and can run in current cars. In hindsight that was not true in 2016, but today I think there is less reason to believe it is a lie.

2) Tesla doubled down on the Level 5 no geofence robotaxi story with such precision and timelines that it is all or nothing now. Either they have legitimate faith in achieving Level 5 with this suite or it is Theranos. Let’s hope the faith is legitimate.

I will certainly welcome Level 5 for my AP2 car.

If the demo was real they would have let journalists film their rides in it, and today we would be flooded with stories about it.

The confidence is just Musk being Musk. It's a common tactic - set a deadline to "focus minds", as if the only reason it's not happening is that the engineers are goofing off too much. Aggressive deadlines are his style at SpaceX too.
 
The confidence is just Musk being Musk. It's a common tactic - set a deadline to "focus minds", as if the only reason it's not happening is that the engineers are goofing off too much. Aggressive deadlines are his style at SpaceX too.

If it is all hot air, then this one would have gone over the edge even for Musk in my view. So let’s hope not.
 
If the demo was real they would have let journalists film their rides in it, and today we would be flooded with stories about it.

Actually, reporters were not under any NDA and we are getting flooded with stories, like this one:

[attached screenshot of a news story, uploaded 2019-04-23]
 
One big issue is that whenever it rains, some of the traffic lights go out, while others just flash. It's the ones that are totally out that are the issue, because human drivers don't see them (or pretend they don't). There need to be some "free day at the zoo" algorithms.

A disabled traffic light is a surprise all-way stop, so the approach could be as follows (see the sketch in code after this list):
Enable hazard lights.
Stop.
Wait your turn, or for gap in traffic.
Proceed, watching very carefully for cross traffic.
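As a purely illustrative sketch of that procedure (the `Signal` enum and `controller` interface here are hypothetical, not anyone's actual planner):

```python
from enum import Enum, auto

class Signal(Enum):
    NORMAL = auto()
    FLASHING_RED = auto()  # flashing red: treat like a stop sign
    DARK = auto()          # unlit: treat as a surprise all-way stop

def handle_intersection(signal: Signal, controller) -> None:
    if signal is Signal.NORMAL:
        controller.obey_signal()
        return
    # Dark or flashing-red signal: the all-way-stop procedure above.
    controller.enable_hazard_lights()
    controller.stop_at_limit_line()
    # Wait for right-of-way or a safe gap; assume cross traffic may not stop.
    while not (controller.is_our_turn() or controller.gap_in_cross_traffic()):
        controller.wait()
    controller.proceed_cautiously(watch_cross_traffic=True)
```

The harder problem is upstream of this logic: reliably perceiving that the signal is dark at all, which is exactly where human drivers fail too.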
 
Just to update on my earlier post about The Times newspaper article on autonomous driving testing in the UK at the end of the year: it seems that things have started earlier than expected.

Excerpt: Monday 18 Mar 2019 4:59 pm

Britain’s first full-sized bus to be fitted with driverless technology was tested today by one of the UK’s biggest transport firms. Stagecoach is trialling the tech at its depot in Manchester, with video showing the bus moving of its own accord. Driver Dennis Finnegan sat back and relaxed as the bus wheel moved without his assistance.
 
Got any reputable sources? For example, the article on Engadget doesn't mention it at all. All the major news outlets are strangely silent, and YouTube is oddly devoid of independently filmed fully autonomous journeys.

Does Morgan Stanley count as a reputable source? This is what they had to say about their test drive:

[attached image: Morgan Stanley's note on their Autopilot test drive]
 
"The two main results of this work are that (1) drivers elect to use Autopilot for a significant percent of their driven miles and (2) drivers do not appear to over-trust the system to a degree that results in significant functional vigilance degradation in their supervisory role of system operation. In short, in our data, drivers use Autopilot frequently and remain functionally vigilant in their use."

Of course, that's mostly because every time we start to get close to trusting it, Tesla releases an upgrade with major regressions that actively tries to kill us (e.g. 2019.8.x). :D


Data showing that people maintain functional vigilance is not "consistent with" (even the weakest kind of support) your claim that people lose functional vigilance. Your claim is just a hypothesis unsupported by any real life data.

Tell that to Waymo and Uber, both of which found that this phenomenon did, in fact, occur (fatally, in Uber's case), requiring extensive training and careful selection of safety drivers.
 
Does Morgan Stanley count as a reputable source? This is what they had to say about their test drive:

[attached image: Morgan Stanley's note on their Autopilot test drive]

That just confirms what I said. Pre-planned route, calm traffic, and even then the driver had to take over. It's obvious why they wouldn't let anyone else film.

How many attempts did it take them to make the video?