
What are the chances Tesla cars will be self-driving in 3 years? Why do you think that way?

While I believe FSD *MIGHT* be possible within 3 years, I think the elephant in the room that most posters are missing is the regulatory issue. Before FSD will be allowed on streets and highways there will need to be regulatory approval by not only the federal government, but every state, province and other regulatory agency with jurisdiction over public transportation. That could take a decade or more.
 
Umm...did you read that paper? It definitely does not suggest the opposite, nor would Fridman claim that. The paper is very very clear about the limited scope and how the results are unlikely to be able to be extrapolated to more capable systems. In a very specific situation, the 21 drivers in the study seemed to stay engaged and maintain good awareness when using AP. There are a number of possible reasons for this discussed in the paper. I recommend reading it through.


“...the Autopilot dataset includes 323,384 total miles and 112,427 miles under Autopilot control. Of the 21 vehicles in the dataset, 16 are HW1 vehicles and 5 are HW2 vehicles.
The Autopilot dataset contains a total of 26,638 epochs of Autopilot utilization...

…these findings (1) cannot be directly used to infer safety as a much larger dataset would be required for crash-based statistical analysis of risk, (2) may not be generalizable to a population of drivers nor Autopilot versions outside our dataset, (3) do not include challenging scenarios that did not lead to Autopilot disengagement, (4) are based on human-annotation of critical signals, and (5) do not imply that driver attention management systems are not potentially highly beneficial additions to the functional vigilance framework for the purpose of encouraging the driver to remain appropriately attentive to the road…

…Research in the scientific literature has shown that highly reliable automation systems can lead to a state of “automation complacency” in which the human operator becomes satisfied that the automation is competent and is controlling the vehicle satisfactorily. And under such a circumstance, the human operator’s belief about system competence may lead them to become complacent about their own supervisory responsibilities and may, in fact, lead them to believe that their supervision of the system or environment is not necessary….The corollary to increased complacency with highly reliable automation systems is that decreases in automation reliability should reduce automation complacency, that is, increase the detection rate of automation failures….

…Wickens & Dixon hypothesized that when the reliability level of an automated system falls below some limit (which they suggested lies at approximately 70% with a standard error of 14%) most human operators would no longer be inclined to rely on it. However, they reported that some humans do continue to rely on such automated systems. Further, May [23] also found that participants continued to show complacency effects even at low automation reliability. This type of research has led to the recognition that additional factors like first failure, the temporal sequence of failures, and the time between failures may all be important in addition to the basic rate of failure….

….We filtered out a set of epochs that were difficult to annotate accurately. This set consisted of disengagements … [when] the sun was below the horizon computed based on the location of the vehicles and the current date. [So all miles are daytime miles]

Normalizing to the number of Autopilot miles driven during the day in our dataset, it is possible to determine the rate of tricky disengagements. This rate is, on average, one tricky disengagement every 9.2 miles of Autopilot driving. Recall that, in the research literature (see§II-A), rates of automation anomalies that are studied in the lab or simulator are often artificially increased in order to obtain more data faster [19] such as “1 anomaly every 3.5 minutes” or “1 anomaly every 30 minutes.” This contrasts with rates of “real systems in the world” where anomalies and failures can occur at much lower rates (once every 2 weeks, or even much more rare than that). The rate of disengagement observed thus far in our study suggests that the current Autopilot system is still in an early state, where it still has imperfections and this level of reliability plays a role in determining trust and human operator levels of functional vigilance...

...We hypothesize two explanations for the results as detailed below: (1) exploration and (2) imperfection. The latter may very well be the critical contributor to the observed behavior. Drivers in our dataset were addressing tricky situations at the rate of 1 every 9.2 miles. This rate led to a level of functional vigilance in which drivers were anticipating when and where a tricky situation would arise or a disengagement was necessary 90.6% of the time…..

…In other words, perfect may be the enemy of good when the human factor is considered. A successful AI-assisted system may not be one that is 99.99...% perfect but one that is far from perfect and effectively communicates its imperfections….

...It is also recognized that we are talking about behavior observed in this substantive but still limited naturalistic sample. This does not ignore the likelihood that there are some individuals in the population as a whole who may over-trust a technology or otherwise become complacent about monitoring system behavior no matter the functional design characteristics of the system. The minority of drivers who use the system incorrectly may be large enough to significantly offset the functional vigilance characteristics of the majority of the drivers when considered statistically at the fleet level.”
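
To make the arithmetic in those excerpts concrete, here is a rough sketch of the normalization the authors describe. The 9.2-miles and 90.6% figures come from the quotes above; the mileage and disengagement counts below are round, purely illustrative placeholders, since the quoted text does not give them.

# Illustrative placeholders only - NOT the study's actual counts.
daytime_ap_miles = 100_000        # Autopilot miles driven during the day
tricky_disengagements = 10_000    # annotated "tricky" disengagement epochs
anticipated_by_driver = 9_000     # epochs where the driver anticipated the situation

miles_per_tricky = daytime_ap_miles / tricky_disengagements        # paper reports ~9.2
anticipation_rate = anticipated_by_driver / tricky_disengagements  # paper reports 90.6%

print(f"~1 tricky disengagement every {miles_per_tricky:.1f} Autopilot miles")
print(f"driver anticipation rate: {anticipation_rate:.0%}")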

That’s really interesting—the idea that drivers often anticipate tricky situations. Maybe it would be useful if the autopilot’s goal was to communicate more often rather than try to be invisible. When it reaches some level of computational fuzz or insecurity, give the wheel a little shake or sound a non-threatening chime just to tell the driver to be more vigilant. A “tricky situation coming up, little help please?” message would be more appreciated than a jump scare because the car was about to sideswipe a semi, right?
 
Maybe. It might make things worse because it might condition drivers to only pay attention when the system knows it can’t figure things out. The real problem is when the system does not know it does not understand. It could not generate a warning in that case and those cases might be the most dangerous. If the driver isn’t paying attention at that moment, it could be catastrophic.

So driver attention management systems to ensure 100% driver engagement would still be needed. You might be able to layer your suggestion on top of that and in that case it might improve safety.
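
For what it's worth, here is a minimal sketch of the layering I mean, with made-up names and a made-up threshold (purely illustrative, not anything Tesla has described):

CONFIDENCE_ALERT_THRESHOLD = 0.7   # hypothetical tuning value

def update(system_confidence: float, driver_attentive: bool) -> list[str]:
    """Return the alerts to issue on this control cycle."""
    alerts = []
    # Layer 1: always-on attention management, independent of system confidence.
    if not driver_attentive:
        alerts.append("escalating attention warning (wheel nag / chime)")
    # Layer 2: proactive "tricky situation ahead" cue when the system's own
    # confidence drops. This only covers the cases the system knows are hard.
    if system_confidence < CONFIDENCE_ALERT_THRESHOLD:
        alerts.append("gentle wheel shake + non-threatening chime")
    return alerts

# Example: a confident system with an inattentive driver still triggers a warning,
# because a low-confidence cue alone would miss cases the system misjudges as easy.
print(update(system_confidence=0.95, driver_attentive=False))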
 
I believe the issue is more a legislative one than one of Tesla's capability. As one can see in this video published on YouTube back on Nov 18, 2016, the car is fully capable of full self-driving (link: ). Given Moore's Law, by which computer processing power roughly doubles about every 18-24 months, processing speed should have doubled (at 24 months) or almost quadrupled (at 18 months) since this video. Also, Tesla has announced it has created a processing chip that delivers roughly a 10-fold increase in calculations over the chip in use in today's vehicles. The bottleneck, IMHO, will be legislation being slow-rolled to allow fully self-driving, autonomous vehicles.
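
A quick back-of-the-envelope check of that scaling claim (the elapsed time and doubling periods are my rough assumptions, not measured figures):

months_since_video = 30   # rough assumption: Nov 2016 to roughly mid-2019
for doubling_months in (18, 24):
    factor = 2 ** (months_since_video / doubling_months)   # Moore's-Law-style growth
    print(f"doubling every {doubling_months} months -> ~{factor:.1f}x the compute")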
 
Umm...that looked pretty bad - stopping miles back from a stopped car, and slowing down for pedestrians on a sidewalk. The non-real-time video makes it hard to tell how it is actually doing, too. And it was probably a fixed course which had been driven many times before. You really can't read too much into this - even if everything I mentioned is fixed (and it probably is), it's all about the edge cases as far as actual FSD (level 3 and up) goes. Of course something like this that has been refined would be useful as a level 2 system, and perhaps that is what Tesla is close to. But level 3 is way harder, as it does not require driver attention at all times - so the system has to be basically perfect and also always understand when it doesn't understand, so it can warn the driver to take over. That handoff has its own significant problems due to the driver's lack of context - problems which may be insurmountable (Waymo decided they were).

My understanding is that legislation for a high functioning level 2 system is not an issue at all, and no one is all that close to that with a system that drives like a human - yet. Tesla may be closer than I think, we'll see! And if a full level 2 system is not possible right now, then there really isn't any reason to worry about legislation issues for level 3-5 yet.
 
0% if you're referring to complete autonomy in all situations.

I work in the broader AI field, admittedly in the NLP/NLU/NLC space rather than image processing, but autonomy (which is what we're really talking about) is an extraordinarily difficult problem to solve, where the inputs are not finite. There are literally an infinite number of "ifs" with a finite number of "thens" to consider, and so this is both a computational problem as well as a learning problem and a recall (storage/latency) problem.

I certainly think that something close to self-driving will be available for "sunny day" scenarios, like driving around relatively calm and quiet grid-based cities with good weather and predictable traffic. But negotiating poorly signed road-works on smashed up highways around New York or Detroit, or dealing with heavy rain, snow, fog, erratic drivers, cyclists, traffic cops telling drivers to do the opposite of what the signs say, dealing with emergency vehicles, navigating parking lots, doing all of that in the dark, doing any of that when there's no wireless signal and not enough information in the onboard database, etc, all mean that "full" autonomy is probably 3 or 4 generations away (10 - 15 years IMO).

In technology, people talk about the 80/20 rule, where 20% of the effort gets you 80% of the result, and the remaining 80% of the effort gets you the final 20% of the result. In the development of autonomous driving, those "edge cases", which might be only 1-2% of driving situations, will consume more than 95% of the effort.

I would so love to be wrong BTW.


Wooloo:
I agree. I work in the area of software robots and Artificial Intelligence, and while all the bobbing heads say that Artificial Intelligence is already out there, that is BS, as few people truly understand the difference between software robotics, machine learning and Artificial Intelligence, with the latter being incredibly complex. People who aren't trying to sell a product all agree that true Artificial Intelligence and deep learning are 5 to 10 years out. Anyone who says it's already here or right around the corner doesn't have a clue what they are talking about. I put people who say true self-driving, fully autonomous cars are almost here (i.e. no one in the driver's seat) in the same category.

There is a difference between what's realistic to expect versus what you'd like to expect.
 
@StockDoc It is widely believed that the video has little to do with Tesla’s actual capabilities at the time. It has been said to be pretty much a fixed demo with the visuals added after the fact — and, as we know, with the speed and cuts altered — and there has been speculation it used Nvidia’s demo code.

It would make sense given how far Tesla’s actual AP2, as released to customers in late 2016, was from what the video showed...
 


Yes, it's true it was definitely not ready for prime time as of that video. It definitely had its issues: stopping late at some stop signs, early at other times, etc. But I do think that when this video was released, full self-driving capability was much further along than anyone outside of Tesla had anticipated. I think prior to this video, it was mostly just speculation that "some day" FSD might be a reality.
 
Where is this FUD coming from and what are the motivations behind it?

The shorts are clearly getting out ahead of things and trying to discourage people from buying Tesla vehicles with the imminent-release FSD-computer system - clearly the strategy is to flood the zone to make people think they won't be able to legally use this awesome new feature they'll definitely be able to buy in a couple weeks.

Even when Tesla is wildly successful and about to bring the wonders of true FSD to the masses, the shorts are still the naysayers, and the main obstacle between us and an autonomous utopia.

;)
 
I've been pressing the limits of my existing FSD, and I just keep seeing more and more situations that I really don't think an autonomous car could fully deal with on its own.

However, I do foresee cities and towns designating "robot friendly" zones in which the streets, signs, curbs, etc are maintained specifically with the robot driver in mind. This is not going to happen on a big scale in three years, but I could see robo-buses making routes in some cities.

My predictions:
1) When FSD cars are fully operational, most people will stop buying personal vehicles.
2) In the near future, we'll see laws requiring a car to display a visible signal when the robot is in control.
3) Eventually, human-operated vehicles will be outlawed in most places as being too dangerous and unpredictable to be on the road.
 
I believe the issue is more a legislative one than one of Tesla's capability. As one can see in this video published on YouTube back on Nov 18, 2016, the car is fully capable of full self-driving (link: )...

That video is widely believed to be fake: a carefully staged run on a well-mapped route.

For example, there is no sign that they have managed to build up a 3D model of the area around the car, something absolutely necessary for FSD.
 
Where is any research that contradicts this:

"The two main results of this work are that (1) drivers elect to use Autopilot for a significant percent of their driven miles and (2) drivers do not appear to over-trust the system to a degree that results in significant functional vigilance degradation in their supervisory role of system operation. In short, in our data, drivers use Autopilot frequently and remain functionally vigilant in their use."

Quote any contrary research with links. Still waiting.

Or anything to back up your contrary claim:

"The better a level 2 system gets, the more dangerous it tends to be (due to driver inattention getting worse and worse)."

Anything?

Happy to be of assistance! We’ll see how things go...the main thing is to approach this in a manner which minimizes risk while still allowing adequate progress. Given that there are quite a few unknowns, that is the prudent thing to do.
 

You must have picked a lot of cherries in your life! I particularly like how you completely ignored all the other quotes from the paper, apparently. The paper has what is called a “nuanced view” of the results...

My statement is not a contrary claim (perhaps I should have added the caveat “the better a level 2 system gets without robust driver attention management...”). It is, however, really a hypothesis at this point, since no system capable enough to exhibit the possible effect has yet been deployed to test the hypothesis.

There certainly has been research on the topic (several references above), though perhaps in your eyes it only counts as a mouse model, because it is done with a simulator. I can’t access this and haven’t poked around for a PDF hiding somewhere - but the abstract gives a good idea. I don’t know anything about the authors’ motivations either. It’s just a reference:

SAGE Journals: Your gateway to world-class journal research

I’m just saying that it does not seem like an unreasonable hypothesis to me (it’s not my hypothesis). It seems fairly consistent with what I would expect to happen, and since lives are at risk, it seems to me something that we should be keenly focused on as the technology becomes more capable. I would hate to see careless implementation result in a slow roll (due to punitive but deserved regulatory scrutiny) of automated driving and high capability driver-assistance systems.
 
So in other words, no research, no data to back up your claim. Just a hypothesis. Got it.

On the other hand the real data that is available, with real drivers in a real car shows that they do maintain vigilance.

And no other data contradicts that nor supports your claim.

Thanks.
 

Sigh...
If you are content with HW1 & HW2: yes, 21 specially selected drivers with cameras installed in the car, from a small area around MIT, driving roughly 112k miles on Autopilot in total, do maintain high levels of functional vigilance when using AP.

That is the real data we have. One could almost call that a mouse model as it pertains to HW3!

I just don’t get you. I think I have been very clear about why I think what I think, and provided plenty of quotes from the paper. I’ve tried to add context to my original statement. Yet you won’t even discuss it, and you won’t discuss why you don’t find the hypothesis to be worth testing. It’s kind of weird. o_O
 
it only counts as a mouse model, because it is done with a simulator.

Exactly. How many have died, or even crashed their expensive car, because they weren't paying attention to a simulator in a lab? An appreciation of life generally tends to focus the mind.

provided plenty of quotes from the paper.

I'm just interested in the data, or the specific conclusions summarizing the data. That is what I quoted.

If we wanted to compare intuition-driven rather than data-driven hypotheses, mine is that humans plus just about any version of L2 will be safer than humans alone with no driver assistance, and that the few noncompliant, distracted drivers will be outweighed by those who are saved by L2. That intuition is informed by tens of thousands of miles using L2 -- but it isn't informed by any experience with mice in simulators.
 
or the specific conclusions summarizing the data. That is what I quoted.

I’m happy to “concede” the conclusions. Because I believe them and they make sense. As explained by the authors. The results are consistent with my (and others) hypothesis.

That intuition is informed by tens of thousands of miles using L2 -- but it isn't informed by any experience with mice in simulators.

Again, the current experience with L2 is great, but would your car be intact if you had not been paying attention?

In regard to the “few noncompliant,” I think the root of our disagreement is our level of trust in our fellow humans. I trust myself, but not so much others. I tend to think the number of people who will be irresponsible may actually be quite high. So with a sufficiently capable system, enough people using it, and a high enough hazard density, I think we could have potential issues without good attention management. And the types of accidents may be a bit more serious than the standard rear-end collision, due to the high-complexity locations where the attention deficit could occur and is most likely to have consequences.