What are the chances Tesla cars will be self-driving in 3 years? Why do you think that way?

Poll: What are the chances Tesla cars will be self-driving in 3 years? (215 voters)
Well, you're at it again. All those wild guesses without the slightest understanding of what is going on there. If even you can think it's better to do this or add that, what are the chances the Tesla team hasn't already considered it?

I'm sure Tesla has considered many things. My question is whether they can deliver on what they have considered and, more importantly, on what their leadership has promised. It is perfectly okay that we disagree, since this is a future outcome nobody knows yet. :)
 
Can you give us more details about what you see happening? How well will it handle snow? How often will it false brake? How often will it not brake when it should? Can I sleep in the car while it drives me cross country? Will it handle driving through a recent crash site? Will it handle a policeman giving hand signals? Will it handle San Francisco's double-parked cars that require crossing double yellow lines?

+ Next week's winning lottery numbers. ;)

Thanks
 
Elon said in the podcast "we have just rolled out the AI chip three years in the making". They had the right answer at least three years ago.

Three years ago they had a strong sense of the right development direction to pursue, but no firm evidence to back it up. (Didn't stop them posting the curated AP Paint It Black video and overstating the capabilities of AP with what turned out to be forward-looking statements)

Musk being Musk has dropped over-confident tweets and soundbites during the last three years, and this month some feel that he has finally hit "peak-Elon" in the run-up to April 22.

It's interesting that he posts pretty much the same level of predictions for SpaceX, but of course none of us care because we're not in the market looking for our next orbital booster for our SO.

Maybe there's a forum of satellite manufacturers somewhere online, where right now @SoyuzSuxz is bitching about Musk over promising ;)
 
Three years ago they had a strong sense of the right development direction to pursue, but no firm evidence to back it up.

Tesla hired Jim Keller and Peter Bannon three years ago. Those are among the very top chip designers in the industry. That was exactly what they planned to do, not just something Elon is saying now.
 
What are the chances Tesla cars will be self-driving in 3 years? Why do you think that way?

0% if you're referring to complete autonomy in all situations.

I work in the broader AI field, admittedly in the NLP/NLU/NLC space rather than image processing, but autonomy (which is what we're really talking about) is an extraordinarily difficult problem to solve, where the inputs are not finite. There are literally an infinite number of "ifs" with a finite number of "thens" to consider, and so this is both a computational problem as well as a learning problem and a recall (storage/latency) problem.

I certainly think that something close to self-driving will be available for "sunny day" scenarios, like driving around relatively calm and quiet grid-based cities with good weather and predictable traffic. But negotiating poorly signed road-works on smashed up highways around New York or Detroit, or dealing with heavy rain, snow, fog, erratic drivers, cyclists, traffic cops telling drivers to do the opposite of what the signs say, dealing with emergency vehicles, navigating parking lots, doing all of that in the dark, doing any of that when there's no wireless signal and not enough information in the onboard database, etc, all mean that "full" autonomy is probably 3 or 4 generations away (10 - 15 years IMO).

In technology, people talk about the 80/20 rule, where 20% of the effort gets you 80% of the result, and the remaining 80% of the effort gets you the final 20% of the result. In the development of autonomous driving, those "edge cases", which might only be 1-2% of driving situations, will consume more than 95% of the effort.
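
To put rough numbers on that 80/20 intuition, here's a toy sketch (every figure below is my own made-up assumption, not Tesla data or anything from a study): if each additional "nine" of scenario coverage takes several times the effort of the one before it, the last 1% of scenarios ends up dominating the total effort.

```python
# Toy model of the long tail of edge cases. The 5x effort multiplier per
# additional "nine" of coverage is a made-up assumption for illustration only.
GROWTH = 5.0

# Coverage milestones: 90%, 99%, 99.9%, 99.99% of driving scenarios handled.
milestones = [0.90, 0.99, 0.999, 0.9999]

# Effort to reach each successive milestone grows geometrically: 1, 5, 25, 125.
step_effort = [GROWTH ** i for i in range(len(milestones))]
total = sum(step_effort)

for coverage, effort in zip(milestones, step_effort):
    print(f"reaching {coverage:.2%} coverage: {effort / total:.0%} of total effort")

# Effort spent on the last ~1% of scenarios (going from 99% to 99.99%):
tail_share = sum(step_effort[2:]) / total
print(f"the final ~1% of scenarios consumes about {tail_share:.0%} of the effort")
```

With those assumed numbers the final ~1% of scenarios eats roughly 96% of the effort, which is the shape of the argument above, not a measurement.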

I would so love to be wrong BTW.
 
0% if you're referring to complete autonomy in all situations.

Finally, a well-thought-out argument based on knowledge from someone versed in the art.
 
0% if you're referring to complete autonomy in all situations.

I think you are right but that is why there are different SAE levels of autonomy. Full autonomy in all conditions and all weather as you describe is L5, the highest. That is a long ways off. But I do think that we will get to L3 autonomy and L4 autonomy in limited cases much sooner than 10-15 years.
 
I think you are right but that is why there are different SAE levels of autonomy. Full autonomy in all conditions and all weather as you describe is L5, the highest. That is a long ways off. But I do think that we will get to L3 autonomy and L4 autonomy in limited cases much sooner than 10-15 years.

Agreed. I think someone earlier in this thread or another thread said the biggest leap is from level 2 to level 3, but it's not. By far the biggest leap is from level 4 to level 5, by several orders of magnitude. Each step actually gets harder.

If you use a space travel analogy (where level 0 is maybe balloon flight) then level 1 is putting a satellite in orbit, level 2 would be a person or object on the moon, 3 would be on Mars, 4 would be the outer planets or gas giants, and 5 would be another solar system. I think that scaling explains it well - although we're likely to have level 4 autonomous cars before we have people orbiting Europa or Ganymede.
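
Since several SAE levels get thrown around in this thread, here's a quick paraphrase of the J3016 definitions (my own summary sketched in Python, not the official SAE wording):

```python
# Quick reference for the SAE J3016 levels discussed above (paraphrased,
# not the official wording).
SAE_LEVELS = {
    0: "No automation: the human does all the driving.",
    1: "Driver assistance: steering OR speed is assisted (e.g. adaptive cruise).",
    2: "Partial automation: steering AND speed are automated, but the driver "
       "must supervise at all times (Autopilot today).",
    3: "Conditional automation: the system drives and monitors, but the driver "
       "must take over when requested.",
    4: "High automation: no driver needed, but only within a limited domain "
       "(e.g. certain roads, weather, or geofenced areas).",
    5: "Full automation: drives anywhere a human could, in any conditions.",
}

for level, description in SAE_LEVELS.items():
    print(f"L{level}: {description}")
```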
 
I said 0%.

The better a level 2 system gets, the more dangerous it tends to be (due to driver inattention getting worse and worse). At least, that seems to be what research on automation systems suggests.

So they'll definitely need to leave the monitoring on for quite a while, which means no self-driving. I actually have no idea how they will progress from level 2; I guess it depends on how much the accident rate increases as the system becomes more capable (the better the system, the more likely there will be accidents, though it depends on how rapidly the capability increases and what percentage of situations it can handle). The car might well be able to successfully complete trips and "drive itself" 99% of the time in good conditions within the 3-year window, but it won't be self-driving, since the driver will have to be 100% engaged to avoid serious accidents in the <1% failure situations.
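
A rough back-of-envelope version of that argument (every number below is hypothetical, not from any study or fleet data): the rate of failures the driver doesn't catch depends on both how often the system fails and how attentive the driver stays, so a better system paired with a more complacent driver can actually come out worse.

```python
# Back-of-envelope model: uncaught failures = system failures the driver
# misses because they were not paying attention. All numbers are hypothetical.

def uncaught_per_1000_miles(failures_per_mile: float, attention: float) -> float:
    """Failures per 1,000 miles that the driver fails to catch."""
    return 1000.0 * failures_per_mile * (1.0 - attention)

# (system failures per mile, fraction of failures the driver would catch)
scenarios = {
    "mediocre system, alert driver":    (0.05,  0.99),
    "good system, complacent driver":   (0.01,  0.90),
    "great system, inattentive driver": (0.002, 0.40),
}

for name, (fail_rate, attention) in scenarios.items():
    print(f"{name}: ~{uncaught_per_1000_miles(fail_rate, attention):.1f} "
          f"uncaught failures per 1,000 miles")
```

In this made-up example the "great system, inattentive driver" case has the worst uncaught-failure rate despite having the most capable system, which is the point about attention.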
 
Please cite any, and preferably ALL of this research that you are referring to. And by research I don't include arm-chair academic theorizing using the equivalent of mouse models, but real research collecting data on how people actually drive with real L2 driver assistance.

The better a level 2 system gets, the more dangerous it tends to be (due to driver inattention getting worse and worse). At least, that seems to be what research on automation systems suggests.
 
Please cite any, and preferably ALL of this research that you are referring to. And by research I don't include arm-chair academic theorizing using the equivalent of mouse models, but real research collecting data on how people actually drive with real L2 driver assistance.
The research is for automation in general. No real L2 driver assistance is very good yet, so it may not be a problem now; however, it could be a problem in the future.
There are a bunch of references in Lex Fridman's Autopilot paper.
https://hcai.mit.edu/tesla-autopilot-human-side.pdf
 
And by research I don't include arm-chair academic theorizing using the equivalent of mouse models, but real research collecting data on how people actually drive with real L2 driver assistance.

As Daniel says, the jury is still out. Paper is linked above. I strongly recommend reading through this Lex Fridman paper retweeted by Elon Musk (actually he retweeted Teslarati I think but never mind...). It provides some insights into what people in the field think about the issues with driver assistance systems. I think it's accurate to say Fridman is quite skeptical of the safety of level 2 systems that exist without driver attention management systems. (Tesla's system currently has a driver attention management system in the form of the wheel torque sensor.)

That's not to say he is skeptical of automation in general - I think most people think that eventually it will happen - but it's a question of timing and the safest trajectory to get to truly autonomous systems.
 
Please cite any, and preferably ALL of this research that you are referring to

As previously posted, this research does NOT entirely focus on L2 driving systems and applies to other types of automation as well; however, there are some studies listed below done with automated driving systems. See below, copied from the paper for convenience:

[17] M. R. Endsley and E. O. Kiris, "The out-of-the-loop performance problem and level of control in automation," Human Factors, vol. 37, no. 2, pp. 381–394, 1995.
[18] R. Molloy and R. Parasuraman, "Monitoring an automated system for a single failure: Vigilance and task complexity effects," Human Factors, vol. 38, no. 2, pp. 311–322, 1996.
[19] R. Parasuraman and D. H. Manzey, "Complacency and bias in human use of automation: An attentional integration," Human Factors, vol. 52, no. 3, pp. 381–410, 2010.
[20] R. Parasuraman, R. Molloy, and I. L. Singh, "Performance consequences of automation-induced 'complacency'," The International Journal of Aviation Psychology, vol. 3, no. 1, pp. 1–23, 1993.
[21] N. Bagheri, G. A. Jamieson et al., "Considering subjective trust and monitoring behavior in assessing automation-induced 'complacency'," Human Performance, Situation Awareness, and Automation: Current Research and Trends, pp. 54–59, 2004.
[22] C. D. Wickens and S. R. Dixon, "The benefits of imperfect diagnostic automation: A synthesis of the literature," Theoretical Issues in Ergonomics Science, vol. 8, no. 3, pp. 201–212, 2007.
[23] P. A. May, "Effects of automation reliability and failure rate on monitoring performance in a multi-task environment," Ph.D. dissertation, Catholic University of America, 1993.
[24] M. W. Wiggins, "Vigilance decrement during a simulated general aviation flight," Applied Cognitive Psychology, vol. 25, no. 2, pp. 229–235, 2011.
[25] J. A. Caldwell, "Fatigue in aviation," Travel Medicine and Infectious Disease, vol. 3, no. 2, pp. 85–96, 2005.
[26] R. L. Helmreich, "On error management: lessons from aviation," BMJ, vol. 320, no. 7237, pp. 781–785, 2000.
[27] B. Reimer, A. Pettinato, L. Fridman, J. Lee, B. Mehler, B. Seppelt, J. Park, and K. Iagnemma, "Behavioral impact of drivers' roles in automated driving," in Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, 2016, pp. 217–224.
[28] Z. Lu, R. Happee, C. D. Cabrall, M. Kyriakidis, and J. C. de Winter, "Human factors of transitions in automated driving: A general framework and literature survey," Transportation Research Part F: Traffic Psychology and Behaviour, vol. 43, pp. 183–198, 2016.
[29] E. T. Greenlee, P. R. DeLucia, and D. C. Newton, "Driver vigilance in automated vehicles: hazard detection failures are a matter of time," Human Factors, vol. 60, no. 4, pp. 465–476, 2018.
[30] O. Carsten, F. C. Lai, Y. Barnard, A. H. Jamson, and N. Merat, "Control task substitution in semiautomated driving: Does it matter what aspects are automated?" Human Factors, vol. 54, no. 5, pp. 747–761, 2012.
[31] I. S. Marcos, Challenges in Partially Automated Driving: A Human Factors Perspective. Linköping University Electronic Press, 2018, vol. 741.
 
Wow! They tested L2 driving assistance in 1995! and in 1996! and in 1993!

Please quote the exact language from any of this list that is actually responsive to my question asking for research behind your statement that "The better a level 2 system gets, the more dangerous it tends to be (due to driver inattention getting worse and worse)."

Are any of these actually responsive to my question? Other than Lex Fridman's recent paper with his actual research with Tesla cars? which suggests the opposite?

Is the "rest of the research" below simply academic theorizing based on the equivalent of using mouse models to theorize about human disease treatment?

As previously posted, this research does NOT entirely focus on L2 driving systems and applies to other types of automation as well...
 
Are any of these actually responsive to my question? Other than Lex Fridman's recent paper with his actual research with Tesla cars? which suggests the opposite?

Umm...did you read that paper? It definitely does not suggest the opposite, nor would Fridman claim that. The paper is very very clear about the limited scope and how the results are unlikely to be able to be extrapolated to more capable systems. In a very specific situation, the 21 drivers in the study seemed to stay engaged and maintain good awareness when using AP. There are a number of possible reasons for this discussed in the paper. I recommend reading it through.


“...the Autopilot dataset includes 323,384 total miles and 112,427 miles under Autopilot control. Of the 21 vehicles in the dataset, 16 are HW1 vehicles and 5 are HW2 vehicles.
The Autopilot dataset contains a total of 26,638 epochs of Autopilot utilization...

…these findings (1) cannot be directly used to infer safety as a much larger dataset would be required for crash-based statistical analysis of risk, (2) may not be generalizable to a population of drivers nor Autopilot versions outside our dataset, (3) do not include challenging scenarios that did not lead to Autopilot disengagement, (4) are based on human-annotation of critical signals, and (5) do not imply that driver attention management systems are not potentially highly beneficial additions to the functional vigilance framework for the purpose of encouraging the driver to remain appropriately attentive to the road…

…Research in the scientific literature has shown that highly reliable automation systems can lead to a state of “automation complacency” in which the human operator becomes satisfied that the automation is competent and is controlling the vehicle satisfactorily. And under such a circumstance, the human operator’s belief about system competence may lead them to become complacent about their own supervisory responsibilities and may, in fact, lead them to believe that their supervision of the system or environment is not necessary….The corollary to increased complacency with highly reliable automation systems is that decreases in automation reliability should reduce automation complacency, that is, increase the detection rate of automation failures….

…Wickens & Dixon hypothesized that when the reliability level of an automated system falls below some limit (which they suggested lies at approximately 70% with a standard error of 14%) most human operators would no longer be inclined to rely on it. However, they reported that some humans do continue to rely on such automated systems. Further, May [23] also found that participants continued to show complacency effects even at low automation reliability. This type of research has led to the recognition that additional factors like first failure, the temporal sequence of failures, and the time between failures may all be important in addition to the basic rate of failure….

….We filtered out a set of epochs that were difficult to annotate accurately. This set consisted of disengagements … [when] the sun was below the horizon computed based on the location of the vehicles and the current date. [So all miles are daytime miles]

Normalizing to the number of Autopilot miles driven during the day in our dataset, it is possible to determine the rate of tricky disengagements. This rate is, on average, one tricky disengagement every 9.2 miles of Autopilot driving. Recall that, in the research literature (see§II-A), rates of automation anomalies that are studied in the lab or simulator are often artificially increased in order to obtain more data faster [19] such as “1 anomaly every 3.5 minutes” or “1 anomaly every 30 minutes.” This contrasts with rates of “real systems in the world” where anomalies and failures can occur at much lower rates (once every 2 weeks, or even much more rare than that). The rate of disengagement observed thus far in our study suggests that the current Autopilot system is still in an early state, where it still has imperfections and this level of reliability plays a role in determining trust and human operator levels of functional vigilance...

...We hypothesize two explanations for the results as detailed below: (1) exploration and (2) imperfection. The latter may very well be the critical contributor to the observed behavior. Drivers in our dataset were addressing tricky situations at the rate of 1 every 9.2 miles. This rate led to a level of functional vigilance in which drivers were anticipating when and where a tricky situation would arise or a disengagement was necessary 90.6% of the time…..

…In other words, perfect may be the enemy of good when the human factor is considered. A successful AI-assisted system may not be one that is 99.99...% perfect but one that is far from perfect and effectively communicates its imperfections….

...It is also recognized that we are talking about behavior observed in this substantive but still limited naturalistic sample. This does not ignore the likelihood that there are some individuals in the population as a whole who may over-trust a technology or otherwise become complacent about monitoring system behavior no matter the functional design characteristics of the system. The minority of drivers who use the system incorrectly may be large enough to significantly offset the functional vigilance characteristics of the majority of the drivers when considered statistically at the fleet level.”
 
My interpretation of the votes:
People are skeptical that Tesla will deliver full self driving in the next 3 years.
People are optimistic that Tesla will deliver something fantastic in limited situations like freeway.
The people who voted 100% have a different definition of full self driving, one in which the driver has to be vigilant, which also seems to be Tesla's definition.
 
My interpretation of the votes:
People are skeptical that Tesla will deliver full self driving in the next 3 years.
People are optimistic that Tesla will deliver something fantastic in limited situations like freeway.

Yeah, that's me. I think Tesla will fall short of true L4 autonomy, but I do think AP3 will deliver self-driving in some situations, and I think it will be pretty darn cool too. Specifically, I do think we will get highway full self-driving. City driving is a lot trickier. I think Tesla will get the car to self-drive around town in most common situations, but the driver will need to monitor.