
AI experts: true full self-driving cars could be decades away because AI is not good enough yet

FSD has shown many improvements. Until last year my car did not react to STOP signs or traffic lights. Now it reads and reacts to speed signs, and I see great videos on YouTube of FSD Beta.

That is not improvement. That capability was already there, but never turned on. It's also easy; that is not AI.

Now, what if someone ran into the sign and it's knocked down? I have been to many areas where stop signs are missing. As a human you can use your own judgement. Some people are not great with that and get into accidents.

BTW, FSD can't tell a flashing yellow from a solid yellow, even on the freeway. Try the 55 South to Newport Beach for anyone local.

FSD still gets speed signs wrong, like the 55 mph limit that applies only to vehicles towing trailers, or 35 mph limits that apply only when children are present.

FSD is not improving fast enough to make autonomy possible. Yes, it can drive itself, but the government will never put something half-baked into people's hands. Sleeping or watching a movie in a car by yourself, and a Tesla robotaxi, are decades away: 10-15 years for the AI and 5-10 years for regulatory approvals.

If you disagree, then show me a video of the car slowing down for jaywalkers in the beach cities. I'm not talking about seeing them and slamming on the brakes; I'm talking about anticipating them, so the car makes the first move. Seeing a surfboard moving between parked cars and slowing down. Seeing people waiting at a crosswalk and slowing to a stop to let them cross. Let's see some real AI behavior instead of FSD just slamming on the brakes.
 
You are completely correct, but this thread is not about holding human drivers responsible; it's about relieving humans (such as a blind man) of the driving task and letting the machine take over, as in a robotaxi without a human driver.

In 2013, Tesla thought that could be done 3 years away.

Then in 2015, it's 2 years away.

Then in 2016, it's 2 years away.

Then in 2017, it's 2 years away.

Then in 2018, it's by the end of the year.

Then in 2019, it's by the end of the year.

Then in 2020, it's 1 year away.

Then in 2021, "pure vision" comes to the rescue at last!

So this thread is one among many trying to revise expectations for autonomous driving, especially for those who paid $10,000: how soon?
This thread is about speculating on the future state, with most (all?) here having zero actual knowledge of what is happening behind the scenes at Tesla. Lots of speculation and back-and-forth arguments.

L2 - L3 on city streets is all I ever expected, and I am confident that that's doable by Tesla. L4 and L5? Future, and certainly not with current Tesla hardware, nor with any other AVs currently out there. Waymo is L4 and likely will be for a very long time due to geofencing. And maybe they'll get a traffic cone solution soon. ;)

It would behoove people to distinguish between the legal treatment of "puffery" vs. enforceable promises.
 
Let's assume for a minute that I have to:
1) Pay close attention for the first minute after I set the destination. This ensures the FSD hardware is working and my entered destination is good; FSD requires me to acknowledge this before fully taking over.
2) Pay close attention for the last minute when I'm arriving at my destination, so I can take over if needed to deal with complex parking.
3) Take over during the drive when FSD runs into an edge case like a closed road or police redirecting traffic. In none of these cases would I need to take over immediately; I'd just get an alert to take over within a minute or so (a sketch of what that handover contract might look like follows below).
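To make that handover contract concrete, here is a minimal sketch of a timed takeover request, the kind of thing an L3 system implies. This is purely illustrative; the class and field names are my own invention, not anything Tesla has published:

```python
import time

class TakeoverRequest:
    """Illustrative L3-style takeover request with a grace period
    (invented names; not anything Tesla has published).

    The system flags an edge case (closed road, police directing
    traffic, arrival at the destination) and gives the driver a
    fixed time budget to resume control instead of demanding it
    instantly.
    """

    def __init__(self, reason: str, grace_seconds: float = 60.0):
        self.reason = reason
        self.deadline = time.monotonic() + grace_seconds

    def seconds_remaining(self) -> float:
        return max(0.0, self.deadline - time.monotonic())

    def expired(self) -> bool:
        return self.seconds_remaining() == 0.0


# Demo with a short grace period so the example finishes quickly;
# the scenario above assumes roughly a minute.
request = TakeoverRequest(reason="road closed ahead", grace_seconds=3.0)
while not request.expired():
    print(f"Please take over: {request.reason} "
          f"({request.seconds_remaining():.0f}s left)")
    time.sleep(1)  # a real system would escalate alerts as time runs out
print("No driver response: fall back to a minimal-risk maneuver (pull over, stop).")
```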

That sounds a lot like L3.

And what do I get for this?
I could read, text, browse, or watch a video. The value would be enormous, and waiting for Level 5 with no driver doesn't have to be the end-all-be-all so many owners are fixated on. And I dare say they are a minority. Tesla's revenue upside in this scenario is very significant and would be a major advantage, at least in the near term, over other carmakers.

And this seems doable as well. I don't really care about L5, although I realize many people do. Like most things worth waiting for, good things often come incrementally.

You are right that Tesla does not need to achieve L5 in order to give tremendous value to customers. L3 or even hands-off L2 would still be of huge value to most customers.
 
...the winner will be the one that will tolerate the most risk of accidents...

Uber took the most aggressive and riskiest path by not properly training its backup/safety driver, such as not telling them that extra care was needed on that drive. As the crash investigation found, "at 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision." However, "emergency braking maneuvers" were "not enabled." Even worse, "the system is not designed to alert" the safety driver about an imminent collision.

The autonomous vehicle division was touted as the "future" for Uber, but it is no more: it was sold to Aurora.
 
Since A.I. isn't good enough, the winner will be the one that tolerates the most risk of accidents. In other words, the one that will tolerate accidents has a big advantage over those that won't. Companies like Waymo and Cruise will not tolerate risk of accidents. Elon/Tesla has demonstrated the ability to tolerate a lot of risk.

This is where I disagree. You say Waymo and Cruise won't tolerate risk, but Waymo is ahead of everybody in actually deploying autonomous vehicles. Tesla, for all its supposed risk taking, has yet to deploy any autonomous vehicles on public roads. So the ability to accept risk does not appear to be a good indicator of who is leading the FSD race.

Furthermore, safety standards for AVs are being written. For example, IEEE P2846 is due to be published at the end of this year. SAE and others have already released best practices for AV safety. It is only a matter of time before NHTSA adopts these AV safety standards and makes them mandatory. When that happens, AV companies that took too much risk will essentially be disqualified because their AVs won't meet the standards, and companies that focused on safety will be the only ones allowed to actually deploy AVs on public roads.
 
Risk taking might seem like a good strategy because you think it will let you develop your FSD faster, but it is a short-term strategy at best. Too many accidents and your FSD will get investigated or even shut down, delaying development more than if you had just focused on safety from the beginning. Too many accidents and you also lose public trust; very few people will want to ride in an AV that is perceived to be unsafe, and then you will go bankrupt. Focusing on safety is the right long-term approach: you build public trust, you get regulator approval faster, and so you deploy your FSD with fewer delays and more customers.

And for all the talk about Tesla taking risks, the fact is that Tesla has been pretty conservative, IMO. Tesla has increased AP nags at times to keep drivers paying attention. Tesla has rolled out features slowly, requiring stalk confirmations. Tesla has been slow to expand the FSD Beta. And Tesla has kicked owners out of the FSD Beta program when they were deemed irresponsible.

And yes, there have been some highly publicized AP accidents but Tesla has been able to hide behind the L2 defense. I would guess that if Tesla actually deployed cars with no drivers and they got into a few deadly accidents, Tesla's FSD would be in big trouble.
 
Been in and out of the AI community for 30+ years ... there are two sides to the AI spectrum: theoretical and practical. I much prefer scrappy approaches that focus on "smarter and smarter", even if the design is more con artist than clean.

What many miss about the term "artificial intelligence" is that it doesn't need to match "human intelligence." It just needs to be better than "human stupidity."

Another point being missed ... Tesla's fleet of driving cameras gives them an enormous advantage. Tagging and analyzing disruptions is a better approach than designing thinking machines.
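Tesla engineers have described trigger-based fleet data campaigns in public talks, and the rough shape of such a trigger is easy to sketch. Everything below is my own guess with invented names, not Tesla's actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class FrameEvent:
    """One moment of driving as the car sees it (fields invented)."""
    speed_mps: float
    driver_braked_hard: bool
    planner_disagreed_with_driver: bool

def should_upload_snapshot(event: FrameEvent) -> bool:
    """Fleet 'trigger': flag disruptions worth tagging and analyzing
    instead of streaming everything. A disagreement between what the
    planner wanted and what the human actually did is a cheap proxy
    for an interesting edge case."""
    return event.driver_braked_hard or event.planner_disagreed_with_driver

# Only flagged clips go back for labeling; millions of boring miles
# never leave the car.
events = [
    FrameEvent(28.0, False, False),  # uneventful highway cruising
    FrameEvent(12.0, True, False),   # hard brake -> upload
    FrameEvent(15.0, False, True),   # planner/human disagreement -> upload
]
print([should_upload_snapshot(e) for e in events])  # [False, True, True]
```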
 
Tesla will solve fsd to better than human accident stats this year. Although robotaxis likely won’t be deployed until 5-10x human stats.

You should state that this is your personal opinion. Don't state it so emphatically as if it is fact. You have no evidence for making such a prediction.

IMO, it is a ridiculous prediction. Tesla would need to somehow solve FSD AND collect billions of miles of data to prove safety and do both things all in the next 6 months. IMO, that is highly unlikely.
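For a sense of the scale involved, here's a back-of-the-envelope version of the argument (rough numbers; the "rule of three" gives an approximate 95% upper confidence bound of 3/N on a rate when zero events are seen in N trials):

```python
# Back-of-the-envelope: fatality-free miles needed before a ~95% upper
# confidence bound on the fatality rate beats the human rate. With zero
# events in N miles, the "rule of three" puts the bound at 3/N.

human_fatality_rate = 1 / 100_000_000   # ~1 fatality per 100M miles (rough US figure)

miles_needed = 3 / human_fatality_rate
print(f"Fatality-free miles needed: {miles_needed:,.0f}")   # 300,000,000

# Even at an optimistic 1 million autonomous miles per day fleet-wide,
# that is most of a year of flawless driving:
print(f"Days at 1M miles/day: {miles_needed / 1_000_000:.0f}")  # 300
```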
 
Tesla will solve fsd to better than human accident stats this year. Although robotaxis likely won’t be deployed until 5-10x human stats.

FSD isn't about perfection, as many people want to believe.

What's good about the machine is: it's consistent, repeatable, and reliable.

Humans can be unreliable and may make mistakes even in a simple task such as adding numbers. It helps to count twice, or with another person, to detect any discrepancies.

When we switch to adding machines, we don't want to pay for something that also makes simple math mistakes. If the machine does make a mistake, the manufacturer needs to include an error-checking system that catches and corrects it internally, seamlessly, before showing the result to humans.
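The "error check before showing the result" idea is just redundancy plus comparison. A toy sketch of the pattern (nothing to do with any real adding machine or with Tesla's stack):

```python
def checked_sum(values: list[float]) -> float:
    """Compute the same result two (semi-)independent ways and only
    release it if they agree -- the internal error check described
    above. Toy pattern only."""
    forward = sum(values)
    backward = sum(reversed(values))   # second computation path
    if abs(forward - backward) > 1e-9:
        raise RuntimeError("internal mismatch; recompute before showing the result")
    return forward

print(checked_sum([1.5, 2.25, 3.0]))   # 6.75
```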

I think autonomous driving doesn't have to be perfect, but at the least it has to prove that it is reliable at collision avoidance (repeatability and consistency). I think we may have to accept imperfect intelligence for a while: the car doesn't know what to do in a new scenario (but it never collides with any obstacles), so it shuts down and requires a rider to call for help.
 
I think autonomous driving doesn't have to be perfect, but at the least it has to prove that it is reliable at collision avoidance (repeatability and consistency).

Correct. In the paper entitled "AVSC Best Practice for Metrics and Methods for Assessing Safety Performance of Automated Driving Systems (ADS)", the SAE recommends that autonomous vehicles be able to maintain a safe longitudinal and lateral distance from other objects and have a good OEDR (object and event detection and response) reaction time. These two metrics are key to avoiding collisions.
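The AVSC paper does not prescribe a single formula, but one published formalization of "safe longitudinal distance" is Mobileye's RSS model. A sketch of that calculation, with illustrative parameter values that are mine, not taken from the AVSC paper:

```python
def rss_safe_longitudinal_distance(
    v_rear: float,                 # ego (rear) speed, m/s
    v_front: float,                # lead vehicle speed, m/s
    response_time: float = 0.5,    # rho: ego response time, s
    a_max_accel: float = 2.0,      # worst-case ego acceleration during rho, m/s^2
    b_min_brake: float = 4.0,      # braking the ego is guaranteed to apply, m/s^2
    b_max_brake: float = 8.0,      # hardest braking the lead car might do, m/s^2
) -> float:
    """Mobileye's RSS minimum safe following distance. Parameter
    values are illustrative, not taken from the AVSC paper."""
    v_after_response = v_rear + response_time * a_max_accel
    d = (v_rear * response_time
         + 0.5 * a_max_accel * response_time ** 2
         + v_after_response ** 2 / (2 * b_min_brake)
         - v_front ** 2 / (2 * b_max_brake))
    return max(0.0, d)

# Ego at 25 m/s (~56 mph) following a lead car also doing 25 m/s:
print(f"{rss_safe_longitudinal_distance(25.0, 25.0):.1f} m")   # ~58.2 m
```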
 
Is it an accident if it runs into a pothole and damages the car? If so, then I doubt this is true, although avoiding potholes sounds like one of the easier automated driving tasks. Similarly, it will be interesting to know how many curbs are being hit.

Nah, I'd define an accident as any lateral collision with another road object. Scraping a curb probably wouldn't be included, unless the car jumps the curb.
 
Another point being missed ... Tesla's fleet of driving cameras gives them an enormous advantage. Tagging and analyzing disruptions is a better approach than designing thinking machines.

Tesla's data-collection advantage has been touted for years, and in many threads too.

2016:

The Tesla Advantage: 1.3 Billion Miles of Data (paywall)

But despite the advantage, the 2017 Model X still slammed into the concrete median divider, even though Tesla had touted its vast data collection capability in its 2018 blog post:

"Our data shows that Tesla owners have driven this same stretch of highway with Autopilot engaged roughly 85,000 times since Autopilot was first rolled out in 2015 and roughly 20,000 times since just the beginning of the year, and there has never been an accident that we know of. There are over 200 successful Autopilot trips per day on this exact stretch of road."

The thought that "Tesla's fleet of driving cameras gives them an enormous advantage" is a nice one, but then why was it not put into practice to prevent another fatal crash, of a 2018 Model 3 in Florida in 2019?

How do we know that thought is being put into practice now, in June 2021, in order to avoid another Tesla collision?
 
Is Waymo not already working in Chandler, AZ? Will it take them decades to roll that out for wide release — or is their AZ thing just smoke and mirrors (I haven’t followed closely)?

what do you think self driving tech will look like ten years from now?

No, Waymo is not smoke and mirrors. But their vehicles are Level 4 with some serious restrictions. They also aren't lashing themselves to the wheel of "AI" in a lot of their work, either. It's a ton of traditional code, which is one of the major things that researchers in the space say is going to be necessary, whereas the CEOs receiving tens and hundreds of millions in compensation are hyping full-on machine learning nonsense.

All solutions will necessarily rely on "computer vision", though how they solve it may drastically differ over time and between competitors.

In 10 years time, I suspect self driving tech will look very similar to what it looks like today, which is very similar to what it looked like 10 years ago, and 10 years before that. In fact, they are all extremely similar to CMU's NavLab from 1986 which used computer vision based on radar, cameras, and lidar. Honestly, I think if more people knew the history of autonomous vehicles they'd be much less hyped about all of this.



The key to this would be, initially, deployment of network receivers in AVs, and progressively-improving beacons/sensors/repeaters to feed the mesh network.

The entire concept of X2V and V2X is dead on arrival. Anything that requires a signal from a roadside device is just a complete non-starter. If your car takes behavior cues from some sensor network rather than the information presented to it by the scene it is in, then it is susceptible to trivial attack. I can sit on the roadside with a backpack and either jam the signal telling your car to stop, or I can send messages to all of the cars around me to tell them all to stop. I mean, just look at all of the nefarious stuff happening on the internet, and now think if you're willing to put your life in the hands of some network maintained by the lowest bidder?
 
...
The entire concept of X2V and V2X is dead on arrival. Anything that requires a signal from a roadside device is just a complete non-starter. If your car takes behavior cues from some sensor network rather than the information presented to it by the scene it is in, then it is susceptible to trivial attack. I can sit on the roadside with a backpack and either jam the signal telling your car to stop, or I can send messages to all of the cars around me to tell them all to stop. I mean, just look at all of the nefarious stuff happening on the internet, and now think if you're willing to put your life in the hands of some network maintained by the lowest bidder?
Sorry, but this is simply an unrealistic objection, proposing hypothetical nefarious and technically sophisticated actors dedicated to the sabotage of infrastructure that is actually much harder to disrupt than what exists today. The reason such a fantasy has a chance of scaring anyone is the same reason one can create FUD around any new technology - EVs, AVs, air traffic control, cellular communications, the internet, you name it, past, present and future: opacity of the technical details, an instinct to protect the familiar, and natural human wariness of change (except, it seems, for poorly-conceived yet popular social/economic revolutions in the name of weaponized justice).

If you want to disrupt transportation infrastructure, to whatever diabolical and presumably profitable end, then there are thousands of easier and more frightening ways to do it, right now today, that don't involve
  • waiting for development of a modern interconnected and BTW highly localized network,
  • spoofing the protocol,
  • breaking the ID encryption and substituting your own hashkey-correct and somehow untraceable fake ID,
  • intercepting and somehow cancelling the legitimate network broadcast traffic,
  • broadcasting your evil fake traffic that will be logged, localized, and recorded by the very cameras and sensors that are the intended spoof-targets,
  • and most unreasonably, coordinating this evil hack-attack on a massive scale so as to accomplish more than an annoying tie-up at some interchange.
A properly-constituted V2X network infrastructure would be the very definition of robustness, fault tolerance and de-localized redundancy. Recent hacks in the news, you'll note, exploited central points of failure - the roots and trunks of distribution networks, not the twigs and leaves. A snot-nosed hacker or anarchist kid with a backpack and a Guy Fawkes mask isn't going to accomplish very much by fouling an intersection this way, and it would quickly become apparent as a pretty stupid plan with a high barrier to entry and low realizable gain. Kind of a self-solving problem, really.
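On the spoofing bullets specifically: broadcast messages can be signed, so a backpack attacker can jam but cannot forge. A minimal sketch of the idea using Ed25519 (the real DSRC/C-V2X security stack, IEEE 1609.2, uses certificate-based ECDSA, so treat this as the flavor of the mechanism, not the actual protocol):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A roadside unit signs each broadcast; cars verify against its public
# key (a real deployment chains 1609.2-style certificates to a CA).
rsu_key = Ed25519PrivateKey.generate()
rsu_public = rsu_key.public_key()

message = b"signal_phase=RED intersection=main_and_5th t=1623063226"
signature = rsu_key.sign(message)

# Receiving car: accept only messages that verify.
try:
    rsu_public.verify(signature, message)
    print("accepted:", message.decode())
except InvalidSignature:
    print("rejected: forged or tampered broadcast")

# An attacker without rsu_key can still jam or replay, but cannot mint
# a valid "signal is green" message; replay is countered by timestamps.
```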

Just gaze out your window and think for a minute about all the damage that you or any other clever chap from this forum could do to your town's infrastructure and citizens, from pranking to mayhem to murder, if you put your mind to it. Civilization and its technology are always fragile in the micro but robust in the macro, and when they fail it's for much deeper flaws than technical sabotage.

And no I will not list examples or post videos of how to mess things up much easier, cheaper and scarier than your V2X network-hacking idea. (As I don't work for Consumer Reports ;))

I really try hard to engage everyone here with a friendly, open-minded and constructive discussion of current reality and future (informed) predictive hypotheses. But I'll admit that this ill-considered "non-starter-dead-on-arrival" dismissal, of a very sensible though surely incomplete concept, really kind of frosted me. You could have asked me how the network could be made resistant to pranking or sabotage; I think there are good answers but that would have required a lot of work or a disclaimer because it's not my field. Too late now.

Instead I'll throw it back to you, to justify why the V2X network would be more vulnerable and/or a juicier target than attacks on our communications, food-distribution, water supply or power grid.
 
If your car takes behavior cues from some sensor network rather than the information presented to it by the scene it is in, then it is susceptible to trivial attack.
I forgot to mention, aside from responding to the trivial-attack point: I never proposed, and I don't think anyone would, that the V2X info is "rather than" - i.e., supersedes - camera vision. The point is that cars (and their drivers, even if not robots) can always benefit from a helpful scouting report: what's up ahead, is there something in the tunnel, is there a toddler behind that parked car, is there something odd I don't recognize, so let's calmly slow down before we get right up on it!

The issue of exactly how to use and prioritize various bits of received information, possibly conflicting with other bits, or with the latest map or with our real-time view, is of course a major topic but beyond the present scope. Let's just understand that we won't fly through a light that reports green if the camera says it's red, nor if other X2V info tells us there are hidden cars or vulnerable pedestrians.
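As a toy illustration of that priority rule (entirely hypothetical logic, just encoding "V2X may add caution but can never override the camera toward go"):

```python
from enum import Enum

class Signal(Enum):
    RED = "red"
    YELLOW = "yellow"
    GREEN = "green"
    UNKNOWN = "unknown"

def fused_decision(camera: Signal, v2x: Signal, v2x_hidden_hazard: bool) -> str:
    """Toy fusion rule: V2X reports may add caution (stale lights,
    hidden cars, pedestrians) but can never override the camera
    toward 'go'. Hypothetical logic, not any shipping stack."""
    if camera == Signal.RED:
        return "stop"                    # the camera's red always wins
    if v2x == Signal.RED or v2x_hidden_hazard:
        return "slow_and_verify"         # scouting report says be careful
    if camera == Signal.GREEN:
        return "proceed"
    return "slow_and_verify"             # yellow/unknown: stay conservative

print(fused_decision(Signal.GREEN, Signal.GREEN, False))  # proceed
print(fused_decision(Signal.GREEN, Signal.RED, False))    # slow_and_verify
print(fused_decision(Signal.RED, Signal.GREEN, False))    # stop
```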

The impetus for the whole discussion is in the thread title. If we stipulate that AI will not soon achieve human or super-human reasoning and extrapolation when presented with unforeseen scenarios, then is there a path to reliable autonomy? I think yes, by giving the admittedly dull-witted robot a super-human information advantage, in the form of V2V/V2X reports on the things that cannot be seen.

Further, I would argue that this concept should be developed even if its purpose had nothing to do with enabling fully autonomous cars. In these discussions about the coolness of, and technical progress toward, L5, we (myself included) too easily forget that traffic accidents are a leading cause of injury, death and economic loss. So what if human drivers remain better anticipators than robots? They aren't good enough to make tragedy rare, and today we have the know-how to help them out - a potential March of Nines improvement, by making awareness better no matter who's at the wheel. Personally, I still think safe AVs will be an enabled result, but I'm saying there's ample justification even without that.
 
Image recognition and simple patterns are one thing; understanding the whole context is another, and that is what's lacking.

I would think they could do what Microsoft and others do to describe a picture: What’s that? Microsoft’s latest breakthrough, now in Azure AI, describes images as well as people do - The AI Blog



They could then use this to create context, feeding into a "classical" piece of code, that determines what to do.
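A rough sketch of what wiring that up could look like, using Azure Computer Vision's Describe Image REST endpoint. The endpoint URL and key below are placeholders, and the API version should be checked before relying on this:

```python
import requests

# Placeholders -- substitute your own Cognitive Services resource and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<subscription-key>"

def describe_image(image_url: str) -> str:
    """Ask Azure Computer Vision for a natural-language caption, which
    'classical' driving code could then consume as scene context."""
    resp = requests.post(
        f"{ENDPOINT}/vision/v3.2/describe",
        params={"maxCandidates": 1},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
        timeout=10,
    )
    resp.raise_for_status()
    captions = resp.json()["description"]["captions"]
    return captions[0]["text"] if captions else "no caption"

# A caption like "a group of people walking across a street" could then
# map, in classical code, to "yield to pedestrians".
# print(describe_image("https://example.com/dashcam_frame.jpg"))
```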

Vehicle-to-vehicle communication is dead in the water. No need for it; it brings more problems (security, etc.) than it solves.
 
In 10 years time, I suspect self driving tech will look very similar to what it looks like today, which is very similar to what it looked like 10 years ago, and 10 years before that. In fact, they are all extremely similar to CMU's NavLab from 1986 which used computer vision based on radar, cameras, and lidar. Honestly, I think if more people knew the history of autonomous vehicles they'd be much less hyped about all of this.

It would be easy to look at cars with cameras, radar and lidar from the '80s and from today and think that autonomous driving has not really changed all that much. But that would be wrong, IMO. The software under the hood is radically more sophisticated: the computer vision is better, the algorithms are more advanced, and autonomous vehicles today can do a lot more. I would also argue that the sensors themselves are better engineered; the cameras, radar, and lidar are of higher quality today than they were in the '80s.
 
You should state that this is your personal opinion. Don't state it so emphatically as if it is fact. You have no evidence for making such a prediction.

IMO, it is a ridiculous prediction. Tesla would need to somehow solve FSD AND collect billions of miles of data to prove safety and do both things all in the next 6 months. IMO, that is highly unlikely.
Why the attack? Either the vision beta is good or it isn't; if it works, then we are at a state better than humans. It is not a high bar, it is a low, low bar - such a low bar that 8.x is probably already there. And the second statement admits uncertainty.
 