Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

For anyone who wants to know why we *won't* have self-driving cars soon...


neroden

Model S Owner and Frustrated Tesla Fan
Apr 25, 2011
Ithaca, NY, USA
This says it better than I can. Cosma Shalizi is brilliant as always:

Revised and Extended Remarks at "The Rise of Intelligent Economies and the Work of the IMF"

Key quotes:
This is that almost everything people are calling "AI" these days is just machine learning, which is to say, nonparametric regression. Where we have seen breakthroughs is in the results of applying huge quantities of data to flexible models to do very particular tasks in very particular environments. The systems we get from this are really good at that, but really fragile, in ways that don't mesh well with our intuition about human beings or even other animals. One of the great illustrations of this is what are called "adversarial examples", where you can take an image that a state-of-the-art classifier thinks is, say, a dog, and by tweaking it in tiny ways which are imperceptible to humans, you can make the classifier convinced it's, say, a car. On the other hand, you can distort that picture of a dog into something unrecognizable by any person while the classifier is still sure it's a dog.

Rodney Brooks, one of the Revered Elders of artificial intelligence, put it nicely recently, saying that the performances of these systems give us a very misleading idea of their competences.

In the meanwhile, though, lots of people will sell their learning machines as though they were real AI, with human-style competences, and this will lead to a lot of mischief and (perhaps unintentional) fraud, as the machines get deployed in circumstances where their performance just won't be anything like what's intended.
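The adversarial-example fragility the quote describes can be reproduced even on a toy linear classifier. Here's a minimal sketch (the "classifier", its weights, and the perturbation size are all made up for illustration, not anyone's production model): many tiny per-pixel nudges, each imperceptible on its own, add up to flip the decision.

```python
import numpy as np

d = 1000
w = np.full(d, 0.1)          # toy classifier weights
x = np.full(d, 0.5)          # "image" with pixel values in [0, 1]
b = 2.0 - w @ x              # bias chosen so the clean logit is +2.0

def predict(z):
    """Return P(class = "dog") under a toy logistic classifier."""
    return 1.0 / (1.0 + np.exp(-(w @ z + b)))

print(predict(x))            # ~0.88: confidently "dog"

# FGSM-style attack: nudge each pixel by eps against the gradient.
eps = 0.03                   # 3% of the pixel range, visually negligible
x_adv = x - eps * np.sign(w)

print(predict(x_adv))        # ~0.27: now classified as "not dog"
```

The point is that the per-pixel change is tiny (3% of the dynamic range), but because it is chosen adversarially across all 1000 dimensions, the aggregate effect on the decision is huge.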
 
My take is that the biggest limitation on self-driving cars is the infrastructure and regulatory side. So what's going to happen is we'll only see self-driving in specific whitelisted areas, and the only thing that really counts as self-driving is going to be fleet vehicles.

We seem to be lacking in a lot of critical areas:

We don't have intelligent stop lights that communicate with cars
We don't have roads that communicate with cars (the speed limit, construction, etc.)
We don't seem to have a really good publicly accessible database of HD maps that's constantly updated

Instead we have competing standards and a patchwork of regulation.

We have no clear path towards self-driving.

AI is often argued to be the solution: that instead of fixing the infrastructure/regulatory problem, we'll have some super-robust AI/software combination that will magically drive like a human.

Why do we care if an AI might misinterpret a sign? Why is it even reading signs? Even humans fail at noticing signs and correctly interpreting the data on them, especially school-zone signs.

Driving a Model 3 with EAP is exciting, but it's also a futile exercise in what happens without any direction in moving towards self-driving.

The car has no idea what the speed limit is in lots of areas
The car doesn't use HD maps to prevent it from doing stupid things (like diving for the exit)
The car struggles to determine the distance between itself and the car next to it. The only thing that can tell it that distance is the ultrasonics, and they're not good if the other vehicle is far away or sits high up.
The car doesn't have any sensors to reliably detect a vehicle coming in from the rear at a large speed differential.

We need to stop looking at AI as EVERYTHING, and instead look for a more balanced solution, where we use AI to augment a self-driving vehicle rather than be the entire solution.
 
Waymo has self driving cars.
I would title the post 'Why Musk's shortcut to self driving cars using NN and "narrow AI" ain't gonna work'

Why do we care if an AI might misinterpret a sign? Why is it even reading signs? Even humans fail at noticing signs and correctly interpreting the data on them, especially school-zone signs.

Because actions by a self-driving car are probability-based. More inputs increase the odds of a correct action.

Good-enough FSD will be complex at first: HD maps AND NN sign reading.
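The "more inputs" point can be made concrete with a toy calculation: if each independent input (sign-reading NN, HD map, etc.) is right with probability p, fusing several by majority vote raises the odds of a correct action. The numbers below are purely illustrative, not anyone's measured sensor accuracy.

```python
from math import comb

def majority_correct(p, n):
    """P(majority of n independent inputs, each correct w.p. p, is correct)."""
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

print(majority_correct(0.8, 1))  # 0.8    -- a single input
print(majority_correct(0.8, 3))  # ~0.896 -- three fused inputs
print(majority_correct(0.8, 5))  # ~0.942 -- five fused inputs
```

This only works if the inputs fail independently, which is exactly why adding a qualitatively different source (a map, in addition to vision) helps more than a second camera would.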
 
I should note that self-driving trains have been a solved problem since the 1970s and have been implemented in many metro systems. The difference is that they don't have to routinely deal with pedestrians, deer, unmarked pavement, street festivals, road construction (rail construction is done under much more stringent and rigid procedures), etc. etc. etc.

We can probably make isolated, carefully maintained superhighways for "self driving mode".
 
Interesting headline, but we have all heard things like this before from people who can't visualize ahead... even ones who are really smart. I guess the key word in the headline is "soon". While I agree there are challenges and technical roadblocks at present, history shows that technology moves a lot faster than people think.

A few of the odd quotes that come to mind, from across a long stretch of history...
"There is no reason for an individual to have a computer in their home" - former president of Digital Equipment Corp.
"I think there is a world market for maybe 5 computers" - I believe it was Thomas Watson, chairman of IBM.
"Heavier than air flying machines are impossible" - President of the Royal Society.
"There is practically no chance communications space satellites will be used to provide better telephone, telegraph, television, or radio service inside the United States" - FCC commissioner.
"The subscription model of buying music is bankrupt." - Steve Jobs!
And my favorite....... "Everything that can be invented, has been invented." - Commissioner of US patent office.

I'm sure you can think of lots more, but the point is that the passing of time usually proves many predictions to be dumb. ;)
 
And Elon uses the example of AI learning to play video games. After a painful amount of time the AI gets to the point where it can tie a human, but by the following afternoon, it can beat all humans. All it needs is the example material, and that's what Tesla is getting when we disengage and submit a bug report.

-Randy
 
The A12X Bionic chip exists in Apple showrooms right now. It packs 10Bn transistors into 122 mm².

Consider saying "Elon has got this" 10Bn times over the next 950 years before writing off the power of clever people and learning algorithms all over the world trying to get those 10Bn transistors to stop a car at a red light.
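For what it's worth, the arithmetic in that image checks out: 10 billion utterances spread over 950 years works out to roughly one every three seconds.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # ~3.156e7 seconds
utterances = 10e9                        # 10 billion repetitions
years = 950

seconds_each = years * SECONDS_PER_YEAR / utterances
print(round(seconds_each, 1))            # ~3.0 seconds per utterance
```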
 
Waymo has self driving cars.
I would title the post 'Why Musk's shortcut to self driving cars using NN and "narrow AI" ain't gonna work'
No, they don't. Waymo has trams, which look like cars and ride on invisible rails. The rails are invisible to you, but they are there, and Waymo cars can drive only where the "rails" exist and are up to date. Waymo cars have serious problems at intersections and with clueless, badly driving humans. Like trams. And of course Waymo cars drive at the speed of trams.
Because actions by a self driving cars are probability based. More inputs increase the odds of a correct action.

Good enough FSD will be complex at first. HD maps AND NN sign reading.
HD maps are needed for route planning, arguably the most developed and best-understood part of the self-driving stack.
It is "boring".
Tesla can and does read signs. Not all of them, because the signs used (especially in the US) are "arbitrary" and far too often deviate from the standard specifications. "Too much work to comply" at the moment.
Fuzzy algorithms are much more than "probability"-based.
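Route planning over a map is "boring" in the sense that it reduces to classic shortest-path search over a weighted road graph. A minimal sketch using Dijkstra's algorithm on a made-up toy graph (real planners add traffic, lane-level geometry, and much richer cost models):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted adjacency dict."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Walk back from the goal to reconstruct the route.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Hypothetical intersections with travel times in minutes.
roads = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}
print(shortest_path(roads, "A", "D"))  # (['A', 'C', 'B', 'D'], 8.0)
```

The hard part of self-driving isn't this search; it's producing and maintaining the graph (the HD map) and perceiving everything the map can't tell you.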
 
No, they don't. Waymo has trams, which look like cars and ride on invisible rails. The rails are invisible to you, but they are there, and Waymo cars can drive only where the "rails" exist and are up to date. Waymo cars have serious problems at intersections and with clueless, badly driving humans. Like trams. And of course Waymo cars drive at the speed of trams.

HD maps are needed for route planning, arguably the most developed and best-understood part of the self-driving stack.
It is "boring".
Tesla can and does read signs. Not all of them, because the signs used (especially in the US) are "arbitrary" and far too often deviate from the standard specifications. "Too much work to comply" at the moment.
Fuzzy algorithms are much more than "probability"-based.

I prefer to travel by clapping my hands three times and being instantly transported to my destination. But there is little evidence that my desired approach is happening any time soon.

When is Summon coming out of beta? When is Tesla going to become accountable for accidents at 2 mph while using Summon?
 
I prefer to travel by clapping my hands three times and being instantly transported to my destination. But there is little evidence that my desired approach is happening any time soon.
Yeah, I want that. Clap on... drive me home. Clap off... stop driving. Sounds like a commercial I've seen hundreds of times :D Hey, why is this so difficult? As long ago as 1939, Dorothy was able to click her ruby slippers 3 times and be instantly transported back to Kansas :eek:
Ok, apologies for off topic. Just thought you needed a laugh.
 
I prefer to travel by clapping my hands three times and being instantly transported to my destination. But there is little evidence that my desired approach is happening any time soon.

When is Summon coming out of beta? When is Tesla going to become accountable for accidents at 2 mph while using Summon?
Tesla would say "when it is ready".
I say: when legislation covers it. The same applies to self-driving, full auto-cruise, etc. There are plenty of cases that would break the existing standard for liability. So for now it sticks to beta; don't expect anything else. Whatever happens, it is the owner's fault for now.
Summon works pretty well, on the level of Autopilot: you're still the "driver" and it is your job to anticipate spatial collisions. The Model S is very much fine, and I don't believe the Model 3 is any worse.
Tesla's existing sensor set is incapable of detecting relatively big objects ~80 cm and higher. The radar doesn't detect radio-transparent objects, i.e. it is only good for detecting watery objects (humans) and metal ones (cars). For proper optical detection they need sufficiently fast systems; the first variant will be introduced next spring.

As has already been said many, many times:
It was fine to hear naysayers in 2008. 2012 was also still OK. By 2014, and definitely by 2016, "it is impossible" was getting boring and increasingly irritating. Now, especially after the February and May-June events, "nay" becomes ridiculous.
There is no question they can do it. The technical problems are of an "experience" character: the banal technical complexity of numerous exceptions, edge cases, and copious "error management", where the autopilot has to detect and correct whatever mistakes were made by the ..tards building and managing roads, and of course the ....tards driving badly serviced cars. When experienced people (at MIT, for example) count Tesla's miles, they do it for a reason: because that is the only way to solve it.
 
Self-driving cars are coming. My bet is that legal liability issues in Western countries will mean that non-Western countries have them first. Like Singapore or Japan.

But what is a self-driving car? There are probably 50 discrete levels of capability between one definition of self-driving and another.

If Singapore had a fleet that could travel the entire country but not go to Malaysia, is that self-driving?

If Singapore had a fleet of self-driving cars that lacked steering wheels but could be remote-controlled by humans in edge cases, is that self-driving?

If Nissan sold self-driving Leafs in Japan that were limited to the routes the elderly required, is that self-driving?
 
Yeah, it seems like none of you have actually read Cosma Shalizi's piece. Pearls before swine. I will note that Shalizi teaches both so-called "deep learning" in the CS department and statistics in the statistics department... he knows what he's talking about.

Nonparametric regressions aren't magic.
 
history shows that technology moves a lot faster than people think.
(insert bunch of quotes, many of them out of context)

It is amusing that so many people like to cite such quotes. While some of them are genuinely stupid, others... aren't.

For example, I am quite sure that Thomas Watson said his "I think there is a world market for maybe 5 computers" thing at a time when the world market for computers WAS in the single digits. Unless the sentence appeared in a piece titled "How Thomas Watson imagines the world in 50 years", this quote has no right to be on the list.

I'm sure you can think of lots more, but the point is that the passing of time usually proves many predictions to be dumb. ;)
Indeed. There were many predictions of AI being just around the corner over the last few decades. Nothing ever came of them, and nothing will any time soon, since, surprise, surprise, AI is harder than people think. Even "AI" that is not actually AI at all.

And I don't understand what you are arguing against here. No one here says "self-driving is impossible" or "in 100 years" (besides people like dondy protesting against that strawman, of course).
 
I worry that the advantages of AI are so great that to make AI perform reliably, we humans will be forced to adapt.

Basically, making AI understand irrational human behavior will be deemed too costly, and the preferred way will be to change or outlaw certain human behaviors instead.

That said, driving rules have been dumbed down so much that I think AI has a high chance of succeeding sooner rather than later. It may not be able to drive in every situation, but it will be good enough.
 
It is amusing that so many people like to cite such quotes. While some of them are genuinely stupid, others... aren't.

For example, I am quite sure that Thomas Watson said his "I think there is a world market for maybe 5 computers" thing at a time when the world market for computers WAS in the single digits. Unless the sentence appeared in a piece titled "How Thomas Watson imagines the world in 50 years", this quote has no right to be on the list.

There is no clear evidence that he ever said that. It _might_ have come from someone reporting that IBM had only anticipated demand for 5 copies of a particular computer design, conflated with other people saying that a small number of computers could provide all the necessary calculations for the world.

Indeed. There were many predictions of AI being just around the corner over the last few decades. Nothing ever came of them, and nothing will any time soon, since, surprise, surprise, AI is harder than people think. Even "AI" that is not actually AI at all.

And I don't understand what you are arguing against here. No one here says "self-driving is impossible" or "in 100 years" (besides people like dondy protesting against that strawman, of course).

If you think of AI only as general intelligence, then I think AI will never be achieved.
But that's a rather useless definition.

The world has lots of tasks it wants performed cheaper and better than humans can do them. Some of those tasks require intelligence, and it would be great if we could replace or improve upon the expensive humans we need to do them.

Do we care whether a taxi driver can play chess?
 
Yeah, it seems like none of you have actually read Cosma Shalizi's piece. Pearls before swine. I will note that Shalizi teaches both so-called "deep learning" in the CS department and statistics in the statistics department... he knows what he's talking about.

Nonparametric regressions aren't magic.
He teaches at Carnegie Mellon. It's not exactly the best place in the US to learn NNs. He comes from theoretical physics, and his experience was in genetic algorithms. A dead horse for practical NNs indeed.
The only valid points he makes are that NNs are indeed not AI, and that end-to-end NNs are, by definition, valid ONLY within the data sets they are trained on. That is of course true of any mathematical model whose parameters or functions are derived from data (i.e. all practical models), and I would be really happy if people in academia would hammer this truism into all students' heads. They don't.
No cookie here.
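The "valid only within the training set" point is easy to demonstrate with the simplest nonparametric regressor, k-nearest neighbours: inside the training range it tracks the target closely, but just outside it the model can only parrot the nearest edge values. A toy sketch (the function and ranges are arbitrary):

```python
import numpy as np

def knn_predict(x_train, y_train, x, k=3):
    """k-nearest-neighbour regression: average the k closest training labels."""
    idx = np.argsort(np.abs(x_train - x))[:k]
    return y_train[idx].mean()

x_train = np.linspace(0.0, 3.0, 300)
y_train = np.sin(x_train)

# Inside the training range the fit is excellent ...
print(abs(knn_predict(x_train, y_train, 1.5) - np.sin(1.5)))  # tiny error

# ... but outside it, the model just repeats the nearest edge values.
x_new = 4.7124  # 3*pi/2, where sin = -1
print(knn_predict(x_train, y_train, x_new))                   # ~0.15, not -1
```

A deep net is vastly more flexible than 3-NN, but the failure mode is the same in kind: nothing in the training data constrains its behaviour outside the distribution it was trained on.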
 
Thanks, dondy, for agreeing with me.

Obviously, what Tesla is doing is just NNs -- not AI, as you agree -- and they are valid only on the data sets they're trained on -- as you agree. So they can't be used for general-purpose driving. The other car companies are doing the same thing, except Waymo, which is running on rails and so is even less general-purpose. Quod erat demonstrandum.