FSD Edge cases

Reminds me of the documentary AlphaGo, where the software got steadily better at learning and decision-making, to the point where it was able to master a game with roughly 10^170 possible board configurations.

Wasn’t that remarkable! Shortly after, they made AlphaZero, which taught itself to play better than any prior computer engine. A remarkable algorithm… That said, I think FSD is harder, because we don’t know all the rules of the game the way you do with Go or chess. Those are environments constrained by specific rules, with a limited and predictable set of outcomes, so you can minimax your way to the best decision every time… I don’t think you can minimax FSD… or can you?
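For what it's worth, "minimax" only makes sense when the game is fully specified. Here's a toy sketch (a made-up take-1-or-2-sticks game, nothing to do with driving) of why it needs the complete rule book up front:

```python
# Toy minimax over a trivial "take 1 or 2 sticks" game (hypothetical example).
# It works only because every legal move and every terminal outcome is
# enumerable in advance - exactly the property driving doesn't have.
def minimax(sticks, maximizing):
    if sticks == 0:
        # the player who cannot move loses; score from the maximizer's viewpoint
        return -1 if maximizing else 1
    moves = [m for m in (1, 2) if m <= sticks]
    values = [minimax(sticks - m, not maximizing) for m in moves]
    return max(values) if maximizing else min(values)

print(minimax(4, True))   # 1: the player to move can force a win (take 1, leave 3)
```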
 
  • Like
Reactions: Ben W and daktari
So there are three sets of promises:
1. Elon's various verbal promises
2. The purchase page circa 2017
3. The purchase page today (doesn't seem to have much specificity, just "autosteer on city streets")

For 1, Elon clearly promises L5 autonomous driving. I do not believe that this is happening until we see a breakthrough in generalized artificial intelligence. Are these statements legally binding? I'm not sure. But his claims are grossly exaggerated.

Elon also verbally promised robotaxis. I don't see this happening before they say FSD is feature complete. I believe they will also need a network of remote operators for any system that allows the driver to leave the driver's seat.

Personally, I see FSD as being marked feature complete when door-to-door L2 is deployed to the fleet. Tesla never used levels in their marketing that I'm aware of; I'm only aware of a single verbal comment by Elon that referred to L5. Also, all of the marketing material I posted above does refer to somebody in the driver's seat, which eliminates L4 and L5 anyway.

Now L3 is interesting. I think L3 is possible if they demonstrate that their system can always give "enough" of a warning before requiring human input. What counts as "enough" needs to be a topic of research, and they will have to prove it over the course of many millions of miles. Then Tesla themselves can decide if they want to accept the responsibility of letting drivers not pay attention. Since the L2->L3 distinction comes down to liability and not technology, I think FSD will be marked "complete" before this stage.
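As a very rough sense of scale (my numbers, not Tesla's): the statistical "rule of three" says that after N event-free trials, the 95% upper bound on the event rate is about 3/N, so proving a claim like "fewer than one no-warning takeover per X miles" takes on the order of 3X clean miles:

```python
# Back-of-the-envelope sketch with made-up targets (not Tesla's figures).
# Rule of three: after N event-free miles, the ~95% upper confidence bound
# on the event rate is roughly 3/N, so a target of "< 1 event per X miles"
# needs about 3*X event-free miles of evidence.
def miles_needed(target_miles_per_event):
    return 3 * target_miles_per_event

for target in (1_000_000, 10_000_000, 100_000_000):
    print(f"< 1 no-warning takeover per {target:,} miles -> ~{miles_needed(target):,} clean miles")
```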

I don't believe assistance-less L4 will be possible for a long while, because I don't believe that SDCs can get themselves safely parked in all the situations where human judgement is required. I think that this is the big challenge. As long as a car can get stuck in a situation where it isn't possible to change drivers, L4 will simply not be possible without remote assistance. Maybe that assistance could come from a user who's taking a nap in the back of the car.

Another question I won't go into (it's been beaten to death elsewhere) is that of hardware. Is HW3 sufficient for the L2 door to door that (I believe) will fulfill their promise? I'm not sure.
My thoughts exactly. But you said it better!
 
  • Like
Reactions: idriveacar
tldr; if we ever can create a brain that behaves like a human brain, it will suffer the same issues that make human brains unsuited to the task of driving. The real solution is a mix. But the proof that such a system can exist (which may or may not itself exist!) does not follow from the fact that brains are physical.

Humans are already very good at driving, but if we can create a computer that can perform like a human brain, we should be able to make it a responsible human brain that doesn't take the tired, lazy, dangerous and selfish actions that are the source of the large majority of collisions.
 
  • Like
Reactions: gaspi101
The way I read your comment, you're saying that if we can build a computer that behaves like a brain, it could be harnessed to solve the same problems humans can, except it will never make mistakes and will be vastly faster at decision making.

This is a very common, likely false, line of reasoning. It's one of the things that bothers me about how androids and AI are represented in movies.

Computers and humans think in fundamentally different ways. Computers are nothing more than very, very fast calculators. Brains are more like... super sophisticated, unfathomably complex pattern-matching machines that self-mutate and rearrange based on rules that aren't entirely understood.

If you build a brain out of a computer, you don't end up with the best of both worlds. As the artificial brain gains those fuzzy contextualization and pattern-matching capabilities, its ability to execute rote logic and calculation is effectively sacrificed.

You could then try to teach that artificial brain how to drive, but now you are working with a (probably much, much worse!) brain-like intelligence that is prone to making mistakes and getting distracted, just like a human. If you let that brain self-mutate and learn during operation, now it's prone to going insane!

Maybe humans will discover the secrets of the brain one day, and learn to make one that has the attributes we want without the ones we don't. But that's not happening in my lifetime, so I feel free to speculate. My speculation is that the things that make humans great at handling highly contextual situations on the road are fundamentally the exact reason why humans suck at the rote, boring tasks. Pattern matching and precise calculation are mutually exclusive, and perfect mechanical driving at all times is mutually exclusive with highly intelligent, context-aware driving.

tldr; if we ever can create a brain that behaves like a human brain, it will suffer the same issues that make human brains unsuited to the task of driving. The real solution is a mix. But the proof that such a system can exist (which may or may not itself exist!) does not follow from the fact that brains are physical.
Stopped reading at the first paragraph. I am familiar with the theoretical differences between brains and computers; see Turing’s works. I said it’s going to be possible. I said nothing about a silicon version of our brains. Your response will be useful to someone else, just not me.
 
  • Disagree
Reactions: gaspi101
I don't understand the fascination with "levels". I use NOA on freeways all the time, I don't care what level it is, it's very useful to me. Like riding a horse. It would be counterproductive to blindly let the horse make all decisions. That doesn't make it any less useful.

I've changed by interacting with the system for 3 years. It's an adaptation of something like the dividing line between the autonomic and conscious nervous systems. I don't think about it much; I punch in and out of Autosteer as needed, in contexts where human conscious decisions work better. Nicki and I together make a better driver than either of us alone. So why should I send Nicki out by him/herself as a "level 5" but less optimal device? We don't know how this whole vehicle autonomy story will develop. Maybe the environment will change to help the cars as ALL cars evolve towards greater autonomy. For instance, additional road markers.

There's a trait that good programmers/designers have, and it's the optimism that allows them to get up in the morning and tackle "impossible" problems and schedules that many people see as madly unrealistic. They might be late, but they usually get there. You can't solve anything without that approach. The pedestrians who shout "fraud" can only imagine straight lines or criminal intent. They couldn't ever start, let alone solve, a tenth of what the Tesla team has accomplished. Or predict the outcomes. Wake up, it's never been done. It's a wonderful and exciting adventure.
 
Basically the only meaningful difference between levels is liability; it doesn't tell me how sophisticated the software is. You could have a perfect driving agent that will drive down any road anywhere in the world, the AlphaGo of driving, and it's Level 2 simply because the company doesn't want to become liable for accidents.
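For reference, here's a rough paraphrase of what the SAE levels actually say (my wording, not the official J3016 text); the main thing that changes as you go up is who has to supervise and who handles the fallback:

```python
# Rough paraphrase of the SAE J3016 driving-automation levels (not the official
# wording).  Going up the levels mostly shifts supervision and fallback duty
# from the human to the system - i.e. liability - not software sophistication.
SAE_LEVELS = {
    0: "No automation: the driver does everything",
    1: "Driver assistance: steering OR speed is assisted, driver does the rest",
    2: "Partial automation: car steers and controls speed, driver must supervise at all times",
    3: "Conditional automation: car drives within its domain, driver must take over when asked",
    4: "High automation: no human fallback needed, but only within a limited operational domain",
    5: "Full automation: no human fallback needed anywhere a person could reasonably drive",
}

for level, description in SAE_LEVELS.items():
    print(f"L{level}: {description}")
```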
 
Reminds me of this excellent Tom Scott video where he discusses how computers may be unable to solve human language because of their inability to resolve contextual clues. I feel it must be the same with autonomous driving.
On a podcast, the legendary linguist Noam Chomsky had a similar opinion about the natural-language capabilities of computers. He said they don't "understand" language.

More broadly, the basic idea of neural networks is that they work like biological neurons. So, anything the human brain can solve, with enough training, they should be able to solve. This is the "first principle" that Musk keeps talking about.
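Loosely, an artificial "neuron" is just a weighted sum pushed through a nonlinearity. Here's a minimal sketch (the weights are arbitrary, purely for illustration), which is also roughly where the analogy to biology starts and stops:

```python
import math

# Minimal sketch of a single artificial "neuron": a weighted sum of inputs
# pushed through a nonlinearity.  Weights here are arbitrary illustration
# values; in a real network they are learned from training data.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

print(neuron([0.5, 0.2, 0.9], [0.4, -0.6, 1.1], bias=-0.3))
```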

But biological neurons and NNs have differences, so we don't really know what the limitations of NNs are. That's what I gather from the best minds in academia.

Here is a good podcast on differences between neural networks and the biological brain.

 
  • Informative
Reactions: gaspi101
Saying computers can't "understand language" because they don't follow context is no longer correct, because of the immense growth in processing power and memory.

It's largely a matter of memory: a) memory within a segment, reaching back to build context (e.g. a 3D vector representation), and b) predictive deep-learning NNs, i.e. the memory of countless previous cases being applied forward.
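In miniature, the idea looks something like this (a trivial bigram predictor, nothing like what Tesla or any modern language model actually uses, just to show context-as-memory):

```python
from collections import Counter, defaultdict

# Toy illustration of "memory within a segment": predict the next word from
# the previous one, using counts learned from text seen so far.  A trivial
# bigram model, purely illustrative.
corpus = "the car stopped the car stopped the car turned left".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    options = bigrams.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict_next("car"))   # 'stopped' - the continuation seen most often so far
```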

Algorithmic autopilot does OK in simple cases. I believe that's what we saw with base AP driving on city streets. Where the rules applied, it did better than the current "FSD Beta", which seems to be starting over, precisely in developing context and applying memory. It seems short on useful canned rules.

After 3 years of using NOA on Freeways I can anticipate where the system will have trouble, where its rules break down. It's become second nature, like I've internalized Nicki's rule set. Driving in NOA works great for me, with say 95% utility.

At this Beta stage I expected NOA on City Streets to be similar, though with at best 25% utility. But it's quite different. Aside from the obvious wide boulevards with no turns, I'm finding it harder to predict where the trustworthy 25% will be, and to anticipate problems. In a way it's all edge cases :eek:
 
Last edited:
  • Like
  • Disagree
Reactions: helvio and gaspi101
Saying computers can't "understand language" because they don't follow context is no longer correct, because of the immense growth in processing power and memory.
I’m not sure I understand what you mean by no longer being correct… contextual information can only be analyzed with the use of the sum of human experience. If I see a stopped car, the driver is elderly, and their wheels look turned to the side, I’m going to infer that they’re having a hard time judging when it’s safe to make a left. If I’m behind them, perhaps I would decide, based on those clues, that I shouldn’t zoom around them to get by, because that might cause an accident, whereas FSD would just analyze it as a stopped vehicle that we need to go around… there are millions of examples… what do you mean contextual clues are no longer problems?
 
Saying computers can't "understand language" because they don't follow context is no longer correct, because of the immense growth in processing power and memory.

It's largely a matter of memory: a) memory within a segment, reaching back to build context (e.g. a 3D vector representation), and b) predictive deep-learning NNs, i.e. the memory of countless previous cases being applied forward.
Is this your take or the industry view? Any papers you can refer me to?

Otherwise I'll have to take Chomsky's word for it ...
 
I suppose the question I’m trying to get to is whether L4/L5 autonomy is ultimately a pipe dream, far, far beyond our current technological capabilities. I’m thinking 2058: Elon announces the HW15.0 retrofit, which is a repurposed human brain in a jar with cables sticking out.
L4-5 is only possible if:
1. You remove all human drivers, so only self-driving cars are on the road.
2. You upgrade all the roads to have proper markings for FSD.
Then it is possible.
 
So ... ?

On the AI podcast by Lex Fridman, he has interviewed pioneers and legends in the field who will not say whether L4/L5 can be achieved.

BTW, Waymo is already L4.

ps: If what you are saying were the actual industry/SME consensus, billions would not be poured into the AV industry.
This is just my personal informed opinion. I don't think L4-5 "everywhere" is possible without road upgrades; have you seen the roads in Istanbul or Mumbai?
Waymo is only L4 in a very few selected regions, nowhere near working everywhere.
Tesla NOA can be considered L4 on highways with proper lane markings; it is pretty good.
"Everywhere" is a completely different story.
 
Tesla NOA can be considered L4 on highways with proper lane markings; it is pretty good.

This statement is false and dangerous. No Tesla vehicles ship with any L4 features today, and Tesla has made no government filings indicating that it is even actively developing any features beyond L2.

L4 has a strict definition, and NOA does not meet it. Your post is dangerous because this sort of misinformation is liable to get somebody killed. In fact, the belief that NOA is more capable than it is has already caused unnecessary deaths.

Please be more mindful of how you communicate about Tesla vehicle capabilities.
 
I appreciate your concern, but nobody is going to risk their life based on an online opinion post by someone they don't know.
 
People's understanding of things is shaped bit by bit by all of the information they come across. Lies spread and gain traction the more frequently they are repeated. In the case of Teslas, lies from an _owner_ carry more weight.

Why do you want to be part of the problem?