
FSD Edge cases

This is just my personal informed opinion. I don't think L4-L5 "everywhere" is possible without road upgrades. Have you seen the roads in Istanbul or Mumbai?
I've not seen the roads in Istanbul, and Mumbai I've only visited briefly. But I did spend some 30 years in India ;)

Interestingly enough, so did a lot of the Tesla FSD Beta team, including one engineer who presented at Tesla AI Day and even showed a clip of the chaotic traffic in Madras/Chennai.

Complexity per se is not the limiting factor. The limiting factor, I think, is the structure of current CNNs. Maybe they need new types of NN, closer to biological NNs, that solve the biggest current issue: you need a LOT of training. Biological NNs don't need that much; babies watch something new a couple of times and learn how to do it. They don't need to watch it 10,000 times.
 
I think the problem isn't complexity. A trained AI can fundamentally only interpolate; its extrapolations can be very inaccurate. When humans encounter a situation that is not similar to anything they have seen before, they can use their learning in other areas to extrapolate the right action. They can do the wrong thing too, unfortunately.
Since a self-driving AI is trained only on driving data, if it finds itself in a situation that is not similar in any way to what it was trained on, it has to extrapolate, and it can make a huge mistake.
That is why these systems must operate in areas where such events are extremely rare or never happen.
That said, I believe a self-driving AI can be better than humans in well-designed cities.
 
A trained AI can fundamentally only interpolate; its extrapolations can be very inaccurate.
Not sure what you mean. BTW, do you work on CNNs, or just ML in general?
Since a self-driving AI is trained only on driving data, if it finds itself in a situation that is not similar in any way to what it was trained on, it has to extrapolate, and it can make a huge mistake.
FSD just has to learn to drive on the road. It doesn't need to develop "general intelligence". Enough training should give it the ability to drive with fewer mistakes than humans make.
That said, I believe a self-driving AI can be better than humans in well-designed cities.
The well-designed city is a utopia we should all just forget about. There are crumbling old bridges governments don't want to repair, and you think there will be "well-designed cities"?
 
Interpolation and extrapolation:
Each training example is a point in a multi-dimensional space. The space is infinite, but real driving situations cover a certain region of it, and so does the training data. When the car encounters a situation that falls somewhere inside that region, the AI can do a very good job by interpolating from nearby training points.
If it encounters a situation well outside the training region, it has to extrapolate, and that can be very inaccurate. And there are infinitely many ways to encounter data outside the training range; the space is infinite.
Therefore, if all the roads are well marked, and where a car can and cannot go is well defined, the AI will do a superhuman job. If not, it can fail miserably in some situations.
Because of this extrapolation problem, a self-driving car that can drive everywhere is a pipe dream. Self-driving where lanes are properly marked and signs are properly posted can be a reality. We just need to require cities to make their roads self-driving compatible.
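
To make the interpolation-versus-extrapolation point concrete, here is a minimal sketch. It has nothing to do with Tesla's actual networks; a degree-9 polynomial stands in for a flexible learned model, and all the numbers are arbitrary:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# "Training region": situations sampled only from x in [0, 6]
x_train = rng.uniform(0, 6, 200)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)

# Fit a flexible model (a degree-9 polynomial standing in for a neural net)
model = Polynomial.fit(x_train, y_train, deg=9)

for x in [3.0, 5.5, 9.0, 12.0]:  # first two inside the region, last two outside
    region = "interpolation" if 0 <= x <= 6 else "extrapolation"
    print(f"x={x:5.1f} ({region})  |error| = {abs(model(x) - np.sin(x)):.3f}")
```

Inside the training region the error stays near the noise floor; a short distance outside it, the error grows by orders of magnitude. That is exactly the failure mode described above.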
 
I appreciate your concern, but nobody will risk their lives based on an online opinion post by someone they don't know.
Not to put too fine a point on it, but recent shortages of veterinary ivermectin would seem to refute this.

Regarding L4: I've driven (or been driven) many thousands of miles on highway NOA, and it is nowhere close to L4. I'd guess that it currently requires an elective disengagement about every 20 miles (where its behavior is not actively dangerous, but dumb enough that I immediately take control), and a necessary disengagement about every 200 miles (where there would be a nonzero risk of incident without intervention). L4 will require several orders of magnitude higher reliability than that, like one such incident per million miles.
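
A quick back-of-the-envelope check on that gap, using my anecdotal numbers above (none of this is measured fleet data):

```python
import math

necessary_disengagement_miles = 200  # roughly one risky event per 200 miles today
target_incident_miles = 1_000_000    # a plausible L4 bar: one incident per million miles

factor = target_incident_miles / necessary_disengagement_miles
print(f"required improvement: {factor:,.0f}x "
      f"(about {math.log10(factor):.1f} orders of magnitude)")
# required improvement: 5,000x (about 3.7 orders of magnitude)
```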
 
We have large areas of the country with no cell coverage or high-speed internet. The government isn't going to build infrastructure for self-driving cars anytime soon.
Agreed, which is why self-driving cars didn't become a thing until they no longer needed special infrastructure to operate.

The fact that they run on normal roads is exactly what will drive their adoption. They just wouldn't work any other way.
 
I finally received FSD Beta 10.5. It is very impressive. That said, I have to intervene on average every 10 minutes of city driving. Every time it encounters a situation it was not trained for, it does the wrong thing. For example: I am going straight, stopped at a traffic light; the light turns green, and someone turns left from the opposite direction without waiting for me. A human would stop, maybe honk, then proceed once the intersection is clear. FSD instead tried to make a hard, accelerated right turn, because it assumed the intersection was blocked.
Unfortunately, there will be infinitely many such cases, so Level 4 is not coming anytime soon. With some improvements this could become Level 3.
 
Tesla still states that:

Full Self-Driving Capability

All new Tesla cars have the hardware needed in the future for full self-driving in almost all circumstances. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat.

Here are some of the common situations where my Autopilot does not work. Most seem worse with radar disabled for FSD:
  • Morning or afternoon sun "blocking" the front or a B-pillar camera
  • Fog, rain, or snow, especially at night
  • Glare from the road after a rain causing the front camera to be "blocked"
  • Snow packed streets with lines and curbs covered
  • Sun reflecting off of snow causing any camera to be "blocked"
  • Leaves covering the street until enough cars grind them up
So far AP/FSD vision systems have not learned to see through solids, reflective liquids, and glare.

Even a cop, fireman, or worker waving cars around into oncoming traffic is not an Edge Case. It's normal.
Cops directing traffic to ignore stop lights is not an Edge Case either. It's called rush hour.

So what is an extreme or Edge Case that AP/FSD has trouble with? Every case I have seen is pretty normal.
 
You might be right. But I will tell you this: I have been humbled several times in my life by believing something was outside the realm of feasibility or even possibility, only to be proven dead wrong. When I saw the first iPhone, I thought it was absolute magic. When SpaceX started landing booster rockets back on Earth, I was utterly stunned. I'd like to think I'm a pretty smart person… but I have no illusions. I'm a complete knuckle-dragger in comparison to people like Elon Musk. I just wish there were someone around much smarter than me who could dumb it down to my level so I could understand how the hell Elon plans to get there…
Landing rockets back on earth is easier than FSD.
 
Saying it can't be done is just ridiculous. Flying was once considered impossible, as was space travel. So... "can't get there" is silly. Of course it will happen. Can it happen with the hardware currently in the cars? Can it happen in the next decade, or two? "When" is the only real question.
Sorry you don’t believe it. FSD is a fraud.
Stop and think about what you're trying to say.
 
After testing FSD Beta for a little while, I've encountered a few edge cases where I am not sure a solution is even possible. For example, if you are driving on a one-lane road and you encounter a stopped vehicle, FSD Beta invariably chooses to leave the lane and move into the opposite lane to go around the stopped vehicle. However, it's often the case that the stopped vehicle is only the last in a long string of stopped vehicles in gridlock traffic, or in a line of traffic on a one-lane road that is turning. On my commute to and from work, its decisions in this regard are approximately 90% incorrect, and I have narrowly avoided a few accidents after the car chose to go around at full speed.

The decision to go around a stopped car on a one-lane road is purely contextual. If there is a traffic light a quarter mile ahead and you can see a constant line of cars bumper to bumper, clearly that is not a case where you go around. But other situations are much more subtle. There could be a stopped car that wants to make a U-turn (no turn signal, but you can see its wheels are turned all the way to the left, ready to go). Or maybe the stopped car wants to turn left and is waiting for oncoming traffic to clear… perhaps that traffic is a good deal away, but the driver is elderly, so you don't want to blow past them and scare them, or worse, cause an accident. Or maybe the stopped car is waiting for a pedestrian, a cyclist, or a school bus to clear the way ahead. Human drivers notice all this through contextual clues, maybe by looking at the driver him/herself. The contextual clues are nearly infinite, and must be nearly impossible to quantify (I'll try to make that concrete with a sketch at the end of this post).

This reminds me of an excellent Tom Scott video in which he discusses how computers may never fully solve human language because they cannot resolve contextual clues. I feel it must be the same with autonomous driving.

How can Tesla solve this with as much certainty as a human, or more? Is that even possible? The human ability to recognize patterns from an almost infinite number of contextual clues really seems like an unsolvable problem to me… and that's just for the simple situation of going around a stopped car! What do you folks think? Have you found similar cases? What other edge cases do you think are unsolvable, or am I wrong in my thinking here?
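
To make concrete why those clues resist quantification, here is a purely hypothetical sketch of what enumerating them as planner inputs might look like. Every field, threshold, and function name here is invented for illustration; none of it reflects how Tesla's planner actually works:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Scene:
    queue_length: int                    # stopped cars visible beyond the lead car
    signal_distance_m: Optional[float]   # distance to a visible traffic light, if any
    lead_wheels_turned: bool             # lead car's wheels cranked left (U-turn cue)
    lead_turn_signal: bool
    crossing_traffic_ahead: bool         # pedestrian, cyclist, or school bus ahead

def should_go_around(s: Scene) -> bool:
    # A long queue or a nearby light means this is traffic: wait.
    if s.queue_length >= 3:
        return False
    if s.signal_distance_m is not None and s.signal_distance_m < 400:
        return False
    # Cues that the lead car is about to move: wait for it.
    if s.lead_wheels_turned or s.lead_turn_signal:
        return False
    if s.crossing_traffic_ahead:
        return False
    return True  # otherwise treat it as a stalled vehicle and pass

print(should_go_around(Scene(5, 380.0, False, False, False)))  # False: gridlock
print(should_go_around(Scene(0, None, False, False, False)))   # True: likely stalled
```

The point of the sketch is what it leaves out: every cue has to be explicitly perceived and hard-coded, while a human reads an open-ended set of them (the driver's face, wheel angle, hesitation) without anyone having enumerated them in advance.
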
One issue I noticed today, with a line of stopped cars ahead, is the position of the camera. As the driver, I sit on the left and was able to see down the line of cars to understand the situation. The car's camera is right in the center, so all it could see was the one car in front; it had no way to sense the other cars ahead. The camera could not get all the information I had as a human because of its different placement.
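
Rough occlusion geometry makes the point: an eye offset to the left of the lane center can see objects closer to the lane line past the car ahead than a centered camera can. All the numbers below are round illustrative guesses (a ~1.8 m wide lead car, the driver's eye about 0.4 m left of center), not anything measured:

```python
def min_visible_offset(viewer_offset_m: float, gap_m: float,
                       target_dist_m: float, half_width_m: float = 0.9) -> float:
    """Smallest lateral offset (left of lane center) a point at target_dist_m
    must have to be seen past the left edge of a car half_width_m wide
    stopped gap_m ahead of the viewer."""
    slope = (half_width_m - viewer_offset_m) / gap_m  # sight line past the left edge
    return half_width_m + (target_dist_m - gap_m) * slope

gap, target = 10.0, 30.0  # lead car 10 m ahead, object of interest 30 m out
for label, offset in [("centered camera", 0.0), ("left-seated driver", 0.4)]:
    print(f"{label:18s}: sees anything > "
          f"{min_visible_offset(offset, gap, target):.2f} m left of center")
# centered camera   : sees anything > 2.70 m left of center
# left-seated driver: sees anything > 1.90 m left of center
```

At 30 m the left-seated eye sees almost a meter further into the occluded zone than a centered camera, which can be the difference between seeing only the lead car's bumper and seeing the flanks of the queue beyond it.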
 
Sorry you don’t believe it. FSD is a fraud.
Stop and think about what you're trying to say.
At the risk of this being a troll comment rather than a misunderstanding of what I was saying, I will respond...

My point was: to say a car that can fully self-drive will never happen in all the future of humanity is just silly. Of course it will; the only question is how long it will take. It may not even be in my lifetime, but at some point it will happen. It may not be a Tesla, or any existing company, or even existing tech.