Welcome to Tesla Motors Club

FSD Beta Videos (and questions for FSD Beta drivers)

There are too many issues at that intersection to use it as a test bed for ULTs. The side street visibility is poor because of obscuring vegetation. The highway has a 55 mph speed limit and there isn't a nearby upstream controlling traffic signal. The median barely fits a vehicle. That last factor is irrelevant because FSD beta presently only stops in the median through its own poor judgement.
I agree it's a very hard test for the car, but I think it's a useful exercise and a reference for how the car handles such situations, and I'm glad Chuck keeps doing this. Sure, the car got an F for 10.10, but that in itself is a useful data point (especially for Tesla).

As others have noted (and posted in the comments to the video), the car seems to think that the median "island" is not present (it shows a slight flickering line there as if the median is solid), and so, yes, the car thinks it cannot do a left turn at all. Of course, the fact that the nav system then makes it go into what would be an infinite loop is a different issue.

There is then the distinct issue of why the car pauses so much and/or gets stuck in an intersection. The former is, I suspect, related to the cameras' long-distance acuity; the latter is just bad programming on the part of Tesla.

-- I'm starting to think that at the limits of camera vision the car can distinguish something that "might be" an approaching car, but it isn't sure. So what does it do? It waits to see if "whatever it is" gets bigger (i.e. is approaching). If it doesn't after some time, then either the "thing" is stationary in the distance (safe) or approaching so slowly it doesn't block the car making the turn (also safe). Hence the pauses.

-- The car stopping in intersections needs to be addressed, and imho is still the most dangerous thing the car does. Tesla seem to have programmed a base rule of "if you are not sure, slow down/stop and reevaluate", which in many cases is fine but not in an intersection. Essentially they need to add a state where the car commits to a maneuver and, high-priority stuff aside (e.g. pedestrians), pushes through until it reaches another "safe" point.

These are both of course total speculation, but I feel they are at least plausible :)
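To make the speculation concrete, the two ideas above (wait-and-watch for ambiguous distant objects, plus a "committed" state that disables stop-and-reevaluate mid-intersection) could be sketched as a toy state machine. This is purely illustrative: every name, state, and threshold here is invented, and nothing reflects Tesla's actual implementation.

```python
# Toy sketch (NOT Tesla's code) of two speculated behaviors:
# 1) wait to see whether a barely-resolved object grows in apparent
#    size before treating it as approaching traffic, and
# 2) once committed to a maneuver, push through the intersection
#    unless a high-priority event (e.g. a pedestrian) interrupts.

def is_approaching(sizes, growth_threshold=1.15):
    """Guess whether a distant object is approaching by checking
    whether its apparent (pixel) size grew over the observation
    window. Threshold is an arbitrary illustrative value."""
    return sizes[-1] >= sizes[0] * growth_threshold

def next_state(state, sizes, pedestrian_in_path):
    # States: WAITING (before the turn), COMMITTED (mid-maneuver).
    if state == "WAITING":
        if len(sizes) < 5:
            return "WAITING"  # keep observing -> the visible "pauses"
        # Object watched long enough: commit only if it isn't growing.
        return "WAITING" if is_approaching(sizes) else "COMMITTED"
    if state == "COMMITTED":
        # "Not sure, so stop" is disabled here; only top-priority
        # events interrupt a committed maneuver.
        return "EMERGENCY_STOP" if pedestrian_in_path else "COMMITTED"
    return state
```

The point of the sketch is the asymmetry: hesitation is cheap before entering the intersection, but once inside, the default should flip from "stop when unsure" to "continue unless a high-priority event occurs".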
 
I’ll be looking here for news on how FSD is working. I’ll check it out again if everyone starts saying it really works.

For me.. I don’t need it to drive around stopped cars, but the jerky steering needs to go away. Along with the phantom stopping (at pedestrian crossings with no pedestrians around). And most of the “just push the go pedal” situations need to be resolved.
tbh, you don't seem to know what the beta is for .. did you think it was so you could show it off to friends? No, it's for testing a system that is a work in progress and reporting back issues (both big and small) in a wider variety of environments than can be achieved through closed testing.
 
I agree it's a very hard test for the car, but I think it's a useful exercise and a reference for how the car handles such situations, and I'm glad Chuck keeps doing this...
I also think it's an interesting academic exercise, but more often than not, the exercise is just exposing the weakness of the front and B pillar cameras for safely executing the ULT maneuver at this intersection.
 
I also think it's an interesting academic exercise, but more often than not, the exercise is just exposing the weakness of the front and B pillar cameras for safely executing the ULT maneuver at this intersection.
I will also argue that, besides the B-pillar limitation, all cameras suffer from something we humans take for granted. Our "cameras" are on dynamic and movable jibs. We duck, tilt, peer, stretch, and peek between occlusions. Fixed cameras can't do this.

If you have ever driven a car with cameras for rear view mirrors, you will see the weakness, since you can't vary/expand your field of view by moving your head.
 
I will also argue that, besides the B-pillar limitation, all cameras suffer from something we humans take for granted. Our "cameras" are on dynamic and movable jibs. We duck, tilt, peer, stretch, and peek between occlusions. Fixed cameras can't do this.

If you have ever driven a car with cameras for rear view mirrors, you will see the weakness, since you can't vary/expand your field of view by moving your head.
This is true, but FSD has 8 eyes, which reduces the need to stretch and tilt. It is also arguably more than offset by FSD viewing all 8 cameras/viewpoints simultaneously, whereas human eyesight only looks in one direction at a time, aided by mirrors if applicable, and our peripheral vision is not great.
 
tbh, you don't seem to know what the beta is for .. did you think it was so you could show it off to friends? No, it's for testing a system that is a work in progress and reporting back issues (both big and small) in a wider variety of environments than can be achieved through closed testing.
Usually a beta is like that, but the FSD beta is throwing a bone to customers so they can show the world and investors that Tesla is best. It's a pure and brilliant marketing stunt, exploiting a huge fan base on social media.
 
This is true, but FSD has 8 eyes, which reduces the need to stretch and tilt. It is also arguably more than offset by FSD viewing all 8 cameras/viewpoints simultaneously, whereas human eyesight only looks in one direction at a time, aided by mirrors if applicable, and our peripheral vision is not great.
Human peripheral vision is excellent for sensing movement.
 
Human peripheral vision is excellent for sensing movement.
Not if you wear glasses, and most people can drive while wearing glasses with little problem (except when it rains and there is more glare).

Besides, human eyesight didn't evolve for driving. So it is not known exactly what kind of vision you need for driving. You definitely don't need the 538-megapixel resolution per eye that we have ... or the dynamic range that no camera has.
 
...

Besides, human eyesight didn't evolve for driving. So it is not known exactly what kind of vision you need for driving. You definitely don't need the 538-megapixel resolution per eye that we have ... or the dynamic range that no camera has.
Our vision and other senses did evolve to allow us to excel in our "ODD", navigating the natural environment at a decent running speed. Turns out we do quite well at faster speeds too.

The reason there are car speed limits and headlights etc is because without them our limitations may be exceeded. Street signs need to be a certain size so we can read them. We have designed our cars and infrastructure to accommodate our (typical) abilities and driving licenses require a minimum vision standard.

If we had evolved thermal vision or 360-degree vision then we'd be driving differently. This is what we have, so we make the best use of it.
 
Usually a beta is like that, but the FSD beta is throwing a bone to customers so they can show the world and investors that Tesla is best. It's a pure and brilliant marketing stunt, exploiting a huge fan base on social media.
That may or may not be an ancillary reason, but do you think it's the only one? Tesla need the feedback from beta testing, since (as Elon has admitted) most of the closed testing was done in CA, and so the car is good at driving in NorCal but not so good elsewhere.
 
I will also argue that, besides the B-pillar limitation, all cameras suffer from something we humans take for granted. Our "cameras" are on dynamic and movable jibs. We duck, tilt, peer, stretch, and peek between occlusions. Fixed cameras can't do this.
To an extent that's true, but we also do a lot of that because the actual FOV in which we can see detail is VERY restricted (basically it's whatever part of our vision falls on the fovea part of the retina). OTOH, I agree that moving our eyes within this field allows our brains to distinguish objects from very poor inputs. The car can do this to a lesser extent, either when it is moving or when the object in question is moving (or approaching). In fact, this is why our eyes are never truly still; even when we focus on one thing they always flutter slightly to improve our actual perception of it .. if they are ever stilled, our visual system blurs within seconds.
 
Human peripheral vision is excellent for sensing movement.
.. but not at identifying what is moving .. that's why peripheral movement always causes us to "focus" on that object by moving our eyes.

There is no doubt that brain+eyes can beat cameras+NNs in many cases, but the reverse is also true. It will be interesting to see if the camera+NN advantages can offset their disadvantages as AVs evolve.
 
tbh, you don't seem to know what the beta is for .. did you think it was so you could show it off to friends? No, it's for testing a system that is a work in progress and reporting back issues (both big and small) in a wider variety of environments than can be achieved through closed testing.

Yup. I know. I was commenting on when I'm interested in seeing FSD again. I don't need to beta test it anymore. Waiting for it to be a bit more ready.
 
The reason there are car speed limits and headlights etc is because without them our limitations may be exceeded. Street signs need to be a certain size so we can read them. We have designed our cars and infrastructure to accommodate our (typical) abilities and driving licenses require a minimum vision standard.
I think the driving requirements already exceed a lot of people's abilities. That is the reason for crashes.

In the natural environment we'd never have to face a 1-ton killing machine hurtling towards us from the side at 60 mph. We are pretty bad at figuring out the depth and speed of anything that comes towards us from the side (see all those stories about approaching trains, for example).

Anyway - obviously there is more to driving than just sensors, and we definitely don't need 538-megapixel cameras to drive.

We'll know exactly what is needed when someone can achieve human-level driving with cameras only. Until then it's all speculation.
 
It is truly amazing that there aren’t more accidents, and that’s probably a testament to all the work that has been done in standardization, regulation, legality, and technological advancement.

Especially considering the miles people travel nowadays, it’s something to behold
 
It is truly amazing that there aren’t more accidents, and that’s probably a testament to all the work that has been done in standardization, regulation, legality, and technological advancement.

Especially considering the miles people travel nowadays, it’s something to behold
Also chance. That plays a very big part - if there were an accident every time someone made a mistake, there would be a lot more.
 
That may or may not be an ancillary reason, but do you think it's the only one? Tesla need the feedback from beta testing, since (as Elon has admitted) most of the closed testing was done in CA, and so the car is good at driving in NorCal but not so good elsewhere.
I think they are overwhelmed by data on failures at the (low) quality level FSD is at now.
Having a team of 50 testers covering different cities would be more than enough. Each day of testing would net them a few hundred issues per driver. When they stop reporting failures for weeks, testing can be spread out further. But they are not there yet.
Maybe the main reason for the public beta is cost reduction, but I think that is the ancillary reason. Publicity, promises made, and keeping the innovation-hungry fans satisfied is the main reason.
 