
FSD Beta Videos (and questions for FSD Beta drivers)

I feel like he is downplaying the areas where it needs improvement. It is not just "one small area" that needs work. For one, making left turns across fast-approaching traffic is not a "small area"; it is a major driving task that humans have to perform all the time when driving in cities. And FSD Beta has more problems than just unreliable left turns across fast-approaching traffic.

Exhibit A:


0:13 First Mistake
2:34 Disengagement
3:12 Car Hits Bush
3:38 Wrong Lane!
4:50 Left Turn Fail
5:25 Near Collision
9:12 Disengagement
9:25 Disengagement

Exhibit B:

Car almost hits large pillar at 17:06.

Our depth perception is tricked into thinking it's closer than it really is, but steering toward a pillar is uncomfortable regardless.
 
The issue with the pillar incident isn't just that it might have driven into it (yes, that's a problem). The real question is: at what point does the car see an object ahead of it and decide it should intervene rather than collide with it? Why is FSD overriding or not seeing a collision hazard, like a stopped car or a person? Instead, it just decides to drive into the object rather than avoid it.
 

Thanks for sharing. FSD Beta definitely did a lot of things really well. There were some cases where FSD Beta did not do so well. City FSD can be very challenging.

The garbage truck scenario is actually a pretty typical problem that many AVs struggle with. It seems obvious to humans to go around, but it is not obvious to AVs. Cruise also showed examples where they needed remote assistance to tell the AV what to do in a similar situation: the Cruise AV would wait behind a double-parked delivery truck expecting it to move, instead of going around it. This is a good example of how city FSD can be challenging for AVs and why prediction and planning are harder problems than perception. City driving is full of situations where the challenge is not so much detecting objects but figuring out what lots of other road users are doing and then deciding what the car should do (see the toy sketch below).
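To make that concrete, here is a toy sketch of the kind of decision a behavior planner faces behind a double-parked truck. This is purely my own illustration, not Tesla's or Cruise's actual code, and all the names and thresholds are made up: wait, or commit to going around, based on how long the blocker is predicted to stay and whether there is a safe gap.

```python
# Toy behavior-planner decision, purely illustrative (not any AV stack's code).
# The planner must guess whether the vehicle ahead is parked or just queued,
# then weigh the cost of waiting against the risk of going around.
from dataclasses import dataclass

@dataclass
class Blocker:
    hazards_on: bool          # hazard lights observed
    stopped_seconds: float    # how long it has been stationary
    gap_to_curb_m: float      # pulled close to the curb = likely parked

def predicted_dwell_s(b: Blocker) -> float:
    """Crude prediction: parked-looking vehicles are assumed to stay a while."""
    score = 0.0
    if b.hazards_on:
        score += 60.0
    if b.gap_to_curb_m < 0.5:
        score += 60.0
    return score + min(b.stopped_seconds, 120.0)

def decide(b: Blocker, oncoming_gap_s: float) -> str:
    """Go around only if the blocker looks parked AND oncoming traffic allows."""
    if predicted_dwell_s(b) > 45.0 and oncoming_gap_s > 8.0:
        return "go_around"
    return "wait"

truck = Blocker(hazards_on=True, stopped_seconds=30.0, gap_to_curb_m=0.3)
print(decide(truck, oncoming_gap_s=12.0))  # go_around
print(decide(truck, oncoming_gap_s=3.0))   # wait (no safe gap)
```

The hard part in real life is exactly the part this toy fakes: predicting how long the blocker will stay, which is a prediction problem, not a perception one.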
 
I wish Tesla would let us provide feedback/decisions to the car. For example, a pop-up would come up on the screen and show multiple paths, and we would select the correct one. That would train the NN, so next time it would have more confidence in such situations.
 
No learning ever takes place on the car itself. The NNs on the car are static.

Data is collected from the fleet, labeled by humans, then master NNs are trained on giant supercomputers back at HQ, and the updated NNs are pushed out in a firmware update.

Apart from the technical/resource reasons training doesn't happen per-car, it'd be a nightmare to troubleshoot: "Did the car do X because there's a bug, or because this specific car 'learned' from a bad driver?"
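As a rough mental model, the loop looks something like this. This is a toy sketch of the publicly described "data engine" cycle; every name here is made up, and none of it is Tesla's real code or API:

```python
# Toy model of the fleet-learning loop described above, illustrative only.

class Car:
    """In-car side: static NNs, trigger-based clip capture, whole-image updates."""
    def __init__(self, clips):
        self._clips = clips
        self.weights = "weights_v1"

    def collect_triggered_clips(self):
        # Cars upload short clips when on-board triggers fire;
        # no training ever happens in the car itself.
        return list(self._clips)

    def install_firmware(self, weights):
        # The only way driving behavior changes: a pushed firmware update.
        self.weights = weights

def label(clip):
    return (clip, f"label_for_{clip}")              # stand-in for human labeling at HQ

def train(labeled):
    return f"weights_v2_from_{len(labeled)}_clips"  # stand-in for offline training

fleet = [Car(["clip_a"]), Car(["clip_b", "clip_c"])]
clips = [c for car in fleet for c in car.collect_triggered_clips()]
new_weights = train([label(c) for c in clips])
for car in fleet:
    car.install_firmware(new_weights)               # static NNs swapped wholesale
print(fleet[0].weights)                             # weights_v2_from_3_clips
```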
 
I see your point and I agree. I take back my genius idea :)
 
The idea is not wrong, though; just the implementation may need tweaking. Instead of the NN in the car changing, the driver input could be uploaded to Tesla so that the data can be incorporated into the next release. I've always wondered how Tesla gets data from all the FSD Beta testing. Karpathy touched on it a bit in a recent video: the system uses various triggers and uploads 10-second clips. But it would be nice to have a way for the driver to send/mark specific incidents that may not necessarily be triggered (for example, all the bookmarks in the previous video).
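The trigger mechanism works roughly like the sketch below: a rolling buffer of recent frames, snapshotted when a trigger fires. This is a minimal illustration with made-up names, based only on the public description of 10-second clips, not on Tesla's actual implementation:

```python
# Minimal sketch of trigger-based clip capture; hypothetical names throughout.
from collections import deque

FPS = 36                  # assumed camera frame rate, purely illustrative
CLIP_SECONDS = 10

class ClipRecorder:
    """Keeps a rolling window of frames; a trigger snapshots the last ~10 s."""
    def __init__(self):
        self.buffer = deque(maxlen=FPS * CLIP_SECONDS)
        self.pending_uploads = []

    def on_frame(self, frame):
        self.buffer.append(frame)

    def on_trigger(self, reason):
        # reason might be "disengagement", "hard_brake", or a manual report
        self.pending_uploads.append({"reason": reason,
                                     "frames": list(self.buffer)})

rec = ClipRecorder()
for i in range(1000):
    rec.on_frame(f"frame_{i}")
rec.on_trigger("manual_report_button")
print(len(rec.pending_uploads[0]["frames"]))  # 360 frames = the last 10 seconds
```

A driver-facing "mark this incident" button would just be one more trigger source feeding the same snapshot path.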
 
it would be nice to have a way for the driver to send/mark specific incidents that may not necessarily be triggered (for example, all the bookmarks in the previous video).


The FSD beta folks have that: it's a special button on the screen they can press.

They can't offer that to the general fleet because they don't have anywhere NEAR the manpower to process the millions of such reports they'd get, versus the ones they get from their ~2000-person test group.
 
Thanks, I didn't know that. I was talking about FSD Beta specifically. I certainly realize why Tesla wouldn't want that in the other cars.
 
I forgot where I saw this, but this idea of learning within the car is supposed to become reality after FSD 9: something about learning your parking specifics, which wouldn't make sense to deploy to the whole fleet. But don't quote me on that haha
 
AFAIK this isn't really possible today, for multiple reasons.

For one, the firmware that controls driving behavior is a single read-only, CRC-checked blob. The only way to change it is an entire firmware update... (or technically a map update, since that can change driving in a sense).

For another, the only thing NNs are used for is perception. All the actual BEHAVIOR code is static/hard-coded. That part might eventually evolve, but you'd still have the issue that you can't individually write to that code... (nor does the in-car computer really have the power to train large NNs).

Certainly there can be settings for general parking preferences, as there are today for things like whether you want Summon to work through especially narrow areas, but that's about it.
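To illustrate why a per-car tweak can't survive that design, here is a generic integrity-check sketch. The CRC usage below is a simplification of whatever Tesla actually does; it just shows the principle that any local edit to a verified image fails the check:

```python
# Generic integrity-check sketch, not Tesla's actual firmware format.
import zlib

firmware = b"...driving behavior code and frozen NN weights..."
shipped_crc = zlib.crc32(firmware)      # recorded when the image is built

def boot_check(blob: bytes, expected_crc: int) -> bool:
    """Refuse to run any image whose checksum doesn't match the shipped one."""
    return zlib.crc32(blob) == expected_crc

print(boot_check(firmware, shipped_crc))            # True: untouched image runs
tampered = firmware.replace(b"frozen", b"edited")   # any in-place "learning"
print(boot_check(tampered, shipped_crc))            # False: image is rejected
```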
 
Thank you for the nice video. I'm wondering about the camera that shows you. It looks like your steering wheel is on the right side 🤔
I mirrored the video. If I don't mirror it, it looks strange because I basically look in the opposite direction from where I should. I fixed this by removing the front-facing camera :) I wanted to leave more room for the roof camera + viz, so I got rid of my silly face.
 
The firmware is not the full NAND, just a partition on it. You'd still have a user partition, a log partition, and I also think something to store bug checkpoints. Only the firmware partition (usually called system) would be CRC-checked; the others can be written to. Think about your profile: seat/wheel/mirror settings, camera calibration data. All of that persists over a firmware update. Your car also ~somehow~ learns to fold mirrors or activate Sentry Mode in specific locations. And Homelink. It does more locally than we think! (See the sketch below.)

My address takes my car to the front of my house, but my garage is in the back, technically on a different street. Going around the block (residential development) to park is something that should be very specific to my car and use a very limited map plot to achieve. In my mind, it makes sense for it to be strictly local.

Also, thanks for taking my suggestion of "don't quote me on that" literally 🤣🤣🤣
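The layout being described would look roughly like this. To be clear, this is a hypothetical sketch of such a partition scheme, not an actual Tesla NAND map:

```python
# Hypothetical NAND layout matching the description above; not dumped from
# a real car. Only the system image is immutable; the rest is ordinary
# writable storage that survives firmware updates.
nand_partitions = {
    "system": {"writable": False, "integrity_checked": True,
               "holds": "firmware + NN weights (replaced whole on update)"},
    "user":   {"writable": True,  "integrity_checked": False,
               "holds": "profiles, seat/mirror settings, Homelink locations"},
    "log":    {"writable": True,  "integrity_checked": False,
               "holds": "diagnostics, crash/bug checkpoints"},
    "cal":    {"writable": True,  "integrity_checked": False,
               "holds": "camera calibration (persists across updates)"},
}

# A preference like "fold mirrors at this location" is just a small record in
# writable storage; no NN retraining involved.
def save_location_pref(partition: dict, name: str, lat: float, lon: float):
    assert partition["writable"], "can't write to a verified partition"
    partition.setdefault("prefs", {})[name] = (lat, lon)

save_location_pref(nand_partitions["user"], "fold_mirrors_home", 37.44, -122.14)
print(nand_partitions["user"]["prefs"])
```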
 
Speaking of which, @verygreen - I presume from experience that's how the NAND is set up. Any way you can confirm?
 

So....nope.

@verygreen addressed that EXACT thing a while back, when someone claimed the V8 beta had learned (which it can't) or that Tesla had somehow sent an update to driving behavior WITHOUT a full firmware flash, and he explained that changing anything there would break dm-verity.

Stuff written to the temp areas of the AP computer does not survive a reboot.

Stuff like user settings isn't used by the DRIVING computer anyway, so it doesn't need to be stored there; hence why I said there could be general user preferences, like how narrow a space it'll park in... but NOT actual "change specific NN/driving code behavior".
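For anyone unfamiliar, dm-verity detects exactly this kind of change. Below is a heavily simplified, single-level sketch of the idea (real dm-verity builds a full Merkle hash tree over the block device and verifies a signed root hash):

```python
# Simplified dm-verity-style check, illustrative only.
import hashlib

BLOCK = 4096

def block_hashes(image: bytes) -> list:
    return [hashlib.sha256(image[i:i + BLOCK]).digest()
            for i in range(0, len(image), BLOCK)]

def root_hash(image: bytes) -> bytes:
    # Single-level stand-in for the real hash tree: hash the block hashes.
    return hashlib.sha256(b"".join(block_hashes(image))).digest()

image = b"\x00" * (BLOCK * 4)          # pretend read-only system partition
trusted_root = root_hash(image)        # anchored in the verified boot chain

# Flip a single byte anywhere on the partition and verification fails:
tampered = image[:100] + b"\x01" + image[101:]
print(root_hash(image) == trusted_root)     # True
print(root_hash(tampered) == trusted_root)  # False: in-place "learning" breaks it
```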

 
I think the two scenarios are not related. You're probably talking about a locally patched NN, but I'm talking about an entry in the user's preferences storage (which, just like maps, would feed a destination for the car to navigate to, and perhaps a few extra maneuvers).

Sure, what I'm talking about is not deployed today; it doesn't exist. But think about "park seek" mode, which is expected. It's really not very different, except you are only allowed to park in your own driveway/garage, which would be the setting I'm talking about (and it already exists somehow; if you have Homelink you know how that goes).