FSD Beta Videos (and questions for FSD Beta drivers)

There are many other possible reasons. The main one is probably that he doesn't remember accurately and the situations were not identical. But it could also be that Tesla has decided to enable a feature that was previously disabled because they didn't have enough validation. Or something else.

Here is the exact same version with similar lighting conditions back in March 2021. He even says, "it never sees this speedbump":

 
There are many other possible reasons. The main one is probably that he doesn't remember accurately and the situations were not identical. But it could also be that Tesla has decided to enable a feature that was previously disabled because they didn't have enough validation. Or something else.


They can't change the actual driving code without a firmware update.

They can technically enable an entire feature that's already in the code without one (for example, adding acceleration boost to a car that doesn't have it doesn't require a firmware update), but I'm not aware of anything like that ever happening... and I'm not sure what pre-existing "code that makes it somewhat better specifically at seeing speed bumps, which it already often saw in earlier code, but that we can't turn on till we're sure it's safe" would even look like.

Not to mention, config updates to 3+ month old code they've already said they've abandoned don't make much sense either.


So Occam's razor gets us back to the two things I originally suggested, one of which you seem to agree with: maps, or humans being bad at measuring differences.
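
To illustrate the code-vs-config distinction: here's a minimal sketch of how a config flag can switch on a feature whose code already ships in the firmware. All names are made up; this is not Tesla's actual code.

Code:
# Hypothetical sketch, not Tesla's actual code: a config value switching on
# a feature whose code already ships in the firmware.

FIRMWARE_FEATURES = {
    "acceleration_boost": lambda: print("boost enabled"),
    "speed_bump_slowdown": lambda: print("slowing for speed bump"),
}

def load_config():
    # Pretend this dict arrives as a config push, with no firmware reflash.
    return {"acceleration_boost": True, "speed_bump_slowdown": False}

def run_enabled_features():
    for name, enabled in load_config().items():
        if enabled and name in FIRMWARE_FEATURES:
            FIRMWARE_FEATURES[name]()   # the code was always there; the flag just turns it on

run_enabled_features()   # prints: boost enabled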
 
Here is the exact same version with similar lighting conditions back in March 2021. He even says, "it never sees this speedbump":



Notice in that clip it also stopped too early for the sign right after, but it does not do so in the newer footage.

Which, again, points directly to a map update.

I've seen that exact thing on non-FSD Beta: shortly after I first turned on reacting to stop signs, at one sign near me it would stop 5-10 feet short of where it ought to have. A map update fixed it.


Unfortunately, because of how much of a black hole Tesla's map data is (it's a combination of various Mapbox data, which itself draws from a bunch of sources, plus, allegedly, some fleet-gathered data too), it's difficult to demonstrate the way you could with, say, a car that uses Google Maps for this.
 
Which, again, points directly to a map update.

I've seen that exact thing on non-FSD Beta: shortly after I first turned on reacting to stop signs, at one sign near me it would stop 5-10 feet short of where it ought to have. A map update fixed it.

Per Karpathy, maps provide upcoming controls, but the car still validates using vision once it's there. I'm not sure if the car relies on maps to know exactly where to stop. I haven't heard Karpathy say something to that effect.

I'm sure you've seen the stop line on the visualization slide back and forth.
 
Which, again, points directly to a map update.
Maps are one thing that could be the reason for the changes.
But as we saw from the settings available, they could be updating the config and/or weights on the FSD Beta cars without reloading the firmware.

There were screens upon screens of different options that green showed.
Plus, we do not know what kind of agreement the FSD Beta folks signed up for, so these types of updates may be covered.

I am only talking in the context of FSD Beta videos.
 
Per Karpathy, maps provide upcoming controls, but the car still validates using vision once it's there. I'm not sure if the car relies on maps to know exactly where to stop. I haven't heard Karpathy say something to that effect.

I'm sure you've seen the stop line on the visualization slide back and forth.


It uses maps to know when to slow down for a stop it can't see yet, for example (the video in question is a perfect example: he even SAYS it's slowing for the stop sign as it climbs the hill, and then it stops well short of the actual sign once it's visible; sure enough, in the later video both problems are fixed).

It also does that because sometimes a sign might be obscured by vegetation, for example, but you wouldn't want to blow through it even if it's not easily or obviously visible.

Plus, yeah, as you note, the visualizations aren't always perfect either.
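
Roughly, the behavior described above could look like this. A minimal sketch with invented function names and thresholds, not Tesla's actual planner:

Code:
# Illustration only: a map prior triggers early slowing, vision refines the stop point.

def plan_for_stop(map_stop_distance_m, vision_stop_line_m):
    # map_stop_distance_m: distance to a stop the map claims exists (None if unmapped)
    # vision_stop_line_m: distance to a stop line the cameras currently see (None if not visible)
    if vision_stop_line_m is not None:
        # Once the line is visible, vision decides the exact stopping point.
        return f"brake to stop in {vision_stop_line_m} m (vision)"
    if map_stop_distance_m is not None and map_stop_distance_m < 100:
        # Can't see it yet (hill crest, vegetation), but the map says it's there: start slowing.
        return f"pre-slow for mapped stop ~{map_stop_distance_m} m ahead"
    return "maintain speed"

print(plan_for_stop(80, None))   # climbing the hill: slowing on map data alone
print(plan_for_stop(15, 12))     # over the crest: vision places the actual stop line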


Maps are one thing that could be the reason for the changes.
But as we saw from the settings available, they could be updating the config and/or weights on the FSD Beta cars without reloading the firmware.

Settings yes, weights no.

As @verygreen explained on Twitter a while back:

greentheonly said:
the weights are part of the firmware rootfs, measured with dm-verity

Meaning you can't change any of it without updating all of it (i.e., a firmware update).
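
For anyone unfamiliar with dm-verity: the root filesystem is verified against a hash tree whose root hash is fixed (and signed) when the image is built, so changing any file in it, weights included, breaks verification. A toy illustration of the idea, not actual dm-verity code:

Code:
# Toy illustration of the dm-verity idea: the image is hashed block by block,
# and only a root hash fixed at build time is trusted, so in-place changes are rejected.
import hashlib

def root_hash(image: bytes, block_size: int = 16) -> str:
    blocks = [image[i:i + block_size] for i in range(0, len(image), block_size)]
    digests = b"".join(hashlib.sha256(b).digest() for b in blocks)
    return hashlib.sha256(digests).hexdigest()

rootfs = b"driving code ... " + b"NN weights v1 ..."
trusted = root_hash(rootfs)                # baked in when the firmware image is built

tampered = rootfs.replace(b"v1", b"v2")    # "just swap in new weights"
print(root_hash(tampered) == trusted)      # False -> the device won't accept the change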
 
Settings yes, weights no.

As @verygreen explained on Twitter a while back:


Meaning you can't change any of it without updating all of it (i.e., a firmware update).
Settings are enough to change the behavior drastically in some cases (new city streets, smooth city streets, etc.).
But are we sure that FSD Beta firmware does not have hooks to allow overlaying?
 
Invisible car?
[two screenshots attached]

How close are they? This comment is making my head explode. haha.
 
Speaking of that thread, BTW: he specifically mentioned mapped stop lines being in FSD Beta.

So yeah, that dude got a map update :)



I'll believe what Karpathy says regarding maps. Karpathy specifically spelled out how Tesla's SD maps work:


Also, green's tweet doesn't mean the stop lines are hard-coded into the map. Karpathy says, "we do know that there's a stop sign somewhere in the vicinity" with regard to maps. Once the car gets there, it looks at the scene with vision (see the "lines visible" part):

[screenshot of Karpathy's slide on Tesla's SD maps, showing the "lines visible" vision check]
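
To make that distinction concrete, here's a hypothetical shape for what he describes: the SD map supplies only a coarse hint, and vision supplies the precise geometry on arrival. These fields and values are invented, not Tesla's actual schema:

Code:
# Hypothetical data shapes, invented to illustrate "hint vs. measurement".
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MapHint:                              # coarse, from the SD map
    intersection_id: str
    has_stop_sign: bool                     # "there's a stop sign somewhere in the vicinity"
    approx_position: Tuple[float, float]    # rough lat/lon, not a centimeter-accurate stop line

@dataclass
class VisionObservation:                    # precise, produced by the cameras on arrival
    stop_line_visible: bool
    stop_line_distance_m: Optional[float]

def stop_target_m(hint: MapHint, seen: VisionObservation) -> Optional[float]:
    if seen.stop_line_visible:
        return seen.stop_line_distance_m    # trust the measured line ("lines visible")
    return 10.0 if hint.has_stop_sign else None   # otherwise fall back to a guess from the hint

print(stop_target_m(MapHint("int_42", True, (37.39, -122.15)),
                    VisionObservation(True, 11.5)))   # 11.5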
 
What is driving like when you are not using the FSD Beta mode? In the near future, before government approval, the concern is that after hours and hours of trusting your Tesla to take you to work or the store, you will forget to pay attention. How do you see this eventuality playing out? Should it be a concern? I know we should always pay attention while driving, but as I see it, it would be easy to forget to do so while having a conversation with passengers, listening to podcasts, making "to do" lists, etc. We do know that Tesla safety features prevent many accidents, so is it an issue? Yes, I would think so. Maybe the FSD Beta driving should be limited to only 50% or so of one's driving?
Thank you for your opinions in this matter.
 
I am not on the beta, but I can tell you that I have no clue how anyone can enable FSD and then move to the passenger or back seat (you know who I am talking about here) or even read a book. My car makes all sorts of wrong decisions, including swerving to miss vehicles that aren't there, changing into the wrong lane on Autopilot, phantom braking, and more.

Whether or not my hands are on the wheel, my eyes are scanning the roadway.
 
What is driving like when you are not using the FSD Beta mode?
I am not on the beta, because only a few were selected.
In the near future, before government approval, the concern is that after hours and hours of trusting your Tesla to take you to work or the store, you will forget to pay attention. How do you see this eventuality playing out? Should it be a concern?
Yes. There are two different philosophies:

1) Waymo: Humans are not to be trusted and must be removed from operating the car.
2) Tesla: Humans have passed a driving test, so they know how to brake and steer, and there's no need to withhold beta technologies.

I know we should always pay attention while driving, but as I see it, it would be easy to forget to do so while having a conversation with passengers, listening to podcasts, making "to do" lists, etc. We do know that Tesla safety features prevent many accidents, so is it an issue? Yes, I would think so.
Yes. There have been Tesla accidents and fatalities. So, it is still an issue.

...Maybe the FSD Beta driving should be limited to only 50% or so of one's driving?...
I have no idea how that 50% plan mentioned above would work.

FSD Beta is only on about 2,000 of the millions of Tesla cars, so that is nowhere near 50%.

I think the better solution is to use the anti-collision technology that's been proven to work with Waymo since 2009: LIDAR. Its collision avoidance has been further proven since 2019 with its current robotaxis, which have no drivers or staff in the vehicles and are geofenced to about 50 square miles in Chandler, AZ.

Currently, Waymo's issues are not about collisions; they are about other things, such as intelligence. Its robotaxis don't collide with anything, but they might encounter a new scenario, keep on calculating, and sit stationary in the meantime.

So, if Tesla adopted that anti-collision technology by adding LIDAR, the collision issue would be taken care of; as for the intelligence issue, human drivers could manually take the car out of the paused mode while it's still thinking about how to solve a new scenario.
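
A bare-bones sketch of that proposed handover behavior (pause when the planner can't resolve a scenario, resume only on human input); everything here is hypothetical, not any vendor's actual implementation:

Code:
# Hypothetical sketch of the proposed fallback: pause when the planner is stuck,
# resume only when a human takes the car out of the pause.
from enum import Enum, auto

class Mode(Enum):
    DRIVING = auto()
    PAUSED = auto()     # stopped safely, still "thinking" about the scenario

def step(mode: Mode, planner_has_solution: bool, human_override: bool) -> Mode:
    if mode is Mode.DRIVING and not planner_has_solution:
        return Mode.PAUSED          # anti-collision layer brings the car to a stop
    if mode is Mode.PAUSED and human_override:
        return Mode.DRIVING         # the driver manually resumes
    return mode

mode = Mode.DRIVING
mode = step(mode, planner_has_solution=False, human_override=False)   # novel scenario -> PAUSED
mode = step(mode, planner_has_solution=False, human_override=True)    # human takes over
print(mode)   # Mode.DRIVING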
 
What is driving like when you are not using the FSD Beta mode? In the near future, before government approval, the concern is that after hours and hours of trusting your Tesla to take you to work or the store, you will forget to pay attention. How do you see this eventuality playing out? Should it be a concern? I know we should always pay attention while driving, but as I see it, it would be easy to forget to do so while having a conversation with passengers, listening to podcasts, making "to do" lists, etc. We do know that Tesla safety features prevent many accidents, so is it an issue? Yes, I would think so. Maybe the FSD Beta driving should be limited to only 50% or so of one's driving?
Thank you for your opinions in this matter.
1) Waymo: Humans are not to be trusted and must be removed from operating the car.
2) Tesla: Humans have passed a driving test, so they know how to brake and steer, and there's no need to withhold beta technologies.

Just to expand on this: before it was called Waymo, Google's self-driving project conducted an experiment in 2013 where they put their employees in cars equipped with Google's early self-driving system. The system could do hands-free L2 highway driving, but the employees were told to pay attention. They had cameras in the cars to monitor the drivers and found that the drivers did trust the system too much, did become complacent, and did not pay attention. The drivers were found checking their phones, texting, using their laptops, applying make-up, and even sleeping. So Google decided to skip L2 and focus entirely on L4 with Waymo. Based on this experiment, they believe that humans cannot be trusted to monitor a system that is "almost FSD". So they believe L2 "FSD" is a bad idea and that L4 that does not require a driver is the better approach.

Here is some video of the Google experiment where we see the drivers not paying attention:


More info: Google's Waymo Asked People To Test Its Semi-Autonomous Car Tech. What Happened Next Will Not Surprise You
 