Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Almost ready with FSD Beta V9

Do we know that for certain - that FSD actually learns over time? My greatest concern with FSD/NoA is that it is entirely reactive and not at all predictive. It doesn't seem to take the long view of the road, just what's immediately in front of it. I liken it to the skills of a first-year driver: it can obey traffic controls nicely, but dealing with other (human) drivers and their random actions is problematic.

As someone who's driven for 50 years, I've got a well-honed set of situational-awareness skills that let me predict problem situations. I haven't seen that degree of learning evident in FSD yet. There's a video of a Tesla highway accident on Twitter where the cars in the adjacent lane panic-brake, but the Tesla proceeded happily in its lane at 70 mph until a car swerved directly in front of it. Bang. Commenters who watched that video said they would have braked or slowed down immediately when they saw trouble in the adjacent lane about 3 seconds earlier. 3 seconds at 70 mph is about the length of a football field.
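To put a number on that last claim, here's a quick back-of-the-envelope check (editor's illustration; the speed and time are from the post above):

```python
# Sanity check: how far does a car travel in 3 seconds at 70 mph?
MPH_TO_MPS = 0.44704               # metres per second per mph
FT_PER_M = 1 / 0.3048

speed_mps = 70 * MPH_TO_MPS        # ~31.3 m/s
distance_m = speed_mps * 3         # distance covered in 3 seconds
distance_ft = distance_m * FT_PER_M

print(f"{distance_m:.1f} m = {distance_ft:.0f} ft")  # 93.9 m = 308 ft
```

A football field is 300 ft between the goal lines, so the claim holds up.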

Until all cars are automated, humans will be the largest safety hazard. It would be cool if FSD evolved to the point where it possessed the sum total of the best techniques and habits of the best human drivers, covering every situation.

You may already know some or all of this: Contrary to what many believe on YouTube, FSD does not immediately learn from inputs or situations that cars encounter on the street.

Instead, information from disengagements, or from scenarios Tesla is looking to solve, is harvested from the fleet. It's then labeled as necessary. Unit tests are created (to verify that these situations get solved) and the new data is fed into the training computer. Once the network is retrained with hundreds or thousands of new examples of the problem scenario (which can take days or longer), the unit tests are used to verify that the situation has been addressed, and they move on to the next thing.
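The loop described above can be sketched in miniature. Everything here is the editor's invention (Tesla's internal tooling is not public), but the shape matches the post: harvest, label, write unit tests, retrain, verify.

```python
# Toy sketch of the closed-loop "data engine" described above.
# The "network" is just a counter of absorbed examples per scenario.

def harvest(fleet_clips, scenario):
    """Pull clips from the fleet that match the problem scenario."""
    return [c for c in fleet_clips if c["scenario"] == scenario]

def label(clip):
    """Stand-in for human/automatic labelling of a clip."""
    return {**clip, "label": clip["scenario"]}

def retrain(network, examples):
    """Stand-in for a training run: the 'network' memorises how many
    examples of each scenario it has seen."""
    for ex in examples:
        network[ex["label"]] = network.get(ex["label"], 0) + 1
    return network

def unit_test(network, scenario, min_examples=3):
    """Passes only once the network has absorbed enough examples."""
    return network.get(scenario, 0) >= min_examples

# One iteration of the loop for a hypothetical "cut-in" scenario.
fleet_clips = [{"scenario": "cut-in"}] * 5 + [{"scenario": "pedestrian"}] * 2
network = {}

examples = [label(c) for c in harvest(fleet_clips, "cut-in")]
network = retrain(network, examples)

print(unit_test(network, "cut-in"))      # True  - scenario addressed
print(unit_test(network, "pedestrian"))  # False - next thing to work on
```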

But the above is only how the "Software 2.0" portion of the FSD stack is implemented - primarily the recognition and layout of objects in the scene. There is still a fairly large portion (planning/driving logic, for example) that is coded by hand in C++. Portions of that will be replaced over time by neural-network versions.
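The split described above can be pictured as a learned perception layer feeding a hand-written planner. This is purely illustrative (names and thresholds are the editor's, not Tesla's code), shown in Python rather than C++ for brevity:

```python
# Illustrative hybrid stack: "Software 2.0" perception + hand-coded planning.

def perceive(camera_frames):
    """In the real stack this is a neural network turning pixels into
    objects; here it is stubbed with a fixed detection."""
    return [{"kind": "car", "distance_m": 40.0, "closing_speed_mps": 5.0}]

def plan(objects, ego_speed_mps):
    """Hand-coded ('Software 1.0') planning logic: explicit rules in code,
    the part the post says is gradually being replaced by networks."""
    for obj in objects:
        if obj["closing_speed_mps"] > 0:
            time_to_contact = obj["distance_m"] / obj["closing_speed_mps"]
            if time_to_contact < 3.0:
                return "brake"
    return "maintain"

print(plan(perceive(None), ego_speed_mps=30.0))  # maintain (8 s to contact)
```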

When Elon said that 8.3 (now 9.0) has over 1000 improvements, many of these are likely in the manually coded C++ planning and control portion of the car. At least I hope so, because IMO the biggest roadblock right now to true FSD is not the vision processing, but the path planning and control of the vehicle.

Part of what you're asking about is "temporal persistence": seeing the trajectory of hazards over time, predicting where things will go, etc. The code does account for that, although they indeed still have improvements to make.
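One simple form of this is tracking an object over a few frames and extrapolating its motion forward. The sketch below uses a constant-velocity model; it is the editor's illustration, not Tesla's actual tracker:

```python
# Constant-velocity extrapolation from a short track history.

def predict(track, dt):
    """track: list of (t, x, y) observations, oldest first.
    Returns the predicted (x, y) dt seconds after the last observation."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    return (x1 + vx * dt, y1 + vy * dt)

# A car observed drifting toward our lane (lateral offset y shrinking):
track = [(0.0, 50.0, 3.5), (0.5, 48.0, 3.0), (1.0, 46.0, 2.5)]
print(predict(track, 2.0))  # (38.0, 0.5) - nearly in our lane within 2 s
```

A real tracker would smooth noisy observations (e.g. with a Kalman filter) rather than differencing two raw points, but the idea is the same: hazards have trajectories, not just instantaneous positions.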

Part of what you're asking about is reading a situation as a whole. The "braking in adjacent lanes" accident you described is a good example of this. Granted, that accident occurred with AP and not FSD, and I don't recall seeing an FSD video that encounters this situation, but the current vision system is easily able to identify that cars in the adjacent lane are braking hard to a stop. They may just not have logic to slow the vehicle in this situation.

Many of these situations will eventually be accounted for. It's just that there are so many possibilities that it takes time for Tesla to work them all out. That is what Elon means when he refers to the "march of 9s": getting to 99.9% safety, then 99.99%, then 99.999%, etc.
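To make the "march of 9s" concrete with made-up numbers (editor's illustration; treating reliability as a per-mile success rate is an assumption, not anything Tesla has published):

```python
# Each extra 9 of per-mile reliability cuts expected failures tenfold.

def expected_failures(miles, reliability_per_mile):
    return miles * (1 - reliability_per_mile)

for nines, r in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    print(f"{nines} nines: ~{expected_failures(100_000, r):.0f} "
          f"expected failures per 100k miles")
```

The point of the exercise: the gap between "mostly works" and "safe enough to remove the driver" is several orders of magnitude, which is why each additional 9 takes so long.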

As for possessing the total of the best driving techniques and habits, I believe we'll get there, but it will take a while. But even before then, FSD should still be much safer than a human driver, because the biggest contributors to accidents on the roadways are the biggest human flaws: inattention, impatience, sleepiness, lack of following distance, etc. The first three immediately go away with an autonomous system. The last (and other "driving technique" issues) get addressed slowly over time.
 
vision can be fairly easily trained to recognize the second vehicle ahead by seeing thru the windows
I mean, I hope this is true, but do we actually know that FSD can currently do this consistently through fairly obscured/small windows (ones through which a human could definitively identify a vehicle, though it would take a bit of focus)? I certainly think it will eventually need to do this at least as well as a human (though it's not something we rely on heavily for driving, since we cope with vehicles with tinted windows), but I'm not sure there is clear evidence it does right now. It would be hard to tell with radar currently in use. I guess someone could look at Green's marked-up videos and find an example... but radar may be involved there too (it's not clear how those boxes are generated and whether radar plays any part in them).

And how much memory does the system currently have? Do we know? If it doesn't see the car ahead of the car ahead for two minutes, does it still know it's there (like a human would)? Again, I'm not sure we can say as long as radar is in the picture.


If you are following a bus or semi truck where you cannot see the second car ahead, the mass of the truck/bus is so much larger that a collision ahead wouldn’t reduce the truck’s velocity as much due to its much higher momentum. Therefore the need for rapid braking is much reduced.
Unless there is another truck or cement mixer ahead of the truck ahead. Or the truck ahead has a high volume-to-weight ratio because it's currently unloaded. There are a lot of cases where this assumption would not be valid.
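Conservation of momentum makes the disagreement concrete. For a perfectly inelastic collision, the post-impact speed is v' = (m1·v1 + m2·v2)/(m1 + m2). The masses below are the editor's illustrative round numbers:

```python
# Post-collision speed for a perfectly inelastic rear-end collision.

def post_collision_speed(m_lead, v_lead, m_obstacle, v_obstacle=0.0):
    return (m_lead * v_lead + m_obstacle * v_obstacle) / (m_lead + m_obstacle)

v = 30.0  # m/s, lead vehicle speed

# Loaded semi (36 t) hits a stopped car (1.5 t): it barely slows.
print(f"{post_collision_speed(36_000, v, 1_500):.1f} m/s")   # 28.8 m/s

# Empty box truck (5 t) hits a stopped cement mixer (30 t): near-stop.
print(f"{post_collision_speed(5_000, v, 30_000):.1f} m/s")   # 4.3 m/s
```

So the "high momentum" argument works for a heavy, loaded lead vehicle hitting something light, and fails exactly in the cases this reply lists.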
 
“Almost ready” = not ready.

I wonder whether the reason they are looking at removing radar is conflicts between the camera and radar. After all, Tesla has been looking at all the issues and would be doing root-cause analysis. They might be finding that conflict with radar is the reason for many failures?

BTW, this “radar will be removed” should be read in the context of city NOA only.
 
Sometimes I wonder if Elon is like one of those people you see in movies (and probably real life) who make an *incredible*, statistically unheard of run at the craps table, and then start getting more and more cocky, and then suddenly the dice stop rolling their way but they bet too big and throw away all their big wins.

If so, I wonder how far we are from seeing the losing bets get scooped up by the croupier.
Yes - landing rockets back vertically is exactly like someone winning in casinos.
 
SpaceX is very clearly not Tesla.
Similarities:
  1. Run by a madman, aren't we all? :)
  2. Disregard for the impossible
  3. Both have accomplished what others deemed impossible
  4. Elon makes impossible demands of his people and the peeps try to deliver
  5. Lots of resources to make it happen
  6. Often late with projects. Example: Falcon Heavy
 
Similarities:
  1. Run by a madman, aren't we all? :)
  2. Disregard for the impossible
  3. Both have accomplished what others deemed impossible
  4. Elon makes impossible demands of his people and the peeps try to deliver
  5. Lots of resources to make it happen
  6. Often late with projects. Example: Falcon Heavy
Differences: only one of them has been taking money from consumers for almost 5 years for a product that can only be used on one device that may not even be functional anymore if the product is ever finished.
 
I mean, I hope this is true, but do we actually know that FSD can currently do this consistently through fairly obscured/small windows (ones through which a human could definitively identify a vehicle, though it would take a bit of focus)? I certainly think it will eventually need to do this at least as well as a human (though it's not something we rely on heavily for driving, since we cope with vehicles with tinted windows), but I'm not sure there is clear evidence it does right now. It would be hard to tell with radar currently in use. I guess someone could look at Green's marked-up videos and find an example... but radar may be involved there too (it's not clear how those boxes are generated and whether radar plays any part in them).

And how much memory does the system currently have? Do we know? If it doesn't see the car ahead of the car ahead for two minutes, does it still know it's there (like a human would)? Again, I'm not sure we can say as long as radar is in the picture.


Unless there is another truck or cement mixer ahead of the truck ahead. Or the truck ahead has a high volume-to-weight ratio because it's currently unloaded. There are a lot of cases where this assumption would not be valid.

I can't say for certain that FSD can *currently* recognize vehicles through a window (I'd have to review footage to check), but it is certainly doable with current vision and NN technology without too much trouble. We do have lots of examples where the car in front of a lead car is shown on the display, however.

There is some temporal persistence in the current system, but it needs to be improved and certainly won't keep track of something that it hasn't seen for 2 minutes.

In the scenario of two high-gross-weight vehicles, the lead vehicle wouldn't be able to stop quickly anyway, so that's not much of an issue either.
 
Differences: only one of them has been taking money from consumers for almost 5 years for a product that can only be used on one device that may not even be functional anymore if the product is ever finished.

Keep complaining over and over, but remember you're the one who bought it despite the disclaimer...
 
In the scenario of two high-gross-weight vehicles, the lead vehicle wouldn't be able to stop quickly anyway, so that's not much of an issue either.
It's actually not necessary for the vehicle in front of the lead vehicle to stop quickly for this to be problematic (think: inattention on the part of the lead vehicle...). Anyway, I guess you can come up with a variety of non-problematic scenarios if you want, but practically speaking, I've encountered many situations where it is difficult to see the lead car visually, and it hasn't shown up on the visualization either (whether or not the car knows it's there, I have no idea). And it doesn't require extremely heavy vehicles.

Anyway, to minimize risk, it's a good idea to increase following distance to max and watch for slowing traffic. And increase following distance even further (beyond max) if you are being followed closely and can't do anything about it, of course.
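For rough numbers behind that advice (editor's illustration; a following-distance setting expressed as a time gap converts to distance as gap × speed):

```python
# Distance corresponding to a given time gap at highway speed.
MPH_TO_MPS = 0.44704

def gap_distance_m(speed_mph, gap_s):
    return speed_mph * MPH_TO_MPS * gap_s

for gap in (1, 2, 3):
    print(f"{gap} s at 70 mph -> {gap_distance_m(70, gap):.0f} m")
```

A one-second gap at 70 mph leaves only ~31 m, which is far less than typical braking distance from that speed; hence the advice to max out the setting.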

As discussed earlier, I don't think radar is required to make it very safe. We don't come equipped with radar, and some drivers have excellent safety records when it comes to avoiding this sort of accident.
We do have lots of examples where the car in front of a lead car is shown on the display, however.
Right, due to radar (as far as we know, anyway).
 
“Almost ready” = not ready.
...
BTW, this “radar will be removed” should be read in the context of city NOA only.
Sounds like a lot more than "city NOA only":

[Attached screenshot: Elon Musk.PNG]
 
Tesla is the first successful new automobile manufacturer in the US in the last 100 years.
Which doesn't change the fact that they've been selling features for about half a decade that they're still, at best, years from delivering.

Yes, they do a lot of things well, but they're absolutely deserving of criticism for the ongoing failure to deliver FSD *as they continue to hype it and pretend it's almost ready*.
 
No, there are already 5 forward-facing cameras when counting the pillar cameras.
Perhaps I am not understanding this, but isn't radar an easier and mathematically simpler way to measure distance to an object than vision? With radar it's just d = c·t/2, with t being the round-trip time of the radar return, right? With a visual image, the computer needs to recognize the objects in the image using an object-recognition algorithm, then interpret each object's orientation and compare its shape and size to surrounding objects. Seems like vision is more likely to lead to errors?

Besides, our eyes can easily be tricked by images that look 3D but aren't actually three-dimensional in the real world, so how does a vision-only driving system solve that?
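For reference, radar ranging really is that simple: the echo travels out and back, so the range is half the round-trip time multiplied by the speed of light. A worked example (editor's illustration):

```python
# Radar ranging: d = c * t / 2, where t is the round-trip echo time.

C = 299_792_458.0  # speed of light, m/s

def radar_range_m(round_trip_s):
    return C * round_trip_s / 2

print(radar_range_m(1e-6))  # a 1-microsecond echo -> ~150 m
```

Vision, by contrast, has to infer depth from appearance, parallax, and motion, which is exactly why the question above is a fair one.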
 
Perhaps I am not understanding this, but isn't radar an easier and mathematically simpler way to measure distance to an object than vision? With radar it's just d = c·t/2, with t being the round-trip time of the radar return, right? With a visual image, the computer needs to recognize the objects in the image using an object-recognition algorithm, then interpret each object's orientation and compare its shape and size to surrounding objects. Seems like vision is more likely to lead to errors?

Besides, our eyes can easily be tricked by images that look 3D but aren't actually three-dimensional in the real world, so how does a vision-only driving system solve that?
I honestly bet that vision-only can't tell these aren't tunnels, or at best gets confused most of the time. Radar and lidar would say "wall" from any distance or approach angle.

[Attached images: StreetArt_Painted_Tunnel_02.jpg, StreetArt_Tunnel_To_Nowhere.jpg]
 
Tesla is the first successful new automobile manufacturer in the US in the last 100 years.

But better cars and manufacturing processes have been available for decades elsewhere (in Japan, for example). Musk's attempts to reinvent the manufacturing process have been a disaster: not having enough staff to move the cars when the Model 3 went into production, not bothering to wait for paint to dry...