Welcome to Tesla Motors Club

Poll: when will FSD Beta v9.x go fleet-wide?

Total voters: 111 (poll closed)
Elon's tweet about removing production radar is incredibly ballsy IMO. If FSD Beta gets into an accident and it is revealed that the camera vision made a mistake that radar would have prevented, there will be loud cries for Tesla to put radar back into cars.
I would be far more impressed if they announced a rollout of improved hardware rather than removing any hope of redundancy. Put another camera at least in the front side units. Add 360 radar. Fix the close-up blind spots near the ground. Add IR driver monitoring.

Look at what happened to the 737 Max. If you get shut down for an extended time it costs you far more than doing it right in the first place.
 
My hunch is that more important than the hardware constraint of lacking 360-degree radar sensors is the software constraint of having no mature deep learning research on feeding radar signals into a vision neural net architecture that is itself a very recent innovation.

If they could "just add radar", of course they would. The question is how to feed both radar signals and video into the same HydraNet (or whatever they're calling their nets nowadays) and turn that into a top-down, 3D (or 4D) model of the car's surroundings that has fewer false positives and false negatives than it would without radar input.

I suspect that Tesla AI has decided this software problem isn't a cost-effective use of their scarce scientific and engineering resources.
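For what it's worth, one way the fusion problem is often sketched is "early fusion": rasterize the sparse radar returns into an extra image channel so a single vision-style network consumes camera and radar together. Here's a toy illustration; the image size, pinhole intrinsics, and rasterization scheme are all my own assumptions, not anything Tesla has described.

```python
import numpy as np

# Toy "early fusion": project sparse radar returns into the camera image
# plane as a fourth channel. All numbers below are made-up assumptions.
H, W = 96, 128                      # toy image size
fx = fy = 100.0                     # assumed focal lengths (pixels)
cx, cy = W / 2, H / 2               # principal point

def rasterize_radar(points_xyz: np.ndarray) -> np.ndarray:
    """Project radar returns (N, 3) in camera coordinates to an (H, W) range map."""
    depth = np.zeros((H, W), dtype=np.float32)
    for x, y, z in points_xyz:
        if z <= 0:                  # behind the camera
            continue
        u = int(fx * x / z + cx)
        v = int(fy * y / z + cy)
        if 0 <= u < W and 0 <= v < H:
            depth[v, u] = z         # store range at the projected pixel
    return depth

rgb = np.random.rand(H, W, 3).astype(np.float32)         # stand-in camera frame
radar = np.array([[1.0, 0.5, 10.0], [-2.0, 0.0, 25.0]])  # two sparse returns
fused = np.concatenate([rgb, rasterize_radar(radar)[..., None]], axis=-1)
print(fused.shape)                  # (96, 128, 4): RGB + radar-range channel
```

Note how sparse the radar channel ends up: two nonzero pixels out of ~12k, which is the "sparse point cloud" point in a nutshell.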
 
Unclear. I don't think it would be terribly complex to add a selfie cam as an accessory.
On the Model X, this is a bunch of work. Remember the route to the mirror is a 2'+ long strip along the window which goes into the headliner in the back of the car. Not impossible, but a lot of labor to get a wire from there to the HW3 computer.
Does the HW3 computer they put in S/X have a spare camera input?
 
I think it can maybe go into the infotainment computer? Not sure if the visual driver monitoring is running on the Autopilot computer or the Tesla OS computer.
 
My hunch is that more important than the hardware constraint of lacking 360-degree radar sensors is the software constraint of having no mature deep learning research on feeding radar signals into a vision neural net architecture that is itself a very recent innovation.

If they could "just add radar", of course they would. The question is how to feed both radar signals and video into the same HydraNet (or whatever they're calling their nets nowadays) and turning that into a top-down, 3D (or 4D) model of the car's surroundings that has fewer false positives and false negatives than it would without radar input.

I suspect that Tesla AI has decided this software problem isn't a cost-effective use of their scarce scientific and engineering resources.
From what @verygreen has posted, my understanding (it could be wrong) is that their distance estimates using vision (and relative speed, which would require observations over time) are kind of noisy and maybe not that great yet. It's possible the new PureVision® fixes this issue; it seems like a difficult problem, though. I'm not even sure how quickly the vision solution would pick up sudden speed changes of vehicles in front (humans are pretty good at this if they're paying attention) without observing brake lights. I could see this being less of an issue in City Streets but perhaps more of an issue with freeway driving (though to some extent the distinction is arbitrary, since high speeds are possible in City Streets too).

I guess we'll see. I have no idea what the actual capabilities are. For sure vision has to work really well, though. Maybe when it works well enough for general environment perception the speed estimates naturally end up being really good, and then maybe they won't need radar. I think it's here to stay for a few years though!
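To put a number on the noise point above: differentiating a noisy range signal amplifies the noise badly, which is presumably why you'd track it with a filter instead. A toy sketch, where the 1 m range noise and the alpha-beta filter gains are made-up numbers for illustration, not measured camera performance:

```python
import numpy as np

# A lead car 50 m ahead closes at a constant 5 m/s; our range estimate of it
# is corrupted by ~1 m of noise (an assumption, not a measured figure).
rng = np.random.default_rng(0)
dt = 0.1                                                 # 10 Hz updates
true_range = 50.0 - 5.0 * dt * np.arange(100)
measured = true_range + rng.normal(0.0, 1.0, size=100)

# Naive finite difference: velocity noise blows up to ~sqrt(2)*sigma/dt ≈ 14 m/s.
naive_speed = np.diff(measured) / dt

# Alpha-beta tracker: smooth (range, range-rate) with fixed gains.
alpha, beta = 0.5, 0.1
r_est, v_est = measured[0], 0.0
speeds = []
for z in measured[1:]:
    r_pred = r_est + v_est * dt            # predict range forward
    resid = z - r_pred                     # innovation
    r_est = r_pred + alpha * resid         # correct range
    v_est = v_est + (beta / dt) * resid    # correct range-rate
    speeds.append(v_est)

print(float(np.std(naive_speed)))          # naive jitter, ~14 m/s
print(float(np.mean(speeds[50:])))         # filtered estimate, near -5 m/s
```

The catch is the filter's lag: the smoothing that tames the jitter is exactly what would make it slow to notice the sudden speed changes mentioned above.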
 
 
Based on the info in the other thread, it seems Tesla is working hard on camera driver monitoring. So I think it is possible that V9 FSD Beta will also come with camera driver monitoring, to both improve driver attention and offer hands-free driving.

So you will have your wish :)... Want to have a friendly wager on how long it will be before people (a) complain about false positives (claiming they were flagged as inattentive when they SWEAR they were paying attention) and (b) figure out a defeat device/trick? :)
 

Yeah, I will definitely be very happy if Tesla enables camera driver monitoring and allows us to go hands-free.

Oh, I am sure it won't take long for people to complain about false positives or find a way to defeat the system.
 
What a crazy word salad from Elon. He first says bits/sec is important, but then says signal/noise dominates. He never makes it clear why optical wavelengths are inherently higher S/N than radar.
This is a criminal misuse of the phrase "word salad". Just because you don't understand something doesn't mean it's nonsense.

Radar is a sparse point cloud. HD video is a high-resolution capture of every inch of the driving scene. Therefore, the amount of signal that neural networks can extract from HD video is inherently higher than from radar.
 

Yet another case of someone saying "Read what Elon meant, not what he said." Your description is reasonable, but it uses almost none of the same words as Elon.

He complains about low bit rates from radar (which doesn't have anything to do with being a sparse point cloud). Low bit rates can easily represent data sources that already have lots of processing and very high S/N. I can then turn around and have an 8K sensor with a crappy lens on it that is no better than a 1K sensor.

And as an FYI, I work with machine vision daily at my job. I understand what he means. He's just SO awful at describing it that it's more like he's using random words he knows. I mean, some guy on the internet was able to describe it better than him in the same number of words, yet he's the zillionaire "technoking" of what is supposedly a leading AI company.
 
Fleetwide, as in every car that bought FSD gets FSD Beta? I'd be shocked if it was available by this time next year. I'm still pretty sure the "Button" will only be available to a limited number of people, up to the 10k Elon said they would be expanding the beta to back when 8.3 was going to be a thing, before they apparently decided to skip the 1,000+ fixes in 8.3 and go straight to v9's "step change" in vision.

Maybe it will be available to everyone if/when they get the interior camera detection fairly reliable and they’re ruthless in cutting people from the Beta when the interior camera’s NN reaches some threshold of “Inattentive driving detected” for X percentage of FSD driving, but otherwise it sounds like a fleetwide distribution of v9 would be flirting with disaster for Tesla.
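Just to make the idea concrete, the kind of cutoff policy described above might look something like this. The 10% threshold and the frame labels are invented for illustration; Tesla has not published any such rule.

```python
# Hypothetical beta-access policy: drop drivers whose share of
# "inattentive" cabin-camera frames exceeds an assumed threshold.
INATTENTIVE_LIMIT = 0.10   # assumed: >10% flagged frames loses access

def keeps_beta_access(frames: list) -> bool:
    """frames[i] is True when the driver-monitoring net flagged inattention."""
    if not frames:
        return True
    return sum(frames) / len(frames) <= INATTENTIVE_LIMIT

print(keeps_beta_access([False] * 95 + [True] * 5))    # 5% flagged -> True
print(keeps_beta_access([False] * 80 + [True] * 20))   # 20% flagged -> False
```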
 
This is a criminal misuse of the phrase "word salad". Just because you don't understand something doesn't mean it's nonsense.

Radar is a sparse point cloud. HD video is a high-resolution capture of every inch of the driving scene. Therefore, the amount of signal that neural networks can extract from HD video is inherently higher than from radar.
Cameras also provide additional data not provided in radar, in addition to their higher spatial data density. Color information is one example.

Radar bouncing characteristics actually CAUSE a lot of problems too. Phantom braking and driving through tunnels are two examples where radar is a hindrance instead of an improvement.
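To spell out the bridge/tunnel point: a low-resolution automotive radar mostly sees range and Doppler, and an overhead gantry and a stopped car in your lane can have the exact same "closing at ego speed" signature. A toy sketch of why filtering stationary returns trades phantom braking for missed stopped vehicles (the numbers are illustrative assumptions):

```python
# Classify radar returns as stationary using only their range-rate
# relative to an assumed ego speed. range_rate < 0 means "getting closer".
EGO_SPEED = 30.0   # m/s, assumed

def is_stationary(range_rate: float, tol: float = 0.5) -> bool:
    """A return closing at roughly ego speed belongs to a stationary object."""
    return abs(-range_rate - EGO_SPEED) < tol

bridge = -30.0        # overhead gantry: closes at exactly ego speed
stopped_car = -30.0   # stopped car in lane: identical Doppler signature!
moving_car = -10.0    # lead car doing 20 m/s: closes at only 10 m/s

print(is_stationary(bridge), is_stationary(stopped_car), is_stationary(moving_car))
```

The first two are indistinguishable on Doppler alone, so a system either brakes for bridges (phantom braking) or risks ignoring stopped cars; that's the hindrance being described.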
 
Yet another case of someone saying "Read what Elon meant, not what he said." Your description is reasonable, but it uses almost none of the same words as Elon.

He complains about low bit rates from radar (which doesn't have anything to do with being a sparse point cloud). Low bit rates can easily represent data sources that already have lots of processing and very high S/N. I can then turn around and have an 8K sensor with a crappy lens on it that is no better than a 1K sensor.

And as an FYI, I work with machine vision daily at my job. I understand what he means. He's just SO awful at describing it that it's more like he's using random words he knows. I mean, some guy on the internet was able to describe it better than him in the same number of words, yet he's the zillionaire "technoking" of what is supposedly a leading AI company.
I would have expected someone working in the field of machine vision to understand what he said better.

Low bit rate doesn’t have to do with being a sparse point cloud? Ok.

By definition, bitrate is the amount of data per unit time.

Radar gives you x/y/z data. Vision gives you x/y/color data, and with 2 or more cameras, z can be calculated from the other data through photogrammetric bundle adjustment.

Even with a 1k camera/lens combo, 1000x1000x3x30 gives you 90M pieces of information per second. Radar doesn’t come close to that.
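Running the arithmetic from the post above. The camera numbers come from the post (1000x1000 pixels, 3 values per pixel, 30 fps); the radar point-cloud figures are illustrative assumptions for a typical automotive radar, not a specific sensor's spec.

```python
# Raw data-rate comparison: one 1k camera vs. a sparse radar point cloud.
camera_rate = 1000 * 1000 * 3 * 30   # values/s, figures from the post above
radar_points = 200                   # assumed returns per scan
radar_values = 4                     # e.g. x, y, z, range-rate per return
radar_hz = 20                        # assumed scan rate
radar_rate = radar_points * radar_values * radar_hz

print(f"camera: {camera_rate:,} values/s")          # 90,000,000
print(f"radar:  {radar_rate:,} values/s")           # 16,000
print(f"ratio:  {camera_rate // radar_rate:,}x")    # 5,625x
```

Even with generous radar assumptions, the camera delivers thousands of times more raw values per second, which is the "bit rate" framing of the argument.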
 
I'd be shocked too. Sadly I am pretty sure it will only be 'fleetwide' in the US. :(
For the purposes of this poll, I am assuming "fleet-wide" was in reference to the US fleet only. And I'm also assuming that "fleet-wide" means an "opt-in" choice for anyone who has purchased FSD in the US. Meaning only people who click through and say they want it will get it. I don't think it will be limited to those with driver monitoring (Model 3/Y). Can you imagine how upset people would be if it were?

I chose September 2021 but I have no idea. It's very hard to predict! All depends on Tesla's risk tolerance (and their progress, of course).
 
December 2021 update is going to be 🔥🔥🔥🔥 again!

It's pretty amazing what Tesla is going to be doing with just camera data. Doesn't the auto wiper function use the FFCs? 🤷‍♂️
🤔
I wonder, does FSD Beta/v9 address the "x camera blinded, and idk wtf to do now" (paraphrasing) problem when you are driving in mild rain and NoA turns off? I thought poor (human-eye) visibility conditions were where radar was the better choice?

Do the cameras already "see" better than we can? What about at night? The resolution of the rear and repeater cameras in the dark takes me back to the days of Motorola flip-phone video. Maybe another camera upgrade is in the future?
 
If you've watched the videos, you'll know that the "x camera blinded" problem doesn't force FSD to disengage.