
FSD rewrite will go out on Oct 20 to limited beta

Yeah, but what disengagement rate is good enough for release? If we have to wait for Tesla to get to 10,000 miles per disengagement, it will be a while before Tesla releases it to the entire fleet.
No idea. I hope they have some targets in mind.

In some companies, to try out new features (on the web), the dev team has to define KPIs and the expected change before the features are flighted in production.
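Something like this, just to make the idea concrete (the names, numbers, and thresholds below are completely made up):

# Toy sketch of a KPI gate for flighting a feature (all names hypothetical)
def ready_to_flight(metrics: dict, targets: dict) -> bool:
    """Flight only if every KPI meets its pre-agreed target."""
    return all(metrics.get(k, 0.0) >= v for k, v in targets.items())

targets = {"miles_per_disengagement": 10_000, "crash_free_rate": 0.9999}
metrics = {"miles_per_disengagement": 4_200, "crash_free_rate": 0.9999}
print(ready_to_flight(metrics, targets))  # False: disengagement KPI not met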
 
I'll bet the release bar is something lower than a disengagements-per-mile target. My dreamed-up release bar:
  1. HW3 computer doesn't reboot. In other words, a stable system.
  2. Make sure they have done everything they can to
    • warn the driver
    • inform the driver
    • make sure the driver stays engaged.
  3. Perhaps: serious disengagements are near zero. An example of a serious disengagement is heading into oncoming traffic. (A toy version of this check is sketched below.)
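A toy version of the item-3 check, with an invented log format and an invented threshold:

# Toy check for release-bar item 3: serious disengagements near zero.
# The "severity" flag and log format are made up for illustration.
disengagements = [
    {"severity": "serious", "desc": "headed into oncoming traffic"},
    {"severity": "minor",   "desc": "late lane change"},
]
miles_driven = 125_000
serious = sum(1 for d in disengagements if d["severity"] == "serious")
serious_per_million_miles = serious / miles_driven * 1_000_000
print(serious_per_million_miles)  # 8.0 here; the bar would demand ~0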
 
My opinion is that Tesla’s biggest source of disengagements right now is vehicle path planning/execution problems, not so much perception. I believe much of that code is currently still Software 1.0 (hand-coded logic) rather than Software 2.0 (neural networks).
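For anyone new to the Software 1.0/2.0 framing, the difference is roughly this (a toy sketch in the follow-the-lead-car setting, not anything from Tesla's actual code):

# Software 1.0: hand-coded planning logic, rules written by engineers
def plan_1_0(gap_m, my_speed_mps, lead_speed_mps):
    if gap_m < 10:
        return -3.0              # brake hard
    if my_speed_mps < lead_speed_mps:
        return 1.0               # accelerate gently to keep up
    return 0.0                   # hold speed

# Software 2.0: the same decision learned from data instead of if-statements
import torch.nn as nn
plan_2_0 = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
# its "logic" lives in trained weights, so fixing a bug means retraining,
# not editing code; that is why more data helps 2.0 but not known 1.0 bugs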

As a result, I don’t see a good reason for Tesla to do wide release until they’ve fixed the current issues. After all, more data doesn’t help much with fixing known logic bugs/problems.

If, however, a portion of path planning/execution is driven by a neural net, or if it is PLANNED to be a neural net in a relatively short timeframe, then I could see a benefit to a wider release, because in that space data is king.

I don’t know enough about what’s under the hood to know for sure, but I lean toward path planning being mostly hand-coded logic in the current iterations.
 

"This policy is still in the land of 1.0"

I don't think the policy is going to be 2.0 until Dojo is up and running, as the overhead/cost of training a "software 2.0" policy engine is far too high compared to 1.0.

Dojo will bring down training time and cost by at least 10x, probably closer to 100x, and that is when they can start training a policy neural net. If we assume this to be true, then Dojo is still at best a year away from being completed, so the software 2.0 release of the policy within AutoPilot would by that virtue still be many years away.
 
I actually have a difficult time imagining what software 2.0 driving policy would look like. Right now, perception is pretty straightforward: the input is camera/sensor data, and the output is a 3D map of what's presently around the vehicle... For policy, the input would presumably be the output from the perception stack, but what would the output be? The precise steering wheel angle and acceleration/braking percentage needed at that millisecond to accomplish a goal? A 3D plan for how to navigate any given situation (at which point software 1.0 would execute the plan)?
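To make those two options concrete, here's what each output shape might look like (the embedding size and everything else are invented, pure speculation on my part):

import torch
import torch.nn as nn

scene = torch.randn(1, 256)       # pretend embedding from the perception stack

# Option (a): direct controls for this instant, [steering angle, accel/brake]
controls_head = nn.Linear(256, 2)
steer, accel = controls_head(scene)[0]

# Option (b): a short plan, N future (x, y) waypoints for 1.0 code to track
N = 20
plan_head = nn.Linear(256, N * 2)
waypoints = plan_head(scene).view(N, 2)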
 
The dream is an end-to-end NN. Input is x number of cameras + radar etc - output is acceleration/deceleration+steering.
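A toy version of that, just to show how narrow the interface would be (obviously nothing like a production architecture):

import torch
import torch.nn as nn

# Toy end-to-end net: camera pixels in, controls out
class EndToEnd(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.control = nn.Linear(32, 2)    # [steering, accel/decel]

    def forward(self, frames):             # frames: (batch, 3, H, W)
        return torch.tanh(self.control(self.vision(frames)))

out = EndToEnd()(torch.randn(1, 3, 240, 320))  # one fake camera frame
print(out)   # tensor like [[steer, accel]], each in [-1, 1]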
 
At that point you might as well make a humanoid robot with an end-to-end NN and then you would be able to replace every human driver on the road using existing vehicles. It would also help with other FSD problems like plugging in at a supercharger (or pumping gas :eek:) and changing a tire.
 
Apologies if this is in one of the Karpathy videos above that I keep meaning to watch, but I'm always curious how you test edge cases in a driving policy NN. Also, what kind of overriding constraints do you put on it to make sure the car isn't regressing from "ideal" to "average" driving behavior as it gets real data from the fleet, and that it is strictly following all applicable laws?
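I have no idea what Tesla actually does, but one standard belt-and-suspenders pattern is a hand-coded envelope around the NN's proposals, something like this (all bounds made up):

# Let the NN propose, let hand-written rules veto/clamp, so a drifting
# model can't break hard legal limits. Purely a speculative sketch.
def safe_controls(nn_steer, nn_accel, speed_mps, speed_limit_mps):
    accel = nn_accel
    if speed_mps >= speed_limit_mps and accel > 0:
        accel = 0.0                        # never accelerate past the limit
    steer = max(-0.5, min(0.5, nn_steer))  # clamp steering (invented bound)
    return steer, accel

print(safe_controls(0.9, 0.3, speed_mps=30.0, speed_limit_mps=29.0))
# -> (0.5, 0.0): the NN's proposal gets clipped by the rule layer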
 
I remember hearing in an interview with George Hotz (comma.ai CEO and proponent of end-to-end NN self driving) that his hypothesis is that all good drivers are good in the same way, whereas bad drivers are bad in different ways; therefore the center of the distribution is actually a good driver.
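You can see that intuition numerically with made-up lane-offset data: the tight cluster of good drivers dominates the median even when the bad drivers scatter in both directions:

import numpy as np

# Lane offset in meters; 0.0 = perfectly centered (values invented)
good = np.random.normal(0.0, 0.05, 80)                    # tight cluster
bad = np.concatenate([np.random.normal(-0.8, 0.3, 10),    # hugs the left
                      np.random.normal(0.7, 0.4, 10)])    # drifts right
offsets = np.concatenate([good, bad])
print(np.median(offsets))   # ~0.0: the median ignores the scattered outliers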
 
Looking forward to seeing how the FSD visualisation changes for the broader release.

I hope they resist the temptation to make it look like a video game. It should be something that a driver can just glance at and see key info instantly. The current visuals are a borderline "fail" for that already - you cannot see traffic approaching from the rear until it is virtually alongside, for example.
 
Do you really think Tesla's acceptable casualty rate is that?

Can they afford the lawsuits?

What happened to robotaxi by the end of the year?
I think it's pretty clear that when the public FSD is released it's going to still be a Level 2 driver assist like it is for the beta now. Hopefully this cuts down on the liability. I was initially skeptical that a human could react in time if the FSD beta did something stupid, but I've been pleasantly surprised by the videos so far.
 
I agree. For those of us who already use AP on city streets (even without FSD), this will be a huge improvement in just about every way.