Anecdotally, a month or two to general roll out.
Not possible in a month. I've not even heard that v12 has gone to employees. After it goes to employees, it will likely take two or more updates before it goes to Whole Mars and the rest of the OG testers. It will take at least two more updates before it starts to roll out to a small number of us ordinary users on the test branch. Probably a couple of updates later before it goes wide on the test branch.

That's at least six updates before I see it (unless I win the rollout lottery). I wonder how long it takes to do v12 training? Two weeks? A month?

All this assumes that there are no major setbacks or roadblocks. This is likely a very bad assumption.
 
From the company's liability perspective it makes a big difference.

At what X will the jury decide it's not the manufacturer's fault?
It’s not about what X is, it’s about the difference between an inherent defect and ordinary manufacturing failure rates. Cars fail and cause accidents, since all components have some level of manufacturing defect. Liability comes about when the part or design was not fit for purpose, rather than from an isolated failure. Tesla will certainly argue that this applies to systems such as FSD.
 
All this assumes that there are no major setbacks or roadblocks. This is likely a very bad assumption.
Absolutely. I gave it 6-12 months as soon as I heard they were working on it. It's one thing to write safety critical C++ code. Now they've gotta figure out how to produce safety critical neural networks. They've never done that, so it's going to take time. It's a new skill set.

The faster they turn around V12 to match the current system, the better an indication it will be that V12 has real promise. If it's six months, that's a great sign. If it's a year, not so much. Longer, and I'd assume that either they can't figure out how to master the necessary techniques, or the current hardware and software just doesn't have the potential to do what's needed.
 
The question is, how much unacceptably and surprisingly bad behavior could come about because the system doesn't know, and presently has no way of being told, information that human drivers take for granted?

The solution is an LLM "copilot" to coach the autopilot on navigation and any other relevant information (traffic, weather, passenger requests). The stubs to receive this input are obviously already built into the FSD v12 stack, or else it couldn't change a destination while en route.

Grok is the basis of this missing piece of the puzzle, and will use language cues to guide the autopilot.
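
For what it's worth, here's a minimal sketch of what such a coaching interface could look like. Everything in it (the CopilotHint structure, the field names, the hints_for function, the canned cues) is my own guess at the shape of the idea, not anything known about Tesla's actual v12 internals; the LLM call is just a stand-in.

```python
from dataclasses import dataclass

@dataclass
class CopilotHint:
    """One just-in-time cue from the LLM co-pilot to the driving stack.

    Hypothetical schema: it only illustrates the idea of a language layer
    feeding navigation-level guidance down to the low-level controller.
    """
    kind: str      # e.g. "route", "caution", "passenger_request"
    text: str      # the natural-language cue the LLM produced
    ttl_s: float   # how long the hint stays relevant, in seconds

def hints_for(situation: str) -> list[CopilotHint]:
    """Stand-in for a call to an LLM such as Grok; returns canned examples
    of the kinds of cues described above."""
    return [
        CopilotHint("passenger_request", "Take me to the airport instead", ttl_s=60.0),
        CopilotHint("caution", "Heavy traffic reported on the ramp ahead", ttl_s=30.0),
    ]
```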
 
Can you give a concrete example to explain how things work from the first step of seeing things around the car via the cameras at time T1 to the last step of making the car move at time T2? Let's say T2 - T1 = 0.05 second.
 

What do you mean by "a concrete example"? Do you mean an analogy to the way humans drive? Okay. ;)

In a rally race team, the driver (pilot) is responsible for the immediate control of the car. If it starts to slide, the pilot countersteers. If a deer runs in front of the car on a blind corner, he dodges it without being told. It's all the 'twitch' parts of driving.

Meanwhile, the navigator (co-pilot) calls out that an intersection is coming in 200 meters and to make a right turn, or to slow down for the blind corner ahead (because the co-pilot knows it's a high-accident location).

The real point of all this is that text prompts can be sent in a just-in-time fashion to guide the pilot to the destination while dealing with the unexpected. The pilot remains fully in control and carries out the instructions from the co-pilot, which is implemented as a large language model (LLM) in the navigation stack.
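
Here's a minimal sketch of that division of labor in code (again, my own speculation, not Tesla's actual architecture). The key property is that the "pilot" loop runs every control tick and never waits on the co-pilot: within any one 0.05 s window from T1 to T2, only the pilot runs, and a hint merely biases the plan if one happens to be queued. The perceive() and drive() functions are hypothetical stand-ins for the camera NNs and the low-level controller.

```python
import queue
from dataclasses import dataclass

# Filled asynchronously by the (much slower) LLM co-pilot.
hint_queue: queue.Queue = queue.Queue()

@dataclass
class Controls:
    steer: float
    throttle: float

def perceive(frames):
    """Stand-in for the camera neural nets: pixels -> scene summary."""
    return {"num_frames": len(frames)}

def drive(world, plan) -> Controls:
    """Stand-in for the low-level controller: scene + plan -> actuation."""
    return Controls(steer=0.0, throttle=0.2)

def control_tick(frames, plan):
    """One 50 ms 'pilot' step. Never blocks on the co-pilot: if a hint
    is waiting it nudges the plan; otherwise just keep driving."""
    world = perceive(frames)
    try:
        plan = plan + [hint_queue.get_nowait()]  # non-blocking check
    except queue.Empty:
        pass                                     # no hint this tick
    return drive(world, plan), plan

# A hint arrives between ticks and gets folded in on the next one.
hint_queue.put("Right turn at the intersection in 200 m")
controls, plan = control_tick(frames=[], plan=[])
```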

HTHs. ;)
 
Absolutely. I gave it 6-12 months as soon as I heard they were working on it. It's one thing to write safety critical C++ code. Now they've gotta figure out how to produce safety critical neural networks. They've never done that, so it's going to take time. It's a new skill set.

The faster they turn around V12 to match the current system, the better an indication it will be that V12 has real promise. If it's six months, that's a great sign. If it's a year, not so much. Longer, and I'd assume that either they can't figure out how to master the necessary techniques, or the current hardware and software just doesn't have the potential to do what's needed.
I consider the existing perception NNs to be safety-critical.
 
How is the 1-per-10-miles disengagement rate calculated? I seem to recall a lower disengagement rate being touted shortly after FSD was expanded to highway driving. If you're including highway driving, then it will automatically be lower because the number of decisions per mile is much lower. One needs to look at city vs. highway miles. You can further break it down to rural vs. urban highway miles, since the former is about the easiest task FSD has to handle.
Crowd-sourced from 159 testers.

Used to be around 10 miles per disengagement in earlier releases. Now it's more like 4 in the city and 35 on highways.
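
To make the point about mixing road types concrete, here's a quick back-of-the-envelope calculation. The absolute mile counts below are invented for illustration; only the per-mile rates (roughly 1 per 4 city miles, 1 per 35 highway miles) come from the crowd-sourced figures above.

```python
# Hypothetical mile counts; only the per-mile rates are from the data above.
city_miles, city_diseng = 4_000, 1_000    # 1 disengagement per 4 miles
hwy_miles, hwy_diseng = 35_000, 1_000     # 1 disengagement per 35 miles

blended = (city_miles + hwy_miles) / (city_diseng + hwy_diseng)
print(f"City:    1 per {city_miles / city_diseng:.1f} miles")
print(f"Highway: 1 per {hwy_miles / hwy_diseng:.1f} miles")
print(f"Blended: 1 per {blended:.1f} miles")   # 19.5 -- flattering vs. city-only
```

A blended "1 per 19.5 miles" looks far better than the city-only figure, which is exactly the distortion the question was pointing at.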

[Attached chart: crowd-sourced FSD disengagement data]
 
It’s not about what X is, it’s about the difference between an inherent defect and ordinary manufacturing failure rates. Cars fail and cause accidents, since all components have some level of manufacturing defect. Liability comes about when the part or design was not fit for purpose, rather than from an isolated failure. Tesla will certainly argue that this applies to systems such as FSD.
Right - so the question is what "X" is likely to convince juries that FSD is "fit for the purpose"?

Surely showing that FSD has an accident rate of only 10% of the human rate, as opposed to 100%, will make a difference.
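
As pure arithmetic, "an accident rate 10% of the human rate" just stretches the miles between accidents by 10x. A minimal sketch, assuming a round-number human baseline (the 500,000 figure below is an assumption for illustration, not a measured statistic):

```python
human_miles_per_accident = 500_000          # assumed baseline, illustration only
for relative_rate in (1.00, 0.50, 0.10):    # 100%, 50%, 10% of the human rate
    miles = human_miles_per_accident / relative_rate
    print(f"{relative_rate:>4.0%} of human rate -> 1 accident per {miles:,.0f} miles")
```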
 
If this accelerated schedule holds (meaning within the next couple of months, around the end of the year), then I suspect it is not end-to-end. Seems a bit quick for a "complete rewrite!"

On the other hand, end-to-end has not really been defined, so I guess Elon can call it whatever he wants.
 
Why would they need to be 100x safer than a human? How about 2x safer than the average taxi driver?
Right - so the question is what "X" is likely to convince juries that FSD is "fit for the purpose"?

Surely showing that FSD has an accident rate of only 10% of the human rate, as opposed to 100%, will make a difference.
FSD should be approved when it’s better than the worst ten percent of drivers. Getting seniors and drunks off the road would be a positive for society. We need legislatures to limit the size of awards to encourage adoption. No 100 million dollar jury verdicts.
 
Two problems:
- people who are not in the bottom 10% will use FSD and cause more accidents
- any limits on awards will be used by companies like Cruise and Uber to put out bad software
 
FSD should be approved when it’s better than the worst ten percent of drivers. Getting seniors and drunks off the road would be a positive for society. We need legislatures to limit the size of awards to encourage adoption. No 100 million dollar jury verdicts.
Aside from the obvious problems with this rollout metric: it has a LONG way to go by that measure. Give it another 5 years. Maybe! Hard to predict with much certainty over such massive periods of time.

What is certain is billions more will go into R&D to see if this can be done, in that time period.

The learning approach seems like the right idea, since it seems utterly impossible to program every corner case, but it’s not clear whether the technology is up to this daunting task of imitating the incredible & versatile human brain.
 
Right - so the question is what "X" is likely to convince juries that FSD is "fit for the purpose"?

Surely showing that FSD has an accident rate of only 10% of the human rate, as opposed to 100%, will make a difference.
IMHO this will all come down to insurance. If the insurance companies figure out that FSD (or similar) is safer, they are going to push people into using these assists more, since most of the time they pick up the tab. It will be interesting, if a little alarming, to see when the first insurance policy comes out that REQUIRES that the car is driving most (all?) of the time.