Welcome to Tesla Motors Club

Profound progress towards FSD

Since the "4D rewrite" is foundational per Elon, I suspect that we will get it all at once when Tesla feels confident enough in it. I doubt that Tesla will be able to release it piece by piece if it is foundational to how the whole system works.
I disagree on the grounds that I can see the 4D rewrite being rolled out for Tesla Vision but with the "planning" portion still in the old realm, and the new planning running in 'shadow mode'.

4D rewrite is foundational, but my wild @ss guess would be that the vision part is the smallest part of the rewrite that can be run without other parts.
 

Except Elon says that they are combining planning and perception together and the NN will handle both:

"Instead of having planning, perception, image recognition, all be separate, they are being combined. The NN is absorbing more and more of the problem. " Elon

It sounds like the two parts will be inseparable. If so, I don't think they could release them separately. At least, that's how I read it.
 

I have to say, I didn't find your last post controversial at all. It doesn't make sense to have the 2.5D and 4D systems running simultaneously. The delay in releasing the Autopilot rewrite is probably because it is in fact a full replacement for almost every perception net in the vehicle. It's going to affect Lane Keep, Lane Change, TACC, Summon, everything.
 

Thanks. I thought my post was pretty logical. Elon says the rewrite fundamentally changes how the entire system works: instead of processing discrete images and handling planning and perception separately, it will process video from all 8 cameras with a NN that handles planning and perception together. That is a big, foundational change. In software, if different pieces are interdependent, you can't break them apart. So I think it makes sense that it would be released all at once. I know we are used to features being released piece by piece, but I don't see how something that foundational could be. And yes, I agree this probably means it will get delayed until the whole thing is ready.
 
it is in fact a full replacement for almost every perception net in the vehicle.
Key word: perception. (And it's every net that will be replaced by the rewrite, not almost every.)
Planning, while it will be part of the full 4D rewrite, can still run in "shadow mode", allowing the old planning logic to use the labeled results of the new Tesla Vision stack.
Once planning is validated in shadow mode, I can see it being turned on and the old planning logic removed/deactivated.

Either way, it is all speculation until it happens.
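Purely as a sketch of what that shadow-mode arrangement might look like (all class and function names here are hypothetical, this is speculation about the architecture, not anything Tesla has published):

```python
from dataclasses import dataclass

@dataclass
class Plan:
    action: str  # e.g. "curve_right", "straight"

class Vision4D:
    """Stand-in for the new 4D vision stack: camera frames -> labeled scene."""
    def perceive(self, frames):
        return {"road_curvature": frames.get("curvature", 0.0)}

class OldPlanner:
    """Legacy planning logic; stays in control of the car."""
    def plan(self, scene):
        return Plan("straight")

class NewPlanner:
    """Rewritten planner, running in shadow mode only."""
    def plan(self, scene):
        return Plan("curve_right" if scene["road_curvature"] > 0.1 else "straight")

def drive_tick(frames, old_planner, new_planner, vision, log):
    scene = vision.perceive(frames)       # one shared perception output
    active = old_planner.plan(scene)      # old logic drives the car
    shadow = new_planner.plan(scene)      # new logic is computed but only observed
    if shadow.action != active.action:    # campaign: report disagreements
        log.append((active.action, shadow.action))
    return active                         # only the old plan is executed

log = []
plan = drive_tick({"curvature": 0.3}, OldPlanner(), NewPlanner(), Vision4D(), log)
print(plan.action, log)  # straight [('straight', 'curve_right')]
```

The point of the sketch: both planners consume the same new vision output, but the controls only ever see the old planner's result until the discrepancy log looks clean.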
 

True, but I believe the Autopilot rewrite will also substantially improve planning. The time dimension of the 4D perception won't be exclusively for remembering what happened in the past, but also for predicting future trajectories.
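A toy illustration of why a time dimension helps planning: with positions of a tracked object over several frames, you can extrapolate a future trajectory. This is just the simplest possible constant-velocity model, purely illustrative, not anything from Tesla's stack:

```python
def predict(track, dt, steps):
    """track: [(t, x, y), ...] past observations of one object.
    Extrapolates future (x, y) positions under a constant-velocity model."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, steps + 1)]

# A car observed moving +2 m per second in x:
track = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.0)]
preds = predict(track, 1.0, 3)
print(preds)  # [(4.0, 0.0), (6.0, 0.0), (8.0, 0.0)]
```

A real system would of course use a learned predictor over many frames, but even this shows that prediction is impossible without remembering at least two points in time.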
 
Exactly!
Would you not want to validate that new 'prediction' feature against the entire FSD capable fleet for a few weeks/months before you let it loose?
I am not saying it would be separate from the 4D rewrite, I am saying that certain features will be turned on separately.
 

The new features will definitely be trained with months and months of data collected from the fleet, but it's probably easier for Tesla to just collect this data through their current shadow-mode campaign methodology. If the inputs to the new planning code require unique outputs from the 4D perception network, you couldn't use the outputs from the 2.5D perception network to train it.
 
If the inputs to the new planning code require unique outputs from the 4D perception network, you couldn't use the outputs from the 2.5D perception network to train it.
This is exactly why I think the planning functionality would first run in shadow mode after the 4D rewrite is deployed. They can collect fleet data right now, but it would seem more efficient to have the planning logic run a campaign on the new vision stack and report any discrepancies between what the new planning predicted and what the car actually did in the real world. (plan: curve right; car: drives straight)
 
Key word: perception. (And it's every net that will be replaced by the rewrite, not almost every.)
Planning, while it will be part of the full 4D rewrite, can still run in "shadow mode", allowing the old planning logic to use the labeled results of the new Tesla Vision stack.
Once planning is validated in shadow mode, I can see it being turned on and the old planning logic removed/deactivated.

Either way, it is all speculation until it happens.

Except Elon specifically says the NN will do both perception and planning. The 4D rewrite combines both perception and planning into one NN.
 
I did not contradict anything Elon said.
Please show me where I did.
Or, in your view, does turning something on later somehow mean I'm saying the feature does not (or will not) exist?

Elon said this "Instead of having planning, perception, image recognition, all be separate, they are being combined."

Perception and planning are being combined into a single NN. How can Tesla run the perception part "live" but run the planning part in "shadow mode" when they are both the same NN?
 
Simply...
The execution (driving logic) could be directed to take its plan from the old code's planning output instead of the new plans.
Heck, the campaign could even compare the two plans (new vs. old) and then compare them against the vehicle's actual execution (what happened).

Just a little creativity goes a long way.
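The offline side of that campaign could be as simple as tallying agreement between the old plan, the new (shadow) plan, and what the vehicle actually did. A rough sketch, with made-up record format and field names:

```python
def campaign_report(records):
    """records: list of (old_plan, new_plan, actual) action strings,
    logged per driving event during the shadow-mode campaign."""
    stats = {"agree": 0, "new_matches_actual": 0, "old_matches_actual": 0}
    for old, new, actual in records:
        if old == new:
            stats["agree"] += 1
        if new == actual:
            stats["new_matches_actual"] += 1
        if old == actual:
            stats["old_matches_actual"] += 1
    return stats

records = [
    ("straight", "straight", "straight"),
    ("straight", "curve_right", "curve_right"),  # new planner matched reality
    ("curve_left", "curve_left", "curve_left"),
]
report = campaign_report(records)
print(report)
# {'agree': 2, 'new_matches_actual': 3, 'old_matches_actual': 2}
```

If the new planner's plans track the vehicle's actual behavior at least as well as the old planner's, that's evidence it is safe to switch over.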
 

I don't think that works, since the "4D rewrite" takes the camera data and outputs both perception and planning to the driving controls. So if you feed the camera data into the "4D rewrite", you get both perception and planning. What you could do is have the entire "4D rewrite" run in shadow mode and compare the new perception+planning with the old perception+planning. But I don't think you can mix and match, at least if I am understanding Elon's description of the 4D rewrite correctly. In any case, I guess we will find out when they release it.
 
I think of the planning output as a layer that will be available alongside the 3D labeled video.
Just like a labeled output, the plan has to be consumed by another client (the execution layer).
In my example, the execution layer would not consume the planning output from the rewrite for a few more weeks; instead, a campaign would observe and report discrepancies between the plan and what happened in the real world.
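That "plan as just another output layer" idea could be sketched like this: the rewrite emits both the labeled scene and a proposed plan, and a config flag decides whether the execution layer consumes the new plan or falls back to the legacy one. All names are hypothetical:

```python
USE_NEW_PLAN = False  # flipped on once shadow-mode validation passes

def rewrite_outputs(frames):
    """Hypothetical 4D rewrite: emits both a labeled scene and a plan layer."""
    scene = {"objects": ["car", "lane_line"]}  # labeled 3D/4D output
    plan = {"action": "curve_right"}           # planning output layer
    return {"scene": scene, "plan": plan}

def legacy_plan(scene):
    """Old planning logic, consuming the new stack's labeled scene."""
    return {"action": "straight"}

def execution_layer(frames):
    out = rewrite_outputs(frames)
    # The plan layer exists either way; the flag only controls who is consumed.
    plan = out["plan"] if USE_NEW_PLAN else legacy_plan(out["scene"])
    return plan["action"]

print(execution_layer({}))  # straight
```

Nothing about the single-NN claim prevents this: the network always computes its plan, and the downstream consumer simply ignores it until the flag flips.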
 
Yes, but will my car that I love so much STOP TRYING TO KILL ME?

Does your car look like this?:


:D
 
I swear I remember him saying something similar a year or two ago about using an alpha build on his commute.
So many companies got way past the point of "almost zero interventions" for 10-20 miles years ago, and they still don't have FSD. I guess the argument is that the methods Tesla is using to get there are going to enable them to reach the ~100,000s of miles between failures required for FSD.