Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Looks like Automatic Set Speed Offset is currently limited to 50% higher than the perceived speed limit. Here it goes up to 37mph when it believes the speed limit is still 25mph after the city street transitions to a freeway.

[Attachment: 12.1.2 auto highway.jpg]


Also, similar to FSD Beta 10.x switching to Navigate on Autopilot for freeways, 12.1.2 currently switches to 11.x on freeways, reverting to the previous control behavior ("Changing lanes…" and "Choosing right fork…" messages) and the old set speed behavior (58 mph instead of AUTO).
 
Has FSD Beta been able to make a right turn then immediately switch 4 lanes to get into a left turn lane in ~100 feet as smoothly as 12.1.2?

[Attachment: 12.1.2 cross 4.jpg]


Seems like it aimed to arrive just behind the red car on the right edge of the screenshot, as opposed to going straighter to reach the beginning of the left-turn lane, or changing one lane at a time and probably missing the turn. It'll be interesting to see how 12.x would behave if the light were red and it needed to watch multiple lanes of cross traffic on the way to the target, or if there were more cars waiting behind the red one.
 
Impressive! The subsequent action in response to the green light was sad though. Win some, lose some.

Definitely seems like expectations need to be very low for V12. Even if there are things to look forward to, it's going to need a LOT of monitoring in the months and years to come.
 
With regard to controlling end to end behavior, once you have the control output of a neural network you can then put additional guardrails on behavior with traditional C++ code.

For instance, the e2e network might determine the desired speed is 45 mph. They can then reduce the accelerator input by however much they want to achieve a desired acceleration profile based on the Chill/Average/Aggressive setting.

Another example: the neural network may output the desired steering wheel angle but normal C++ code can take this angle and put a maximum on it based on current speed.

So the desired control inputs are determined by the network, but then traditional C++ code can put limitations or guardrails on it.
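As a concrete illustration of that idea, a post-network guardrail layer might look like the sketch below. This is purely hypothetical: the function names, profiles, and numeric limits are invented for illustration, not anything from Tesla's actual stack.

```cpp
#include <algorithm>

// Hypothetical guardrail layer: conventional code clamps the neural
// network's raw control requests. All limits here are made-up examples.

enum class Profile { Chill, Average, Aggressive };

// Cap the network's requested acceleration based on the driving profile.
double limit_accel(double nn_accel_mps2, Profile p) {
    const double cap = (p == Profile::Chill)   ? 1.5
                     : (p == Profile::Average) ? 2.5
                     :                           4.0;  // Aggressive
    // Allow harder braking than forward acceleration.
    return std::clamp(nn_accel_mps2, -cap * 2.0, cap);
}

// Cap the requested steering angle as a function of current speed:
// the faster the car is moving, the smaller the permitted wheel angle.
double limit_steering(double nn_angle_deg, double speed_mps) {
    const double max_angle = std::max(2.0, 45.0 - speed_mps);  // linear taper
    return std::clamp(nn_angle_deg, -max_angle, max_angle);
}
```

The network's request is never rejected outright; it is just clamped into a mode- and speed-dependent envelope before reaching the actuators.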
 
Just read up on the literature on end-to-end networks. They're composed of a lot of different modules, etc. There's lots of coding involved.
I think the generally accepted usage of the term "no coding" in this ecosystem means "no procedural (conventional) coding to navigate the car directly", for example, "If we're stopped at a red light and it turns green, initiate forward acceleration routine when safe" or whatever. Of course there's going to be code to implement the neural network-based behavior, but I'd characterize it as like the difference between a human making a carefully-considered logistical decision vs. "just acting from the gut."
 

Doesn't that destroy the NN's path planning ability though? If it can't rely on the vehicle to do what it requests, it needs to make the route based on that reduced performance. In which case, there is no need for post NN command attenuation. Similar to adverse traction conditions.
The NN must adapt in real time to what the vehicle is doing, but that can still produce bad outcomes (though again, road conditions can't always be known):

NN: I can get through this intersection before cross traffic arrives by accelerating at X m/s².
C++: Easy there buddy, I'll give you 1/2X
NN: #%^%$$%& why am I in this car's path???

C++: Hey pal, road conditions are crap and the top speed is 45 MPH
NN: Ok, I'm going to wait for this cross traffic to pass
 
I think there have to be outer limit guardrails in traditional logic-based code to prevent the net from trying to do something unsafe (turning too sharply at high speeds for instance).

I think the network would adjust in real time to the constraints placed on the actual vehicle control inputs.

How else would they have modes that adjust following distance, acceleration profiles, etc? You can’t have separate neural networks for all of those combinations.
 
Right, and I'm thinking those are inputs to the NN, not modifications of the NN's output. That lets the NN adapt the path planning itself.
Agree there would be rationality checks on the final result though.
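A minimal sketch of that "settings as inputs" idea, with an invented stand-in for the planner network: the profile settings become extra features in the network's input vector, so the plan itself reflects them, rather than being attenuated afterward. Every name here (`run_planner`, `ControlRequest`, the feature layout) is hypothetical.

```cpp
#include <vector>

// The planner's output: a control request, not a clamped command.
struct ControlRequest { double accel_mps2; double steer_deg; };

// Stand-in for a trained network. By convention in this sketch, the
// last two features are the aggressiveness setting (0..1) and the
// following-distance setting; the plan scales with the setting itself.
ControlRequest run_planner(const std::vector<double>& features) {
    double aggressiveness = features[features.size() - 2];
    return { 2.0 * aggressiveness, 0.0 };
}

// Conditioning: append the driver settings to the perception features
// so the network plans around them, instead of clamping its output.
ControlRequest plan(const std::vector<double>& perception,
                    double aggressiveness, double follow_dist) {
    std::vector<double> features = perception;
    features.push_back(aggressiveness);
    features.push_back(follow_dist);
    return run_planner(features);
}
```

A final rationality check on `ControlRequest` could still sit after this, but it would only trip on outliers rather than routinely reshaping the plan.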
 
My intuition: V12 outputs its next 4-5 seconds of control to the V11 stack, and V11 projects that output as the blue path on the visualization. This also lets Tesla put guardrails around V12's decision-making: if V12's output decides it wants to run into a car or over a curb, Tesla can gather those videos and troubleshoot.

I still don't think V12 actually uses any V11 perception assets, but the beauty of all this is that V11 augments V12... V11 double checks V12
 
You believe that the cars are running both V11 and V12? From a compute load standpoint that seems pretty unlikely.

Isn't it rather more likely that they only replaced the V11 control code with a neural network? That would leave everything exactly as it is, but with changed control behavior. The only visualization element that would come from the control system would be the noodle, and people have observed that it has gotten shorter. Everything else seems to be the same.
 
I believe Green confirmed this a few weeks ago.
 
Right, the question is, why? Why only Whole Mars?
well, he's always been Elon's b*tch...
Why not Chuck Cook? He's arguably the most grounded FSD influencer.
Do they want someone grounded or someone who will gush about how incredible it is? I'd rather have someone grounded. I suspect Tesla would rather not.