Ah yes, our wager. In hindsight it was obvious that traditional programming would never be able to negotiate the arc of Chuck’s UPL. There’s just no way to calculate the future positions of all the vehicles with C code!
Sure there is. It would work something like this:

1. Look at a frame from the cameras. Tentatively identify cars and other objects that might be moving.
2. Check for positional changes in subsequent frames. Spawn a new thread to track each object that is determined to be in motion.
3. In each "moving object" thread, continually update the position and predicted path of the tracked object. Extrapolate to predict possible intersections with ego and/or ego's predicted path.
4. Terminate threads whose objects have stopped moving or cannot intersect ego's path.
5. Rinse and repeat.

You get the idea. Predicting the positions of dozens of vehicles with C code is trivial compared to, say, predicting the positions of millions of particles in fluid dynamics simulations. That's not to say that NNs aren't a better solution, but procedural and object-oriented programming have handled problems more complex than this for many decades.
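For the curious, here's a minimal C++ sketch of what that loop might look like (std::thread is less painful than raw pthreads). Everything here is invented for illustration: the stubbed geometry test, the numbers, and the one-thread-per-track layout. It's just the shape of the approach, not anything Tesla actually runs:

```cpp
// A minimal, hypothetical sketch of steps 1-5 above. Perception hooks are
// stubbed out and all numbers are invented for the example.
#include <atomic>
#include <chrono>
#include <cmath>
#include <thread>

struct TrackedObject {
    double x = 30.0, y = 1.0;    // last estimated position, ego frame (m)
    double vx = -8.0, vy = 0.0;  // estimated velocity (m/s)
    std::atomic<bool> active{true};
};

// Stub for the real geometry test against ego's predicted path.
bool ego_path_intersects(double px, double py) {
    return std::abs(px) < 2.0 && std::abs(py) < 2.0;  // crude 2 m bubble
}

// Step 3: extrapolate the track forward and look for a possible conflict.
bool may_intersect_ego(const TrackedObject& o, double horizon_s) {
    for (double t = 0.0; t < horizon_s; t += 0.1)
        if (ego_path_intersects(o.x + o.vx * t, o.y + o.vy * t)) return true;
    return false;
}

// One "moving object" thread (steps 3 and 4).
void track_object(TrackedObject& o) {
    using namespace std::chrono_literals;
    while (o.active) {
        o.x += o.vx * 0.033;  // stand-in for an update from the next frame
        o.y += o.vy * 0.033;
        if (!may_intersect_ego(o, 5.0))
            o.active = false;  // step 4: drop tracks that can't conflict
        std::this_thread::sleep_for(33ms);  // ~one 30 fps camera frame
    }
}

int main() {
    TrackedObject car;  // steps 1-2 (detection across frames) stubbed out
    std::thread t(track_object, std::ref(car));
    t.join();  // returns once the track is terminated
}
```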
 
Any honking from other drivers is disqualifying
I wonder if Tesla can do automated analysis based on honking, or maybe even its duration? Although I suppose most people wouldn't let 12.x get into a situation that would result in nearly 15 seconds of honking. o_O
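Honk detection itself wouldn't be hard. As a toy example, here's a hypothetical C++ sketch that measures how long a tone near a typical horn pitch is present in an audio clip, using a Goertzel filter; the 400 Hz target and the power threshold are made-up assumptions, not anything Tesla has published:

```cpp
// Hypothetical sketch: estimate honk duration from a mono PCM clip by
// tracking energy near a typical horn pitch with a Goertzel filter.
#include <cmath>
#include <cstdio>
#include <vector>

double goertzel_power(const float* block, int n, double freq, double rate) {
    const double pi = 3.14159265358979;
    const double k = 2.0 * std::cos(2.0 * pi * freq / rate);
    double s1 = 0.0, s2 = 0.0;
    for (int i = 0; i < n; ++i) {
        double s0 = block[i] + k * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    return s1 * s1 + s2 * s2 - k * s1 * s2;  // power at the target frequency
}

// Sum up the time spent above the threshold, block by block.
double honk_seconds(const std::vector<float>& pcm, double sample_rate) {
    const int block = 1024;  // ~23 ms per block at 44.1 kHz
    double held = 0.0;
    for (size_t i = 0; i + block <= pcm.size(); i += block)
        if (goertzel_power(&pcm[i], block, 400.0, sample_rate) > 1.0)
            held += block / sample_rate;
    return held;
}

int main() {
    // Fake clip: 2 s of a 400 Hz "horn" followed by 1 s of silence.
    const double rate = 44100.0, pi = 3.14159265358979;
    std::vector<float> pcm;
    for (int i = 0; i < 2 * 44100; ++i)
        pcm.push_back(0.5f * std::sin(2.0 * pi * 400.0 * i / rate));
    pcm.resize(pcm.size() + 44100, 0.0f);
    std::printf("honk duration: %.2f s\n", honk_seconds(pcm, rate));  // ~2 s
}
```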

[Image: 12.1.2 incorrect pass.jpg]


I believe it was 10.3 that introduced maneuvering into oncoming lanes, and a later 10.x release that fixed this issue of mispredicting a lead vehicle as parked. So I wonder whether Tesla is explicitly tracking all of these previously fixed behaviors that need to be reverified/retrained with 12.x?
 
Sure there is. It would work something like this:

1. Look at a frame from the cameras. Tentatively identify cars and other objects that might be moving.
2. Check for positional changes in subsequent frames. Spawn a new thread to track each object that is determined to be in motion.
3. In each "moving object" thread, continually update the position and predicted path of the tracked object. Extrapolate to predict possible intersections with ego and/or ego's predicted path.
4. Terminate threads whose objects have stopped moving or cannot intersect ego's path.
5. Rinse and repeat.

You get the idea. Predicting the positions of dozens of vehicles with C code is trivial compared to, say, predicting the positions of millions of particles in fluid dynamics simulations. That's not to say that NNs aren't a better solution, but procedural and object-oriented programming have handled problems more complex than this for many decades.
Yep, sometimes sarcasm doesn’t come through on the internet.
I think the only possible explanation is that the perception system, which has always been neural nets, is insufficient. Tesla has sent test vehicles out to Chuck's turn multiple times. If it were a planner problem, it should have been trivial to simulate millions of scenarios.
I still think they should be able to get it to work 90% of the time with HW3 though. Maybe going to NNs for the planner will handle uncertainty in the perception better.
 
In V11 we get the degraded message, and eventually the takeover screen
There's another message that's been around since before FSD Beta and looks similar to both the usual "Apply slight turning force to steering wheel" nag with gray hands and the more recently added "Please pay attention to the road" with small red hands. It's the same message as the former but with small red hands (i.e., not the big red steering wheel "take over immediately"). It seems to show up when Autopilot knows it's confused, it triggers an audible alert too, and I've experienced it most often with FSD Beta in construction areas. Here's a DirtyTesla video of it showing up in 11.4.7.2, and all of these still show up, with the new positioning at the top, on the 11.4.9 recall release.

[Image: 11.x small hands.jpg]


However, I would guess this message is handled by traditional C++ code recognizing that oddness in the inputs is leading to uncertainty in the control outputs, so 12.x might have lost this ability to warn the driver of low confidence when it threw out the old code. Potentially the 12.x neural networks could output a confidence score, which might actually be more useful than the existing message in that the driver could get an earlier warning instead of an immediate "I'm confused right now." Then again, that might not be something Tesla wants to expose, or even put effort into for a potentially short-term issue.
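As a rough illustration of what that could look like, here's a hypothetical sketch of a confidence monitor that escalates from an early warning to a takeover alert. The thresholds and smoothing are entirely made up; the point is just that a scalar confidence allows gradual escalation instead of a single "confused right now" event:

```cpp
// Hypothetical sketch: if the 12.x networks exposed a scalar confidence,
// warnings could escalate gradually. All thresholds are invented.
#include <cstdio>

enum class Alert { None, EarlyWarning, TakeOver };

struct ConfidenceMonitor {
    double smoothed = 1.0;  // smoothed confidence in [0, 1]

    Alert update(double confidence) {
        smoothed = 0.5 * smoothed + 0.5 * confidence;  // damp one-frame dips
        if (smoothed < 0.3) return Alert::TakeOver;      // big red wheel
        if (smoothed < 0.6) return Alert::EarlyWarning;  // small red hands
        return Alert::None;
    }
};

int main() {
    ConfidenceMonitor mon;
    const double frames[] = {0.9, 0.8, 0.5, 0.4, 0.2, 0.1};  // fake NN outputs
    for (double c : frames)
        std::printf("conf=%.1f -> alert level %d\n",
                    c, static_cast<int>(mon.update(c)));
}
```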
 
I still think they should be able to get it to work 90% of the time with HW3 though.
Yes, I have long maintained that all they had to do was speed things up. It's ridiculously slow at everything it does. It only gets up to 11-12 mph crossing speed if it's not rolling the stop! I think it sometimes took 4-5 seconds to cross! And if you're not rolling it, that means traffic is busy, and you're likely going to need to gun it to make it across with plenty of margin.

This is probably for safety, due to inattentive and slow-reacting “safety drivers” (who are actually just consumers with varying aptitude for such a task).

However, it would solve most of the problems of limited perception. The only issue I can think of off the top of my head would be situations where oncoming traffic is traveling much faster than expected (for example, 70-80 mph in Chuck's case). For those, it could be game over.
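Some quick arithmetic shows why: the clear gap a 4-5 second crossing needs grows linearly with oncoming speed, and at highway speeds it gets long fast. A trivial sketch (the 5 s crossing time is from my observation above; the rest is just unit conversion):

```cpp
// Back-of-the-envelope: clear road needed for a 5 s crossing vs. oncoming speed.
#include <cstdio>
#include <initializer_list>

int main() {
    const double mph_to_ms = 0.44704;    // mph -> m/s
    const double crossing_time_s = 5.0;  // observed worst case
    for (double mph : {45.0, 60.0, 80.0}) {
        double gap_m = mph * mph_to_ms * crossing_time_s;
        std::printf("%.0f mph oncoming -> %.0f m must be clear\n", mph, gap_m);
    }
    // Prints roughly: 45 mph -> 101 m, 60 mph -> 134 m, 80 mph -> 179 m.
}
```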

V12 looks like it exhibits some alacrity from stop signs right now but I haven’t paid much attention elsewhere. It still seems pretty ponderous overall in my limited viewing. In any case I’m sure they’ll dumb it down before release, pulling all the sliders to minimum, ensuring failure on the turn and continued frustration at stop signs.
 
Yes, it definitely looked super smooth. But it was also not a complicated roundabout.

The roundabout itself wasn't complicated, but there were potentially complicated interactions with other vehicles. At the moment ego entered the circle, there was one oncoming vehicle that would have had the right-of-way had it continued around instead of proceeding straight, and one entering vehicle that would have intersected ego's path had ego not continued on:

[Image: 1706540466149.png]


In most cases for small roundabouts near me, V11 treats them as stop signs when other vehicles are visibly in the circle already.
 
Agreed, but I think we're both saying it may not happen in the initial deployments. I'm saying it requires enough data to do that classification, and I'm assuming that right now they're just trying to get enough training data to deploy a safe AI driver. As you say, the step beyond is likely down the road.

But there may be some features of the model that make it easier for the AI to judge aggressiveness and therefore adjust its behavior at user request. Maybe tied into all the various confidence judgments that we're pretty sure are already part of the decision networks.

I've long wanted that kind of customization, starting perhaps with an interface that lets us submit our own map corrections and hints on our own familiar routes. But understandably, I think Tesla is trying to do all this learning from fleet telemetry and not from any level of offline user programming.

Eventually this could lead to route planning and behavioral settings tied to individual operator behavior - but it feels pretty far "down the road"! :)
At least . . . 2 Weeks?
 
My guess is that maybe Tesla could tweak parameters so that the end-to-end stack only does lane keeping, lane changes, and keeping distance from the lead car, and not all the other stuff that FSD Beta can do. I wonder if there would be a way to segment the end-to-end network so that Tesla keeps only the parts that handle those basic tasks. Or maybe Tesla could do separate end-to-end training just to redo the basic AP stack.

The other possibility, which might actually be more likely, is that once V12 is no longer beta, Tesla just cancels basic AP altogether and only sells FSD. I feel like that would make sense, because once FSD is good enough, why would Tesla even bother with basic AP anymore? In fact, selling basic AP might be less safe compared to using V12. Moreover, I could even imagine a scenario in a few years where Tesla just makes FSD V12 standard on all cars. After all, if FSD V12 becomes safer than a human, why not make it standard on all cars? Alternatively, Tesla might keep two stacks: the old basic AP stack as a way to give consumers who cannot afford FSD a driver-assist system, while all of FSD (city, highway, Smart Summon, Autopark) would be the V12 stack.
Why not just have one stack and make the current autonomous mode (AP or NoA/AoCS) an input to the NN(s)?
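Roughly, I'm picturing the usual conditioning trick, where the selected mode is appended to the network's input features and training clips carry the same label, so one set of weights learns mode-dependent behavior. A purely illustrative sketch (none of these names or shapes are Tesla's):

```cpp
// Purely illustrative: encode the selected mode as a one-hot vector
// appended to whatever features feed the planning network.
#include <vector>

enum class DriveMode { BasicAP, FSD };

std::vector<float> with_mode(std::vector<float> features, DriveMode mode) {
    features.push_back(mode == DriveMode::BasicAP ? 1.0f : 0.0f);  // AP bit
    features.push_back(mode == DriveMode::FSD ? 1.0f : 0.0f);      // FSD bit
    return features;  // the network input now says which behaviors to allow
}

int main() {
    std::vector<float> camera_features(256, 0.0f);  // stand-in for real inputs
    auto net_input = with_mode(camera_features, DriveMode::BasicAP);
    return net_input.size() == 258 ? 0 : 1;
}
```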
 
Why not just have one stack and make the current autonomous mode (AP or NoA/AoCS) an input to the NN(s)?

Not sure what you mean. The end-to-end stack takes input from the cameras; it does not take input from the autonomous mode. And the end-to-end stack is trained on video to perform certain tasks like handling a roundabout, a lane change, an unprotected turn, a 4-way stop, etc. So I am not sure how you would tell the end-to-end network that you just want AP mode rather than FSD mode, if that is what you are suggesting.
 