Welcome to Tesla Motors Club
It is possible they started with the existing perception NN as-is and then unlocked the weights, such that the original value-to-object correlations no longer exist but the planning NN didn't need to deal with quite so many inputs.
That's possible. IIRC the way Elon explained it (in one or the other interview) was that really only the planner/controller were hand-coded in V11, and when that was converted to NN, it became end-to-end NN.

Of course they could have moved on from there and done end-to-end relearning, or even optimized the NN to eliminate some layers, etc. But that is always a double-edged sword - it gives you less ability to fix individual problems.
 
In MN, left turns need to enter the closest lane (i.e. you can't make a 'wide' left turn into the far lane of the destination road).
This is NOT true.

169.19 TURNING, STARTING, AND SIGNALING.

(b) Approach for a left turn on other than one-way roadways shall be made in that portion of the right half of the roadway nearest the centerline thereof, and after entering the intersection the left turn shall be made so as to leave the intersection to the right of the centerline of the roadway being entered.

The District Court concluded that the law required Birkland to turn into the innermost lane, but the appellate court asserted, “the statute is silent on which lane the driver must enter after turning.”

The statute reads, “after entering the intersection the left turn shall be made so as to leave the intersection to the right of the centerline of the roadway being entered.”
 
Poor Chuck may be the last to get v12 (or at least just one before me). Kingdom rules: Never speak ill of the King or there will be hell to pay. :oops:
It is a delicate dance between companies and influencers. They need each other, but companies want to control the narrative, while influencers want to be as independent as possible for credibility.

That is with professionally run companies - but when run by a king, it gets worse ;)
 
That suggests to me that it's V12's habit of trying to go around stopped traffic, even if it means moving into an oncoming lane.
That's an interesting possibility, although how often does 12.x try to go around stopped traffic when there's multiple lanes for your direction? Neural networks pick up signals and share signals in unexpected ways, so it is still possible either way.

I also noticed the turn signal was already engaged for the upcoming left turn, so maybe that also resulted in end-to-end thinking it needed to get even more left. Have those with 12.x played around with manually engaging the turn signal to see whether it influences behaviors beyond the usual expected lane change? For example, does engaging the left turn signal while already next to double yellow lines make it think it needs to turn or drift toward oncoming traffic, as in this scenario?
 
After the initial hype of V12, reality is starting to set in. It doesn't seem like HW3's inference compute will be able to generalize all the edge cases of day to day driving.

There are just too many, and the NNs don't seem to understand broad concepts yet. The more I use V12.2.1, the more my initial fears about this approach are realized. V11.4.9 was just overall more consistent, reliable, and dependable. V12.2.1 acts like a stubborn, rebellious child, and the crazy wipers don't help either. That's why I told y'all that the wipers NN is compute limited, and it was only good with 11.4.9 because Tesla introduced a more efficient video module. It doesn't make sense that the wipers worked well for me on ~10.69, then sucked again until 11.4.9, and then sucked again with V12.

There's just too much riding on 12.3, but we already went from 12 --> 12.2, so unless Tesla has some new approach or vastly better training resources for 12.3, I'm not sure if there's much hope left for HW3.

We'll just have to wait and see :)
 
This is NOT true.
What a poorly worded statute - it's not like there are other rules that say you have to drive on the right side!
I read ‘nearest the centerline’ as applying to the destination lane, but rereading the statute, it appears it only applies to the originating lane.
I agree - it’s poorly worded. It also doesn’t address the scenario of multiple turn lanes. (Unless it does in a later section I didn’t read.)
 
It doesn't seem like HW3's inference compute will be able to generalize all the edge cases of day to day driving. There are just too many, and the NNs don't seem to understand broad concepts yet.
Given that it seems like the 11.x neural networks are still running for visualization and not control, there should be an opportunity to shift inference compute from 11.x to 12.x. However, there's a complication: each may be running on its own SoC, so 12.x exceeding the capacity of one introduces the need to transmit and receive data between the two.

But 12.x neural networks even staying at the same size and compute requirement can learn broad concepts it currently doesn't understand. For example, some signal that is below an activation threshold for a concept can be activated in a later trained network that boosts the signal and/or lowers the threshold.
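That threshold idea can be sketched with a toy model. This is a purely illustrative, hypothetical example (the function, weights, and thresholds are made up and have nothing to do with Tesla's actual networks): a weighted signal that falls below a unit's activation threshold in one network can fire in a later-trained network that either boosts the weight or lowers the threshold.

```python
def concept_active(signal: float, weight: float, threshold: float) -> bool:
    """Fire the concept unit only if the weighted signal clears the threshold."""
    return signal * weight > threshold

signal = 0.4  # same raw input in both the earlier and later networks

# Earlier network: weight too small, concept never activates (0.4 * 0.5 = 0.20)
print(concept_active(signal, weight=0.5, threshold=0.3))  # False

# Later network: training boosted the weight (0.4 * 1.0 = 0.40 > 0.3)
print(concept_active(signal, weight=1.0, threshold=0.3))  # True

# ...or lowered the threshold for the same weight (0.20 > 0.1)
print(concept_active(signal, weight=0.5, threshold=0.1))  # True
```

Same input, different learned parameters: the concept that was invisible to the earlier network becomes active without the network growing in size.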

What specific broad concepts do you think are lacking in 12.2.1 to see if it improves in later 12.x?
 
What specific broad concepts do you think are lacking in 12.2.1 to see if it improves in later 12.x?

When I assess an approach, I try to intuit any possible fatal flaws.

The end-to-end approach with video training and my experience with 12.2.1's decision wobble has me concerned:

1) In the case of the decision wobble, V12's dataset has videos of committing to some maneuver x, but it also has videos of behaving differently (like braking) in a similar situation. The biggest challenge and question for the current NN approach is, "can the NN conceptualize the purpose of a maneuver and collapse its decision tree in the case of a gray area in decision making?"

In that video example you gave of the car parking at Denny's, you can see that right before turning into the spot, the path planner spent some milliseconds turning right. Because the decision to park became clearer, that plan disappeared as the car inched forward.

There are actually a lot of these gray-area decisions, especially in a parking lot. Say there are 2 possible paths to a pin in the parking lot, but there's a small island in the way. You can turn before the island or after the island; 12.2.1 seems to get tripped up by these decisions.

2) There's still many instances of freezing at stop signs / unprotected turns with 12.2.1. This is something we saw during Elon's livestream and persists today. What sort of miracle would be needed to fix this? That's what I wonder.

3) I think these problems can only be reduced with a higher-parameter NN, and HW3 is already limited, so HW3 seems like a dead end for V12. Elon already mentioned that more training compute is needed to reduce inference compute, so I guess there's some headroom, but I feel like there needs to be at least a 3-5x higher parameter count for V12 to work well on HW3.