It’s not. I’m not following.


I don’t understand.

All I was saying is that it was not possible to predict the movement of the truck, so it was inappropriate to leave no buffer zone (leaving no time to react). The car did not even drift to the left of the lane!

This is why I am curious about lane positioning and buffer-zone maintenance in v12. If drivers are not routinely seeing it (the current status has not been addressed one way or the other), then we have to wonder why. I don't see how it would not show up.
We're basically saying the same thing just differently. FSD should have moved over if there was room and/or at least slowed down.
 
Interesting message from Jeremy Goldman "Manager, Programs and Data Labeling @Tesla"

The last couple of weeks would include 2023.44.30.20 / 12.2.1 going to what seems to be ~5% of California FSD Beta audience. Presumably usage and disengagements of end-to-end are highly valuable data to improve the next 12.x release with (human?) labeled examples of what it did wrong and what could be done better. Hopefully "So much insane progress across both" means Tesla will have significant improvements to 12.x (and Optimus) coming soon.
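Purely to illustrate what "highly valuable data" could mean in practice, here's a toy sketch of a disengagement-driven labeling queue. Everything in it (the event fields, the priority heuristic) is my own assumption, not Tesla's actual pipeline:

```python
# Toy sketch of a disengagement-driven labeling queue. The event
# fields and the priority heuristic are assumptions for illustration,
# not Tesla's actual data pipeline.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Clip:
    neg_priority: float                      # negated so heapq pops highest first
    event_id: str = field(compare=False)
    reason: str = field(compare=False)

def score(reason: str, novelty: float) -> float:
    base = {"disengagement": 1.0, "intervention": 0.6, "random_sample": 0.1}
    return base.get(reason, 0.0) + novelty

queue: list[Clip] = []
for event_id, reason, novelty in [
    ("clip-001", "disengagement", 0.8),      # e.g. driver took over near a truck
    ("clip-002", "random_sample", 0.2),
    ("clip-003", "intervention", 0.5),
]:
    heapq.heappush(queue, Clip(-score(reason, novelty), event_id, reason))

while queue:
    clip = heapq.heappop(queue)
    print(f"label next: {clip.event_id} ({clip.reason})")
```

Under that (assumed) scheme, disengagement clips would jump to the front of the human-labeling queue, which would line up with disengagements being the most valuable signal for the next 12.x release.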
 
>The chess analogy is encouraging. For years people wondered if a computer would ever be able to beat a human at chess. Many people said no.
Chess? Please... [Chess is] a trivial sandboxed small problem space that you can brute force and that doesn't require real-time decision making and isn't safety critical.
Agree, and here's the point. Back in the day (80s, 90s?) most people really, really didn't think that a computer would ever be able to beat a grandmaster.

Now, many people really, really don't think we can get to fully autonomous driving.
 
>It should always wait until it can see enough to assess the current situation, not a previously anticipated one
If end-to-end is making decisions at 36 frames per second, the potential for very quick reaction times does lessen the need for some aspects of memory. The AI Day 2021 examples of where memory is useful included "How fast is this car traveling?" "Is this car double parked?" "Is there a pedestrian behind this crossing car?" where longer memory can improve each of these; the last example, the pedestrian, is most similar to the stop sign situation with the turning bus occluding the oncoming vehicle.

In both cases, the pedestrian and the oncoming vehicle, FSD Beta should continue to yield to them even if they're temporarily out of view, but there's also a tension: how long do you rely on that memory to keep waiting? Always waiting is probably safe in one respect, but the lack of progress from constantly re-assessing after every occlusion might slow things down so much that the unexpected, non-human behavior decreases safety in other respects.
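To make that tension concrete, here's a minimal sketch of per-object memory with a decay timeout. The 2-second value and the structure are just my assumptions, not how FSD Beta actually handles occlusions:

```python
# Minimal sketch of the memory-vs-reassess tension: keep yielding to a
# temporarily occluded object, but only until a decay timeout expires.
# The timeout value and structure are illustrative assumptions, not how
# FSD Beta actually works.
import time

class OcclusionMemory:
    def __init__(self, timeout_s: float = 2.0):
        self.timeout_s = timeout_s
        self.last_seen: dict[str, float] = {}   # object id -> last detection time

    def observe(self, object_id: str) -> None:
        """Called on every frame the object is actually visible."""
        self.last_seen[object_id] = time.monotonic()

    def should_yield(self, object_id: str) -> bool:
        """Keep yielding while the memory is fresh; once it expires,
        stop trusting it and re-assess the scene from scratch."""
        seen_at = self.last_seen.get(object_id)
        if seen_at is None:
            return False                          # never seen, nothing to yield to
        return time.monotonic() - seen_at < self.timeout_s

memory = OcclusionMemory()
memory.observe("oncoming-car")        # visible before the bus blocks the view
# ... bus occludes the oncoming car for a few frames ...
if memory.should_yield("oncoming-car"):
    print("keep waiting at the stop sign")
```

The whole question is what that timeout should be: too short and you get the aggressive pull-outs, too long and you get the overly hesitant, non-human behavior.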

Hopefully disengagement data from 12.x being too aggressive in these blocked-view situations can result in end-to-end learning when to correctly apply your suggested wait-and-assess approach.
 
Don't understand. Who is "their"? Do you mean his wife's car? She has FSD, but his account was one of the original influencer accounts, not hers. He may be able to use it on his wife's car, but I'm not sure.
Originally I wrote "his," meaning his family, but realized that could be ambiguous, so I switched to "their," meaning he could be doing v12 testing in someone else's car...
 
Promising, but I have the Mother of All Lane Drift Roads that V12 needs to pass before we call it fixed. I anxiously await the opportunity to put it through the ordeal.
Something pretty similar to the Dirty Tesla "Michigan Left" accounts for the majority of my disengagements in recent months. In my case they are full-fledged dedicated right-turn lanes, which FSDb moves over into instead of just staying in the through lane. Sometimes it just barely twitches the steering, but often it goes all the way over into the lane, and sometimes, if (with no traffic nearby) I allow it to, it continues through the intersection in that lane, then uses the entry lane on the other side to get back. Ugh... I sure hope V12 fixed mine.
 
Looks like another case of 12.2.1 getting really close to a truck tailgate, although this one was parked.


[Attached image: 12.2.1 truck stop.jpg]


She disengaged with hard right steering when the ultrasonics showed ~14 in (?) and then STOP (12"?). I wonder: when FSD Beta is already steering to its limit, can it be disengaged by attempting to steer even further in that direction?
 
>The chess analogy is encouraging. For years people wondered if a computer would ever be able to beat a human at chess. Many people said no.
>
>Agree, and here's the point. Back in the day (80s, 90s?) most people really, really didn't think that a computer would ever be able to beat a grandmaster.
>
>Now, many people really, really don't think we can get to fully autonomous driving.
There are flat-earthers too. It doesn't really mean anything other than that people are generally quite stupid.
Waymo has been driving fully autonomously without a safety operator since 2017; that's seven years ago.

A small sandboxed game like chess can be beaten without AI, using tree search and Moore's Law alone. It's a completely meaningless and incorrect analogy.
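For the record, that kind of brute-force search is just a few lines. Here's a generic negamax with alpha-beta pruning over an abstract game interface; the `game` methods are placeholders for illustration, not a real chess engine:

```python
# Generic negamax with alpha-beta pruning: the brute-force tree search
# behind classic chess engines. The `game` interface (legal_moves, push,
# pop, evaluate, is_terminal) is a placeholder, not a real engine.
import math

def negamax(game, depth: int, alpha: float, beta: float) -> float:
    if depth == 0 or game.is_terminal():
        return game.evaluate()                # static score for the side to move
    best = -math.inf
    for move in game.legal_moves():
        game.push(move)
        score = -negamax(game, depth - 1, -beta, -alpha)
        game.pop()
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                             # prune: opponent avoids this line
    return best
```

Nothing in there has to make a safety-critical decision in real time from noisy sensors, which is exactly why the analogy doesn't transfer to driving.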

What if we compare with a simple but safety-critical computer vision application, like radiology, instead?

Geoffrey Hinton said back in 2016 that radiologists should start looking for another line of work. Here we are in 2024 and machine learning still cannot remove the human from that equation... So computer-vision-only autonomous driving is likely still 5-10 years away.
 
>She disengaged with hard right steering when the ultrasonics showed ~14 in (?) and then STOP (12"?). I wonder: when FSD Beta is already steering to its limit, can it be disengaged by attempting to steer even further in that direction?
The interesting bit for me was that V12 cranked the wheel to the limit, then backed off slightly as it approached the truck. As if it knew that it could make it with a less severe turn, and that that was somehow better. The system is obviously comfortable getting very close to other vehicles (which we've seen in other videos). I'm just surprised that there's no apparent instinct to stay away from other vehicles when practical.

"What are they teaching our autonomous vehicles these days?"
 
Here is a pretty good article that gives a high-level description of the move to end-to-end: Breakdown: How Tesla will transition from Modular to End-To-End Deep Learning
Maybe some of you have read this already, but it is new to me.
That article supports the modular theory: that they basically only changed the planning part to full NNs, with the "end-to-end" part being the perception engine becoming aware of the planning results (instead of being 100% independent as before). This is as opposed to a fully new single network, where there wouldn't be a dedicated perception part that can generate a visualization.

If it's true, it makes sense how they were able to release it relatively quickly without too many regressions.
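If that reading is right, the architecture would look roughly like the toy sketch below: a dedicated perception net still exists (and can still drive the visualization), with an NN planner whose output feeds back into perception. All the module names and tensor shapes here are made up for illustration:

```python
# Toy sketch of the article's "modular" reading: perception stays a
# dedicated network (so it can still drive the visualization), planning
# becomes a neural net, and perception gets conditioned on the planner's
# output. All module names and tensor shapes are illustrative.
import torch
import torch.nn as nn

class PerceptionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(512, 128)    # stand-in for the camera backbone

    def forward(self, camera_features, planner_feedback=None):
        scene = torch.relu(self.encoder(camera_features))
        if planner_feedback is not None:
            scene = scene + planner_feedback  # perception aware of planning results
        return scene                          # also feeds the on-screen visualization

class PlannerNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(128, 2)         # e.g. steering, acceleration
        self.feedback = nn.Linear(2, 128)     # projects the plan back to perception

    def forward(self, scene):
        controls = self.head(scene)
        return controls, self.feedback(controls)

perception, planner = PerceptionNet(), PlannerNet()
camera_features = torch.randn(1, 512)
scene = perception(camera_features)                              # first pass, no feedback
controls, feedback = planner(scene)                              # NN planner replaces hand-coded logic
scene = perception(camera_features, planner_feedback=feedback)   # perception sees the plan
```

In a true single end-to-end network there would be no separate `scene` tensor to render, which is why the visualization still working is a clue in favor of the modular reading.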
 