If I could have enlightenment rather than ignorance simply by preferring it, repeatedly or otherwise, I'd be good.


I mean, I've provided several educational links that could help dispel the ignorance around what things like OEDR, DDT, ODD, and the other terms that define driving and automation actually mean.

Certain posters' unwillingness to read any of them before continuing to try to discuss those things seems obvious at this point.
 
If I could have enlightenment rather than ignorance simply by preferring it, repeatedly or otherwise, I'd be good.
Sorry to waste this bandwidth, but the last 5 pages or so have the distinct feel to me of an extended discussion about what the meaning of "is" is...
Can we please get back to debating what Elon meant when he said “2 weeks?”
 
Jezus god Jim, I really thought early adopters would get V12 sometime by NOW... after the Dec 2023 comments about rolling it out to internals, etc., we're now TWO months past that date. I don't want it if it's crap, but two MONTHS past the projection seems about 2 SD beyond the normal "two weeks" hyperbowl (and I use the word BOWL purposely).
 
This thread has become painful to read. I've been adding more people to ignore (from the investment forum; I guess the stock is down too much lately), people constantly quote each other so I don't get to see what they're responding to, and people argue and bicker about semantics and grandmas. But then it's not like there's anything else to do while we wait for V12.
 
I'm curious... what do you think these things are?


For higher than L2 at all? No reason for anyone to "think" they know-- Tesla themselves explain it in the CA DMV docs, and nothing fundamentally has changed through today although the OEDR has gotten marginally less limited.


Tesla said:
City Streets’ capabilities with respect to the object and event detection and response (OEDR) sub-task are limited, as there are circumstances and events to which the system is not capable of recognizing or responding. These include static objects and road debris, emergency vehicles, construction zones, large uncontrolled intersections with multiple incoming ways, occlusions, adverse weather, complicated or adversarial vehicles in the driving path, unmapped roads. As a result, the driver maintains responsibility for this part of the dynamic driving task (DDT).

In addition, the driver must supervise the system, monitoring both the driving environment and the functioning of City Streets, and he is responsible for responding to inappropriate actions taken by the system. The feature is not designed such that a driver can rely on an alert to draw his attention to a situation requiring response. There are scenarios or situations where an intervention from the driver is required but the system will not alert the driver. In the case of City Streets (and all other existing FSD features), because the vehicle is not capable of performing the entire DDT, a human driver must participate, as evidenced in part through torque-based steering wheel monitoring, or else the system will deactivate.



So at a very high level, there are two items missing before Tesla could offer an L3 system:

A complete OEDR system for all defined ODDs in which the system is intended to operate,
and
A system designed to recognize all cases where it needs a human to take over the DDT, and to alert the driver far enough in advance for them to safely do so.


Tesla would need to create, integrate, and release both of those things to be able to offer an L3 system (either alone is insufficient).

For an L4 system, they'd need to replace that second item (the alerting one) with a complete DDT fallback system, such that a human would never be required in the vehicle for it to operate safely.

For an L5 system, they'd need the same as L4, except the OEDR would need to expand from "within all defined ODDs" to "all places a human can safely drive."
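
To make the distinction concrete, here's a toy Python sketch (every name in it is made up for illustration; it has nothing to do with Tesla's actual software) that just encodes those per-level requirements as data and reports what's still missing for a target level:

```python
# Toy sketch: the L3/L4/L5 gaps described above, encoded as data.
# All names here are hypothetical illustrations, not any real API.
from dataclasses import dataclass

@dataclass
class SystemCapabilities:
    complete_oedr_in_odd: bool    # full OEDR within every defined ODD
    alerts_before_takeover: bool  # recognizes and warns the driver in time, every time
    ddt_fallback: bool            # can reach a safe state with no human at all
    unrestricted_odd: bool        # works anywhere a human can safely drive

def missing_for(level: str, c: SystemCapabilities) -> list[str]:
    """List the items still missing for the target SAE level."""
    gaps = []
    if not c.complete_oedr_in_odd:
        gaps.append("complete OEDR within the defined ODDs")
    if level == "L3" and not c.alerts_before_takeover:
        gaps.append("timely takeover alerts in all cases")
    if level in ("L4", "L5") and not c.ddt_fallback:
        gaps.append("DDT fallback (no human required)")
    if level == "L5" and not c.unrestricted_odd:
        gaps.append("ODD expanded to all places a human can safely drive")
    return gaps

# Per the CA DMV description quoted above, City Streets today fails the
# first two checks, so even L3 is out regardless of how well it drives:
print(missing_for("L3", SystemCapabilities(False, False, False, False)))
```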
 
This thread has become painful to read. I've been adding more people to ignore (from the investment forum; I guess the stock is down too much lately), people constantly quote each other so I don't get to see what they're responding to, and people argue and bicker about semantics and grandmas. But then it's not like there's anything else to do while we wait for V12.
“Ignore” is a powerful feature in TMC and can be very useful at times!
 
It's fundamentally not and you make the thread dumber every time you claim otherwise.

FSD L2 cannot drive at all-- that is inherent to the definition of L2.

It can only assist a human who is themselves driving, as Tesla themselves tell you, even in the most current version.

That is not semantics.
Yes, it's ALL semantics. And, to an extent, as you yourself note, a pointless argument. Of course the car "drives" in the sense that it is performing all the functions of a driver for some period of time. What is at issue here is how long that time period is (I mean how long it can safely drive w/o human intervention). Everything else is noise.
 
I mean, I've provided several educational links that could help dispel the ignorance...
That's wonderful.

But it's funny how one can explain everything, yet convince of nothing. Even as your self-scored tally sheet of correctness grows, your effectiveness wanes.

That may be fine with you, as it preserves the never-ending supply of sparring partners. But there's a reason the age-old concept of Hell is to be condemned to torture that would otherwise be lethal, never escaping by actually succumbing to it, but always returning for more.

I kind of wish you would come up with the final incontrovertible argument, the triumphant last word that would release us from this hellish tangent and let us go back to discussing the possibilities of v12. But I don't think that's really the goal for you, is it?
 
What should end-to-end do with adversarial vehicles? I'm guessing this might be a situation where someone is driving the wrong way, straight at you. I suppose this should be relatively easy to test once we get 12.x installed, although has anyone done that with 11.x? At minimum I suppose it might stop, similar to how it handles narrow roads, and maybe it knows not to swerve into oncoming traffic from normal driving behavior examples?
 
Update... talked to neighbors; nearly all agree my wife's and my debate over the "self baking oven" is semantics. However, one neighbor claims her oven is superior. Turns out it has just one button, for baking cookies at exactly 400 degrees for exactly 30 minutes (and it only enables baking when it's not raining outside). I asked if that was truly better than my general purpose oven, and she responded "oh yeah its way mo' better".
 
Yes, it's ALL semantics.

It's absolutely not though, as explained in considerable detail.

Which part of the explanation, specifically, did you get lost at?


Of course the car "drives" in the sense that it is performing all the functions of a driver for some period of time.

Not only can it not do that, Tesla themselves tell you it can't. I quoted them telling you that two posts above your statement.

Your claims are fundamentally, factually, wrong.



I kind of wish you would come up with the final incontrovertible argument, the triumphant last word that would release us from this hellish tangent and let us go back to discussing the possibilities of v12. But I don't think that's really the goal for you, is it?

Get people to accept truth and facts? That's absolutely the goal.

But as a wise man once noted, you can lead a human to knowledge, but you can't make him think.

See above, and throughout the discussion, where folks have convinced themselves Tesla repeatedly lies to government agencies-- that their stuff really CAN self-drive but they just don't want to be regulated-- or that none of the legal, regulatory, or even engineering-standards definitions are anything but "semantics," when they're fundamental to this entire process.



FSD is L5.
The car can drive itself but requires a safety driver because it’s not very good at it.
The only thing you need to believe is that Tesla would tell the DMV it’s L2 in order to avoid regulation.

What should end-to-end do with adversarial vehicles? I'm guessing this might be a situation where someone is driving the wrong way, straight at you. I suppose this should be relatively easy to test once we get 12.x installed, although has anyone done that with 11.x? At minimum I suppose it might stop, similar to how it handles narrow roads, and maybe it knows not to swerve into oncoming traffic from normal driving behavior examples?

That's actually a really interesting question... because what Elon said about V12 is that it has no idea what other things are: there's NOTHING in the system TELLING it "this is a stop sign," nor anything telling it "if you see a stop sign, stop"... it just knows how the "good drivers" it's trained on react to those situations. It's been shown so much footage of good drivers stopping at red lights that when it sees something that looks like that, it knows it's supposed to stop.

I don't imagine they've got a ton of footage of "good drivers reacting to cars aiming for them" and I'm not sure they'd have consistent enough situations and results to produce useful training from what they DO have.

I suppose simulation can help here-- but to what result? There are some fun trolley-problem permutations of this too.
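
For anyone wondering how "trained on footage of good drivers" works mechanically, here's a minimal behavior-cloning sketch in Python (assuming PyTorch; the feature size, network, and action format are all invented, not Tesla's architecture). It shows why adversarial vehicles are a hard case: the network only learns to match whatever the logged drivers did, so a scenario missing from the logs contributes nothing to the loss:

```python
# Minimal behavior cloning: mimic the expert's controls.
# Shapes and the network are placeholders for illustration only.
import torch
import torch.nn as nn

policy = nn.Sequential(          # stand-in for the real driving network
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 2),           # e.g. [steering, acceleration]
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(features: torch.Tensor, expert_action: torch.Tensor) -> float:
    """One imitation step: push the policy's output toward the expert's."""
    opt.zero_grad()
    loss = loss_fn(policy(features), expert_action)
    loss.backward()
    opt.step()
    return loss.item()

# Fake batch: 32 clips of "good driver" footage reduced to 512-d features.
# If no clip shows a car aiming at you, nothing here teaches that response.
print(train_step(torch.randn(32, 512), torch.randn(32, 2)))
```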
 
Update... talked to neighbors; nearly all agree my wife's and my debate over the "self baking oven" is semantics. However, one neighbor claims her oven is superior. Turns out it has just one button, for baking cookies at exactly 400 degrees for exactly 30 minutes (and it only enables baking when it's not raining outside). I asked if that was truly better than my general purpose oven, and she responded "oh yeah its way mo' better".

Jeez, the debate won't stop....

This neighbor of mine tells me my general purpose oven isn't "self baking." I say, "come on, if I set the controls and it bakes properly, what's the difference?" She tells me I'm dumb and cites the manual, where I may need to intervene to adjust the time or temperature and must be prepared to react if it catches on fire. I told her she was throwing shade at me. She got mad and threatened that during the next Super Bowl she was going to post neighborhood flyers about my reckless behavior.
 
I don't imagine they've got a ton of footage of "good drivers reacting to cars aiming for them" and I'm not sure they'd have consistent enough situations and results to produce useful training from what they DO have.
I'm not even sure what the expected / good driving behavior is, and some responses might require new controls that FSD Beta so far hasn't used. For example, maybe honking the horn would wake up a drowsy driver, flashing the headlights could provide a better view and a warning, or even reversing, if already stopped, could be the right move when the difference between a crash or not is just inches.

I've wondered if having end-to-end internally understand the concept of crashing would be useful, and if so, it would seem like crash severity or even probability of injury might be worth evaluating. These could feed a reinforcement-learning-with-human-feedback-style training system where, instead of relying only on "good" examples, control is biased toward good and away from not-so-good / "bad" examples: ideally avoid the crash entirely, but better to crash at low speed than at high speed.
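
One way to express that bias, just as a rough sketch (Python/PyTorch again; the scoring scheme and every name are purely illustrative, not anything Tesla has described): weight each imitation example by something like negative crash severity, so good outcomes are cloned strongly and bad ones barely reinforce the behavior that produced them:

```python
# Advantage-weighted imitation sketch: per-example scores bias training
# toward good outcomes. Everything here is a hypothetical illustration.
import torch

def weighted_imitation_loss(pred: torch.Tensor,
                            action: torch.Tensor,
                            score: torch.Tensor,
                            beta: float = 1.0) -> torch.Tensor:
    """Per-example MSE weighted by exp(beta * score).

    score > 0: good outcome (no crash, low speed) -> imitate strongly.
    score < 0: bad outcome (crash, high severity) -> weight near zero,
    so the example barely reinforces the behavior that led to it.
    """
    per_example = ((pred - action) ** 2).mean(dim=-1)
    weights = torch.exp(beta * score).clamp(max=20.0)  # cap to avoid blow-ups
    return (weights * per_example).mean()

pred = torch.randn(4, 2)                      # policy outputs
action = torch.randn(4, 2)                    # logged actions
score = torch.tensor([1.0, 0.5, -2.0, -5.0])  # last two ended in crashes
print(weighted_imitation_loss(pred, action, score))
```

A full RLHF setup would add a learned reward model and a policy-gradient step on top, but down-weighting like this is about the cheapest version of "better to crash at low speed than high speed."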

It'll be interesting to see if Tesla reveals details of how much training is needed to learn or change behaviors such as performing complete stops, correctly ignoring a green left-turn arrow when the straight-ahead light is red, or handling these adversarial situations. Similar to research on large language models, people are looking into ways to learn more from less data, but Tesla has the hammer of collecting more data and growing available compute, so even if the current approach is inefficient, it might be good enough to get the job done.