My thought is that both planning and control heuristics were reduced, based on the many comments during the livestream about how long to wait at stop signs, roundabouts, etc.
From the livestream, various things were missing from the visualization, potentially indicating they were not active; this included the lack of the highlighted blue vehicles that FSD Beta 11.x uses to indicate its planned behavior. Indeed, this included situations like roundabouts, where V12 seemed to know better than before when to enter smoothly.

A decent amount of time was spent explaining Planning at AI Day 2022, with examples of when to make an unprotected turn with pedestrians and cross traffic that might also temporarily occlude visibility. This last item relied on the occupancy network to determine what should be visible, and I believe this is also used to decide creeping and its limit. All of these explicit features could be replaced in V12, especially if it is based on a world model.
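As a rough illustration of what that explicit creep logic might look like (all names and numbers here are hypothetical, not anything from Tesla's actual code), creeping could be capped by how far the occupancy network says the view is still occluded:

```python
# Hypothetical sketch of explicit creep logic driven by an occupancy/visibility estimate.
# The idea: creep until cross traffic should be visible, but never past a hard limit.

CREEP_HARD_LIMIT_M = 2.5   # assumed maximum creep distance past the stop line

def creep_increment(visible_distance_m: float, required_sight_distance_m: float,
                    crept_so_far_m: float) -> float:
    """Return how much further to creep (meters) this planning cycle."""
    if visible_distance_m >= required_sight_distance_m:
        return 0.0                                    # can already see far enough; stop creeping
    remaining_budget = CREEP_HARD_LIMIT_M - crept_so_far_m
    return max(0.0, min(0.3, remaining_budget))       # small increments up to the hard limit

# In an end-to-end V12-style stack this whole function would disappear; the same
# behavior would instead be implied by training clips of humans creeping at
# occluded intersections.
```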

The people who worked on the old explicit code and modules are well aware of the tricky corner cases and probably have a decent catalog of test cases to evaluate whether end-to-end behaves appropriately or requires more data collection for training.
 
The people who worked on the old explicit code and modules are well aware of the tricky corner cases and probably have a decent catalog of test cases to evaluate whether end-to-end behaves appropriately or requires more data collection for training.
Would be interesting to see how FSD 12 handles CULT (Chuck's unprotected left turn); getting that ULT working can't be easy.

In general, I feel what FSD 12 does based on just video input is impressive, but how to get to that 1-in-10k error rate is not clear. Ultimately it's not about what it gets right but what it gets wrong.

BTW, from the biography we know the AP team and Musk value disengagement rates highly. When they start publishing them, we'll know they've got a handle on that.
 
how to get to that 1-in-10k error rate is not clear… BTW, from the biography we know the AP team and Musk value disengagement rates highly
In the past, Musk has talked about "miles between a necessary intervention" and "safety critical," and that's probably what you're referring to with "error rate," as opposed to any generic disengagement where the car is doing something strange / undesired / confusing / wrong but not necessarily unsafe. Presumably that's part of the reason Tesla added the voice drive-notes, and I would expect Tesla has noticed a significant number of non-safety disengagements that are worthwhile to address.

With end-to-end training on curated human driving examples, the car should theoretically behave in a much more human-like way, somewhat similar to how ChatGPT used not only text from humans but also human feedback to bias its responses even further. Having FSD drive like other humans probably increases people's acceptance and even their understanding, relative to hardcoded "robotic" controls, potentially resulting in a significant decrease in overall disengagement rates.
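To make the analogy a bit more concrete, here's a toy behavior-cloning sketch (purely illustrative; the network shape, inputs and loss are my assumptions, not anything Tesla has described):

```python
# Toy behavior-cloning loop: the policy is trained to imitate curated human driving,
# so "good" behavior comes from the data distribution rather than hand-written rules.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 2))  # -> (steer, accel)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def train_step(video_features: torch.Tensor, human_controls: torch.Tensor) -> float:
    """One supervised step: predict controls, penalize deviation from the human driver."""
    predicted = policy(video_features)
    loss = nn.functional.mse_loss(predicted, human_controls)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with a batch of curated clips:
#   features, controls = next(iter(curated_clip_loader))   # assumed dataloader
#   train_step(features, controls)
```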
 
In the past, Musk has talked about "miles between a necessary intervention" and "safety critical," and that's probably what you're referring to with "error rate," as opposed to any generic disengagement where the car is doing something strange / undesired / confusing / wrong but not necessarily unsafe. Presumably that's part of the reason Tesla added the voice drive-notes, and I would expect Tesla has noticed a significant number of non-safety disengagements that are worthwhile to address.

With end-to-end training on curated human driving examples, the car should theoretically behave in a much more human-like way, somewhat similar to how ChatGPT used not only text from humans but also human feedback to bias its responses even further. Having FSD drive like other humans probably increases people's acceptance and even their understanding, relative to hardcoded "robotic" controls, potentially resulting in a significant decrease in overall disengagement rates.
That is my thesis: that end-to-end training on supervised, positive human driving examples helps produce a more acceptable high-end L2 product fairly quickly, one which feels more natural most of the time. (Current "FSD" is certainly only L2 and doesn't show evidence of advancement past that.)

But there's still a very long way to L4, and end-to-end training might set that back, at least for some time until deterministic controllability guarantees can be achieved with an entirely new architecture.
 
But there's still a very long way to L4, and end-to-end training might set that back, at least for some time until deterministic controllability guarantees can be achieved with an entirely new architecture.
There is, at present, really no way to get that type of guarantee in a computer-vision-only system. It's extremely hard with any type of ML system.

A better L2 system is definitely within reach, but I think we can forget about wide-ODD, CV-only autonomy for the next five years at least.
 
end-to-end training might set that back, at least for some time until deterministic controllability guarantees can be achieved
Practically, there probably can't be absolute guarantees given the many potential corner cases that might be fine almost all the time, and this is probably why Musk started talking about Uber driver ratings on Spaces just before the end-to-end demo.


So even with regulators evaluating automated driving systems, I don't think it's useful for them to comb through traditional control logic, especially since who knows what other parts of the code might change that behavior anyway. For these end-to-end models in particular, there will probably be some set of tests that can be scored pass/fail or on a scale, but more important are probably safety metrics that use statistics to get at those "guarantees."
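For a sense of what a statistical "guarantee" can actually look like, here's a minimal sketch (my own illustration, not anything Tesla or a regulator has published) of bounding the critical-failure rate from observed fleet miles:

```python
# Upper confidence bound on the critical-failure rate given observed miles.
# With zero observed failures this reduces to the standard "rule of three":
# a 95% upper bound of roughly 3 failures per N miles.
import math

def failure_rate_upper_bound(miles_driven: float, failures: int,
                             confidence: float = 0.95) -> float:
    """Approximate upper bound on safety-critical failures per mile."""
    if failures == 0:
        return -math.log(1.0 - confidence) / miles_driven     # ~3/N for 95%
    # Crude approximation for small non-zero failure counts (illustrative only)
    return (failures + 2.0 * math.sqrt(failures)) / miles_driven

# Example: 1,000,000 intervention-free miles still only bounds the rate at about
# 3 per million miles with 95% confidence.
print(failure_rate_upper_bound(1_000_000, 0))   # ≈ 3.0e-06
```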

Even for FSD Beta so far, presumably Tesla has some evaluation of when something is ready for rollout and wider deployment that can be reused as they look towards removing human supervision.
 

Translation: they are giving up on L4 robotaxi software for a long while.

Because they won't permit what would be necessary in that scenario, if you think of it like driver-testing intellectually deficient humans whose brains can't be opened: adversarial fleet testing by human regulators and approval of every specific version of the software, funded by Tesla.
 
Translation: they are giving up on L4 robotaxi software for a long while.

Because they won't permit what would be necessary in that scenario, if you think of it like driver-testing intellectually deficient humans whose brains can't be opened: adversarial fleet testing by human regulators and approval of every specific version of the software, funded by Tesla.



...what?

In most of the states that allow L4 vehicle operation (CA being very much an outlier in this) the only "approval" you need to put your L4 car on the road is telling the state DMV "hey, my car is L4 and it's insured. Trust me bro"

That's it. That's the "regulation"
 
There is, at present, really no way to get that type of guarantee in a computer-vision-only system. It's extremely hard with any type of ML system.

A better L2 system is definitely within reach, but I think we can forget about wide-ODD, CV-only autonomy for the next five years at least.
I think this explains Mobileye's architecture, then. I was surprised that they have entirely separate stacks: vision on one side and direct sensing (lidar / imaging radar) on the other.

I presume the vision stack operates the vehicle most of the time with ML-heavy vision modeling, and then a fully physics-based, deterministic robotics boundary (like DARPA 2007) is applied by the direct-sensing model, which acts only as an override constraint for safety and doesn't operate the main planner.
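A minimal sketch of that split, assuming nothing about Mobileye's actual implementation (all names and numbers here are made up): the ML planner proposes, and a deterministic envelope computed from direct sensing can only make the command more conservative.

```python
# Hypothetical "ML plans, physics overrides" split. The deterministic envelope can
# only reduce the commanded acceleration; it never operates the main planner.

def safe_max_accel(gap_m: float, ego_speed_mps: float, max_brake_mps2: float = 6.0) -> float:
    """Deterministic bound from direct sensing: keep stopping distance inside the measured gap."""
    stopping_distance_m = ego_speed_mps ** 2 / (2.0 * max_brake_mps2)
    if gap_m > 1.2 * stopping_distance_m:      # 20% margin, assumed
        return 1.5                             # allow moderate acceleration
    return -max_brake_mps2                     # otherwise force hard braking

def control(ml_planner_accel: float, gap_m: float, ego_speed_mps: float) -> float:
    """The ML planner drives; the physics envelope can only clamp it downward."""
    return min(ml_planner_accel, safe_max_accel(gap_m, ego_speed_mps))

print(control(2.0, gap_m=60.0, ego_speed_mps=20.0))   # clamped to 1.5 m/s^2
```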
 
...what?

In most of the states that allow L4 vehicle operation (CA being very much an outlier in this) the only "approval" you need to put your L4 car on the road is telling the state DMV "hey, my car is L4 and it's insured. Trust me bro"

That's it. That's the "regulation"
And when these robotaxis hit the road for real, there will be demands for actual testing and regulation once average riders and other drivers experience them and notice their flaws. Complaints will come first from other drivers, and politicians will get whinged at.

Or they're "insured," and then actual lawsuits claiming damages against that insurance become significant. For example, they will be subject to liability when they drive 'unnaturally' in a way that causes other people to have an accident.
 
And when these robotaxis hit the road for real, there will be demands for actual testing and regulation once average riders and other drivers experience them and notice their flaws. Complaints will come first from other drivers, and politicians will get whinged at.

Or they're "insured," and then actual lawsuits claiming damages against that insurance become significant. For example, they will be subject to liability when they drive 'unnaturally' in a way that causes other people to have an accident.


There are already cars on the road, today, driving beyond L2 under these laws, though.

Waymo has done so for many years in AZ for example, where no regulatory approval whatsoever is required-- you certify your car can self drive and obey all laws, you're insured, and there's a plan for interacting with law enforcement if needed, and they believe you, and you're good to go.

Mercedes is selling consumer L3 vehicles to the public operating today on public roads in NV under a similar set of state laws- you self-certify the system can drive safely, obey all relevant laws, and it's insured and you're good to go. Nobody checks or cares or needs to 'approve' anything.

Most other states that allow self driving, even ones where nobody's doing it yet, are largely carbon copies of these same laws.

As I say, it's largely CA that's the weird one requiring any sort of evidence or reporting of any kind.

COULD that change some day and more states require more from self-driving vehicles? Sure. Could also be that never changes because nobody is dumb enough to sell a self-driving car dangerous enough to cause those laws to change.

But that's not the world of today- in which "need to get approved by regulators" is largely a BS red herring argument because there are no regulators and they don't approve squat. TRUST ME BRO is all you need.
 
COULD that change some day and more states require more from self-driving vehicles? Sure. Could also be that never changes because nobody is dumb enough to sell a self-driving car dangerous enough to cause those laws to change.

But that's not the world of today- in which "need to get approved by regulators" is largely a BS red herring argument because there are no regulators and they don't approve squat. TRUST ME BRO is all you need.
OK, but Elon Musk operates very differently from Waymo or Mercedes, and he's exactly the one who might be "dumb enough to sell a self-driving car dangerous enough to cause those laws to change."
 
It seems to me that with pure end-to-end AI, since it is a black box that doesn't allow direct modifications, there could be some major challenges in store. It would have to be taught all the different laws for the different states and countries. I suppose it would have to use GPS data to know which laws it needed to follow. Also, when governments decide to change a law, the system would have to be taught the new law and exactly where it takes effect. This would need to be done in a timely fashion before the law is implemented, i.e. before there would be data of drivers following the new law.
 
It seems to me that with pure end-to-end AI, since it is a black box that doesn't allow direct modifications, there could be some major challenges in store. It would have to be taught all the different laws for the different states and countries. I suppose it would have to use GPS data to know which laws it needed to follow. Also, when governments decide to change a law, the system would have to be taught the new law and exactly where it takes effect. This would need to be done in a timely fashion before the law is implemented, i.e. before there would be data of drivers following the new law.


From the description Elon gave on the livestream, this is exactly the opposite of what they'd have to do.

It doesn't know "you stop at a stop sign, which is an octagonal red thing, because the law says you do that and you have to stop in X way legally."

Instead it's shown millions of clips of good drivers following the law, and copies what they did in the same situation. It doesn't know what a "stop sign" is, let alone the laws around them.


So if a law changes that'd impact driving they'd need to retrain the system with lots of clips of good drivers obeying the new law.

That DOES raise the fun point that if there ARE no such clips until after a law is passed (because nobody drives that way until then), they'd be unable to fix it until well after said law was passed... but I'm not sure I can think of an example offhand of a law they might change where there's NO existing footage of correct behavior somewhere.
 
Nobody checks or cares or needs to 'approve' anything
Tesla is operating at a different scale and under different scrutiny, with different risks, such as NHTSA's ability to prevent the sale of new cars via recalls, even for driver-assistance features ("Full Self-Driving Software May Cause Crash") and even when the manufacturer disagrees: "On February 7, 2023, while not concurring with the agency’s analysis, Tesla decided to administer a voluntary recall out of an abundance of caution."

Presumably Tesla is continuing their usual ongoing communications with NHTSA to avoid an eventual end-to-end rollout resulting in a recall that stops sales. Perhaps the dedicated robotaxi vehicle allows Tesla to initially focus the potential recall risk on just that vehicle instead of the whole fleet.
 
Tesla is operating at a different scale and under different scrutiny, with different risks, such as NHTSA's ability to prevent the sale of new cars via recalls, even for driver-assistance features ("Full Self-Driving Software May Cause Crash") and even when the manufacturer disagrees: "On February 7, 2023, while not concurring with the agency’s analysis, Tesla decided to administer a voluntary recall out of an abundance of caution."
Tesla can (as near as matters) instantly disable any FSD feature on the cars, so an NHTSA recall wouldn't prevent sales for any length of time.

And that's assuming the car shipped from the factory with the problematic software active as opposed to being a first download type thing.
From the recall you linked to:
Identify How/When Recall Condition was Corrected in Production : N/A. Software releases containing the FSD Beta feature are not installed on new vehicles during vehicle manufacturing.

 
So if a law changes that'd impact driving they'd need to retrain the system with lots of clips of good drivers obeying the new law.
Localisation has to do with culture, among other things. People drive differently in NYC, SF, Milan, Nairobi, Shanghai and Tokyo. So retrain the model for every market using a completely different training set? Doesn't sound very scalable to me.
 
Localisation has to do with culture, among other things. People drive differently in NYC, SF, Milan, Nairobi, Shanghai and Tokyo. So retrain the model for every market using a completely different training set? Doesn't sound very scalable to me.


They're not training it to "drive like the culture"; they're training it to drive safely.

That's the same in NYC and SF.

I do expect they'd need to train differently for, say, countries that drive on the other side of the road-- or where you might commonly need to yield to elephants or something.... But the mainland EU would be all one training block.... the US (possibly CA as well) would be one too- as would Japan.

The only localization thing I can think of offhand that would really make any difference within those (in the US anyway) is where it is, or is not, OK to go right on red.

Watching good drivers can tell it HOW to safely go right on red, but probably not WHERE (geographically) it's OK to do it, so they'd need at least a bit of GPS hardcoding there.
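Something like this is probably all the "hardcoding" that would be needed; a sketch under my own assumptions (the region list and function names are made up for illustration; NYC's default ban on right-on-red is the one real-world example used):

```python
# Hypothetical jurisdiction gate: the end-to-end model decides HOW to make the
# maneuver, but a geographic lookup decides WHERE it's allowed at all.

NO_RIGHT_ON_RED_REGIONS = {"new_york_city"}   # NYC bans right-on-red by default

def allow_right_on_red(region_id: str, model_wants_right_on_red: bool) -> bool:
    """Veto a learned right-on-red maneuver in regions where it isn't legal."""
    if region_id in NO_RIGHT_ON_RED_REGIONS:
        return False
    return model_wants_right_on_red

print(allow_right_on_red("new_york_city", True))   # False: the geographic rule wins
```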
 