
Tesla AI day: Q3

It's about comparing autonomous to autonomous. Tesla's public AP is not autonomous. FSD Beta is Tesla's autonomous driving product.

No, it's not.

It's a non-public, not commercially available, not yet finished product still in development.


And Waymo's service in Chandler is autonomous driving.

That is commercially available to the public right now.


Expecting the same level of transparency doesn't really follow.




No. Waymo does publish actual disengagements, but only safety disengagements, because those are the most meaningful in terms of safety.


The report discussed lists accident rate. It does not list disengagement rate.

In fact, the news story I linked to specifically calls Waymo out for this lack of transparency:

Original news story said:
the company hasn't been totally transparent with metro Phoenix residents, refusing to turn over data showing how many times the vehicles' autonomous function has failed while driving around Chandler, Tempe, and other Valley areas





Elon has mentioned that the safety goal for FSD Beta is 100-200% safer than the average human, and a Tesla rep told the CA DMV that the goal is 1 safety-critical disengagement per 1-2M miles. So we know Tesla's safety goals. That is why it would be nice to get actual safety data on FSD Beta, to see how close FSD Beta is to those stated goals.
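For a rough sense of how those two goals relate, here is some back-of-the-envelope arithmetic. The ~500,000 miles-per-crash human baseline is only an illustrative assumption for the math, not an official figure:

```python
# Illustrative back-of-the-envelope math only; the baseline below is an assumption.
human_miles_per_crash = 500_000  # assumed average-human crash interval (hypothetical)

# "100-200% safer than the average human" reads as 2x to 3x the human baseline.
crash_goal_low = 2 * human_miles_per_crash
crash_goal_high = 3 * human_miles_per_crash
print(f"Crash goal: one crash per {crash_goal_low:,} to {crash_goal_high:,} miles")

# Separately stated goal: 1 safety-critical disengagement per 1-2 million miles.
print(f"Disengagement goal: one per {1_000_000:,} to {2_000_000:,} miles")
```

Under that assumed baseline, both goals land in roughly the same 1-2 million-mile ballpark, which is exactly why real FSD Beta safety data would be interesting to compare against them.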


It would be nice to have a pony too... but until the system is available to the general public, I don't see why they'd provide that info.

They do provide safety data on the widely available system though... something I'm unaware of any other vehicle OEM doing.

How many miles between accidents do Super Cruise cars go with and without the system engaged? Or with and without active safety features on?
 
It's about comparing autonomous to autonomous. Tesla's public AP is not autonomous. FSD Beta is Tesla's autonomous driving product. And Waymo's service in Chandler is autonomous driving. So it makes sense to compare those two.
I believe that what we all call FSD beta is actually a beta release of Autosteer on City Streets. At this time that is clearly an L2 product and is likely to remain so for some time after wider release.

Now, since I don't believe in legalistic back-and-forth disputes in the forums, I will move to put more cards on the table. Indeed, what we all believe, and what Elon believes, is that the goal is to get to fully autonomous L4+ capability. However, FSD is a viable feature even as L2 in the meantime. For my part, I completely understand Tesla's method of gathering data and not reporting all kinds of disengagement details to the California DMV or other regulatory bodies. I don't believe that legally or morally they are required to keep regulators tightly in the loop as they develop their product, and I don't believe that doing so would be a net safety benefit. On the other hand, it will become very difficult to maintain this position, though I believe it is correct, if any serious accidents occur on FSD Beta. That's simply a result of new technology and a growing group of interests that would like to put some obstacles in Tesla's way (not a conspiracy / TeslaQ / FUD argument, but a reasonable observation based on market forces, news sensationalism, and automotive history).

I assume, or at least hope, that Tesla has some pre-thought-out plans for how to handle serious incidents that may occur during this phase. Like the situation that will be in place upon L4 release, those plans should involve Tesla taking responsibility for the performance of its system in traffic. This is also why, though I'm as anxious as anyone to get my hands on the City Streets feature, I do not support wide release in its current state, nor to people who simply demand it angrily based on their prior purchase.
 
I believe that what we all call FSD beta is actually a beta release of Autosteer on City Streets. At this time that is clearly an L2 product and is likely to remain so for some time after wider release. ... Indeed, what we all believe, and what Elon believes, is that the goal is to get to fully autonomous L4+ capability.
How will the beta of FSD L4+ be different from FSD Beta?
 
I believe that what we all call FSD beta is actually a beta release of Autosteer on City Streets. At this time that is clearly an L2 product and is likely to remain so for some time after wider release.


If this thing supported those GIF memes I'd be putting the "ALL OF THIS" one under it.

The CA emails are super clear on this point. City Streets is an L2 feature. That's why they need not report anything to the CA DMV.

Green has mentioned that some variant of the city streets code has been in there for years; the "FSD Beta" is just turning on that part of the code (which has obviously been updated a ton since he first saw it).

How will the beta of FSD L4+ be different from FSD Beta?

It won't require a human, it will have a defined ODD (operational design domain), and it will be capable of reaching a minimal risk condition on its own if it leaves its defined ODD or is otherwise unable to perform the DDT (dynamic driving task).

None of which the current city streets stuff can do.
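To make the distinction concrete, here is a toy sketch of the two fallback roles. This is purely illustrative pseudologic written for this post, not anything from SAE J3016 or Tesla's actual software:

```python
# Toy model of the L2-vs-L4 fallback distinction (illustrative only).
from enum import Enum, auto

class Fallback(Enum):
    HUMAN_DRIVER = auto()            # L2/L3: a human is the fallback and must take over
    MINIMAL_RISK_CONDITION = auto()  # L4+: the system stops/pulls over safely on its own

def fallback(level: int, inside_odd: bool, can_perform_ddt: bool) -> Fallback:
    """Who handles it when the system exits its ODD or can no longer perform the DDT?"""
    if inside_odd and can_perform_ddt:
        raise ValueError("No fallback needed; the system keeps driving")
    return Fallback.MINIMAL_RISK_CONDITION if level >= 4 else Fallback.HUMAN_DRIVER

print(fallback(level=2, inside_odd=True, can_perform_ddt=False))   # -> HUMAN_DRIVER
print(fallback(level=4, inside_odd=False, can_perform_ddt=True))   # -> MINIMAL_RISK_CONDITION
```

The argument in this thread is basically about which branch of that last line the city streets build is on today.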
 
It won't require a human, it will have a defined ODD, and it will be capable of reaching a minimal risk condition on its own if it leaves its defined ODD or is otherwise unable to perform the DDT.

None of which the current city streets stuff can do.
If it doesn't require a safety driver when operating on public streets then it's not beta IMHO. As far as I know, no one has tried driving FSD Beta to Mexico or Canada, so we don't know what will happen if it leaves its defined ODD. :p
Elon said on the last conference call that they're going to use disengagement data analysis from FSD Beta to determine when it's L4+. To me, that sounds like a beta of L4+. It seems ridiculous to say that it has to be feature complete to qualify as autonomous vehicle testing.

You and I have had this argument repeatedly and I doubt we're going to change our minds. I'm curious what other people think.
 
If it doesn't require a safety driver when operating on public streets then it's not beta IMHO.

Well, I was more explaining what would be different in an L4 system versus the current L2 one.

"L4 beta" is a thing that does not exist so I can't really speak to what that phrase means.


As far as I know no one has tried driving FSD Beta to Mexico or Canada so we don't know what will happen if it leaves its defined ODD

Sure we do.

We've seen the city streets beta insist the driver take over tons of times when faced with a task outside its capabilities. All the way from the minor "tap accelerator to proceed" messages right up to the TAKE OVER IMMEDIATELY message some regular wide-release users might be familiar with.



Elon said on the last conference call that they're going to use disengagement data analysis from FSD Beta to determine when it's L4+. To me, that sounds like a beta of L4+.


Then you need your hearing checked :)

If it requires a human ever then it's not L4. By definition.


It's possible he's saying that until they improve the L2 system to the point where the disengagement rate is below X, they're not going to bother wasting development and testing cycles on things like minimal-risk-condition coding, since before that point the human is always there to handle that, so the code isn't needed.

During the period they're testing that, you could argue it's an L3 system, since at THAT point (but NOT today) the design intent is for the system to actually be doing 100% of the driving task, but the human is still needed as a fallback, since they won't certify the system as such until development and testing are completed.

When they reach the point where the design intent of the running system in the car is that the car can drive itself without ANY human present, then it's L4.

What the city streets beta people have today is explicitly not that.
 
"L4 beta" is a thing that does not exist so I can't really speak to what that phrase means.
I'm saying that FSD Beta is L4+ beta (really L5 but I was quoting a post).
We've seen the city streets beta insist the driver take over tons of times when faced with a task outside its capabilities. All the way from the minor "tap accelerator to proceed" messages right up to the TAKE OVER IMMEDIATELY message some regular wide-release users might be familiar with.
There's nothing in the SAE spec that says that a prototype vehicle can never ask the safety driver to take over. You can see from the CA disengagement reports that systems do that all the time!
If it requires a human ever then it's not L4. By definition.
No, L4 prototype vehicles need safety drivers to operate safely on public roads. I meant that he said they're going to use disengagement data analysis from FSD Beta to determine when it's ready to no longer be a beta (i.e., L5). The whole point of a beta is to determine when a product is ready for release.

How do you know they haven't coded the software to achieve minimal risk condition?
 
He'll answer what the hell he means by "real-world AI". Obviously, he'll discuss Dojo progress.

I'm assuming "real-world AI" has to do with:
- Inferring a real-world object's future state (position / velocity / intention) with NNs (see the toy sketch below)
- Factoring in object permanence (the most recent update, 2021.4.15, is doing this with cones)
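As a toy illustration of what "inferring a future state" might look like, here is a made-up constant-velocity extrapolation; a real system would presumably use learned models over camera-derived tracks, and every name here is hypothetical:

```python
# Hypothetical example of predicting an object's future position from its recent track.
# A constant-velocity extrapolation stands in for whatever learned model Tesla uses.

def predict_future_positions(track, dt=0.5, horizon=2.0):
    """track: list of (t, x, y) observations; returns predicted (x, y) every dt seconds."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)  # velocity estimated from the last two observations
    vy = (y1 - y0) / (t1 - t0)
    steps = int(horizon / dt)
    return [(x1 + vx * dt * i, y1 + vy * dt * i) for i in range(1, steps + 1)]

# A pedestrian seen at two timestamps, walking in the +x direction:
track = [(0.0, 10.0, 3.0), (0.5, 10.6, 3.0)]
print(predict_future_positions(track))  # predicted (x, y) at 0.5 s intervals over 2 s
```

Object permanence, in the same picture, would mean keeping that track alive while the pedestrian is briefly occluded instead of dropping it.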
I don't remember this being answered. Any thoughts?
 
I don't remember this being answered. Any thoughts?

Elon mentioned it during AI Day. He's just saying Tesla is the leader in applied real-world AI because Teslas are basically robots, and they use NNs to make sense of and interact with the world. No other company is making as much of an impact in real-world interactions with NNs.

It's kind of an obvious statement, but Elon's audience isn't only geeks like us in the AV forum. He's just making it easier for other people to understand what Tesla is trying to do.
 
... Tesla is the leader in applied real-world AI because Teslas are basically robots, and they use NNs to make sense of and interact with the world. No other company is making as much of an impact in real-world interactions with NNs. ...
Google has thousands of TPUs running Neural Nets doing all sorts of things like: understanding search queries, understanding written natural language, understanding spoken natural language, making sense of street view, polishing up photos, etc...
I'd be surprised if Tesla is doing one tenth of what Google does with NNs, except in one area: real world robots that people can use.
 
Google has thousands of TPUs running Neural Nets doing all sorts of things like: understanding search queries, understanding written natural language, understanding spoken natural language, making sense of street view, polishing up photos, etc...
I'd be surprised if Tesla is doing one tenth of what Google does with NNs, except in one area: real world robots that people can use.

Yup, but most of Google's use cases don't interact with the real world; they mostly provide value in the software world.
 
Yup, but most of Google's use cases don't interact with the real world; they mostly provide value in the software world.
I find that hard to understand. Parsing search queries is used by the real world a billion times a day or more; the same goes for understanding spoken natural language and for processing people's pics, and search ranking uses NNs in different ways. Ask how tall a famous person is, and a NN is used to understand the written text. Even the maps that Tesla uses for navigation are provided by Google and processed with NNs.
 
I find that hard to understand. Parsing search queries is used by the real world a billion times a day or more; the same goes for understanding spoken natural language and for processing people's pics, and search ranking uses NNs in different ways. Ask how tall a famous person is, and a NN is used to understand the written text. Even the maps that Tesla uses for navigation are provided by Google and processed with NNs.

I'm just trying to explain what Elon means when he says real world AI lol.

I use Google's services routinely, but to me, they aren't "real-world" AI. There should be a differentiation between AI that interacts with the physical world and AI that lives in the digital world. I guess that's what Elon is trying to focus attention on.
 
Weren’t you recently arguing that FSD Beta was “released”?

Not sure what specific statement you're talking about here, but regarding its release status:

To a tiny group of beta testers, 99% of which are Tesla employees, sure.

To the general public who purchased FSD- no.


I'm saying that FSD Beta is L4+ beta (really L5 but I was quoting a post).

But that is factually wrong.


There's nothing in the SAE spec that says that a prototype vehicle can never ask the safety driver to take over.

There absolutely is one that says L4 can never require a human to operate/fail safely. It can "let" one do it, but it must be able to operate without REQUIRING one to.

Which the current FSDbeta can not do.

It is not L4, by definition.



No, L4 prototype vehicles need safety drivers to operate safely on public roads.

There is no such thing as "prototype L4" in the SAE terminology.

It's either L4 or it's not. FSD Beta is not.


How do you know they haven't coded the software to achieve minimal risk condition?

Apart from the fact that we have Green reporting new code features, we have all the videos that clearly show it requiring a human take-over when it can't handle something.

So, for the 78th time, an L4 vehicle by definition cannot require that a human be able to take over.

It must, by definition, be capable of handling situations outside its ODD by itself.

And as a further reminder of this: SAE cares about design intent. When you turn on FSD Beta, it explicitly states it is not designed to be autonomous.

So it's not L4.

Not sure why you have such trouble with this.
 
There is no such thing as "prototype L4" in the SAE terminology.
Fine, I meant "test vehicle", need to be very specific when arguing with you. haha.
There absolutely is one that says L4 can never require a human to operate/fail safely. It can "let" one do it, but it must be able to operate without REQUIRING one to.
An L4 "test vehicle" can require a human to supervise it and take over as necessary (which you may have noticed that every autonomous vehicle company does).
Apart from the fact that we have Green reporting new code features, we have all the videos that clearly show it requiring a human take-over when it can't handle something.
Just like most L4 test vehicles. The system can hand over control to the safety driver; those get reported as disengagements to the CA DMV.
And as a further reminder of this- SAE cares about design intent. When you turn on FSDBeta it explicitly states it is not designed to be autonomous
Yes, I'm saying they are lying to avoid regulation. The CEO of the company stated that his intent is to use disengagement data analysis from FSD Beta to prove when the safety of the system is good enough to no longer require a safety driver. That sounds like intent to me.

The FSD Beta ODD is the entire US so I don't see the relevance of ODD.
 
Wide deployment of City Streets will likely depend on disengagement data and such, but it'll still be a Level 2 system requiring driver intervention. Even Level 3 is out of reach until the Object and Event Detection and Response system can at least give drivers advanced warning to take over before it hits something it can't deal with, and I'm not sure why Tesla would have lied to the California DMV about not expecting any significant enhancements there.

Level 5 seems light-years away.
 
Fine, I meant "test vehicle", need to be very specific when arguing with you. haha.

An L4 "test vehicle" can require a human to supervise it and take over as necessary (which you may have noticed that every autonomous vehicle company does).

Yes, an L4 test vehicle can.

If the DESIGN INTENT is for the system to be L4.

Tesla's city streets system's design intent is not.

You can tell, because it tells you the design intent when you activate it.


You keep ignoring their stated intent, both to the actual drivers and to government agencies... insisting they're lying, and implying it secretly has L4 failsafe capabilities on top (even though Tesla, in documents to said agencies, explicitly said this was not true).


All with exactly zero evidence to support any of your claims.


Why?


The FSD Beta ODD is the entire US so I don't see the relevance of ODD.

The "FSD Beta" is just the city streets code.

Outside of that ODD, it's running the exact same production code everyone else is.

Both of course running at L2.
 
If the DESIGN INTENT is for the system to be L4.
I think the design intent is L5 and the people actually testing FSD Beta seem to think so too. How would a warning message for an L5 test vehicle be any different? I would note the SAE says that L4-5 vehicles also don't need to be autonomous (and most aren't).

How was this person misled into thinking he was testing robotaxi software? Why didn't Elon correct him?
Why is Elon talking about generalized self-driving in a tweet about the release of FSD Beta 9? It's a mystery I suppose.

Why did Elon say Tesla is going to use disengagement data analysis of FSD Beta to prove when the system could be driverless? How could a system with the design intent to be driverless be L2?

Anyway, if the standard of proof is for Tesla to literally say "the design intent of FSD Beta is L5" then we will never agree.
 