Welcome to Tesla Motors Club

There will be NO HW4 upgrade for HW3 owners

I rest my case. In the software world, feature complete does not mean the software is done. It will be done when it is reliable enough to be driverless.
You need to listen to more earnings calls and it will become clear to you that FSD Beta is a beta version of FSD. He talks all the time about the disengagement rate of FSD Beta improving to the point where you can remove the driver. The design intent of the system is driverless operation.
As you state, there are regulatory reasons for Tesla's lawyers to misclassify it as L2.
But it's not Tesla's lawyers classifying it as such, it's government agencies.

I just looked, and NHTSA also explicitly says FSD Beta is an L2 feature, not that it is a test of an L4 feature:
"FSD Beta is an SAE Level 2 driver support feature that can provide steering and braking/acceleration support to the driver under certain operating limitations. With FSD Beta, as with all SAE Level 2 driver support features, the driver is responsible for operation of the vehicle whenever the feature is engaged and must constantly supervise the feature and intervene (e.g., steer, brake or accelerate) as needed to maintain safe operation of the vehicle."
https://static.nhtsa.gov/odi/rcl/2023/RCLRPT-23V085-3451.PDF
 
But it's not Tesla's lawyers classifying it as such, it's government agencies.

I just looked, and NHTSA also explicitly says FSD Beta is an L2 feature, not that it is a test of an L4 feature:
"FSD Beta is an SAE Level 2 driver support feature that can provide steering and braking/acceleration support to the driver under certain operating limitations. With FSD Beta, as with all SAE Level 2 driver support features, the driver is responsible for operation of the vehicle whenever the feature is engaged and must constantly supervise the feature and intervene (e.g., steer, brake or accelerate) as needed to maintain safe operation of the vehicle."
https://static.nhtsa.gov/odi/rcl/2023/RCLRPT-23V085-3451.PDF
Yes, I’m saying that Tesla is reporting the SAE level incorrectly to avoid regulation.
We’ll see if Elon changes the design intent at investor day but he has been very consistent so far that the goal of FSD beta is to get disengagements low enough to remove the driver.
 
Yes, I’m saying that Tesla is reporting the SAE level incorrectly to avoid regulation.
We’ll see if Elon changes the design intent at investor day but he has been very consistent so far that the goal of FSD beta is to get disengagements low enough to remove the driver.
Do you think there is something wrong with SAE levels?
If I take a level 2 system and declare my (implausible) intent to upgrade the firmware to make it FSD, does that mean it is level 3-5, even though that is rather ridiculous?
 
Do you think there is something wrong with SAE levels?
If I take a level 2 system and declare my (implausible) intent to upgrade the firmware to make it FSD, does that mean it is level 3-5, even though that is rather ridiculous?
I think the SAE levels are fine. In your hypothetical case I would want to see that you were attempting to do drives without human control and working towards automating the entire dynamic driving task.

To me it’s obvious the FSD beta is the beta version of Tesla’s robotaxi software. Many people (including the head of the project!) say it’s getting close to being reliable enough to remove the driver. I do think that’s ridiculous though.
 
Do you think there is something wrong with SAE levels?
If I take a level 2 system and declare my (implausible) intent to upgrade the firmware to make it FSD, does that mean it is level 3-5, even though that is rather ridiculous?
I think the point here is that Tesla proper is saying one thing to regulators to keep them off their backs while at the same time Tesla's CEO is saying something completely different to the public to keep interest in the stock high.

There's nothing inherently wrong with the SAE levels - they set out metrics for classifying autonomous vehicle systems. They weren't made by bureaucrats or politicians, but by engineers - real engineers, mind you, and not crazy billionaire entrepreneurs that are self-titled "engineers." You don't have to be on a progression from one level to the next, but different levels represent different use cases for automation and thus hold different values for consumers.

For me, L3 on highways is all I've ever wanted. I will let my M3 drive me around town on FSDb in the name of "testing" or curiosity, but it certainly holds no utility for me as a feature of the car. L3 on highways, however, would be a tremendous autonomous driving feature in a car, reducing workload and fatigue and prolonging the period of my life where I can travel long distances alone.
 
I think the SAE levels are fine. In your hypothetical case I would want to see that you were attempting to do drives without human control and working towards automating the entire dynamic driving task.

To me it’s obvious the FSD beta is the beta version of Tesla’s robotaxi software. Many people (including the head of the project!) say it’s getting close to being reliable enough to remove the driver. I do think that’s ridiculous though.
The head of the project knows he better say that… if he wants to keep his job and not be fired on the spot.
 
TL;DR:
* No additional cameras in S/X/3/Y (for now)
* Cybertruck should have the extra cameras, as should the Semi
* S/X/3/Y to get additional cameras later in the year; S/X in a few months and 3/Y later
* May offer retrofits to add cameras for HW4 vehicles later
* HW3 -> HW4 still up in the air, but don't hold your breath
* HW4 software not ready yet (when have we heard this before), thus no delivered car has it

 
What is the metric for level 3-5? Intent. I wouldn't call that a metric.
Not sure I understand the question, nor am I clear that your definition of "metric" meets that of an engineer. Put simply, metrics are defined measurements. The SAE Levels of Driving Automation lay out a series of metrics that differentiate the levels. These include what portions of the dynamic driving task fall to the ADS and what portions are handled by a human driver, what the requirements for fallback are, what limitations exist in the ODD, etc.

If your question is "what makes an, e.g., L5 system over an L2 system," then, yes, "intent" is a factor, although I would couch it more as design. As such, IMO Tesla's FSD is still an L2 system, because I don't see anything that suggests that the ADS currently contains the features necessary for it to meet the metrics for L3+, e.g., fallback or no driver. Waymo, on the other hand, has had all that in their system for a while, and while they were heavily monitored and operated in a very limited ODD, it appeared that the "intent" of the system (i.e., what it was designed for) was L4 operation.
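The level distinctions above can be sketched as a small lookup table. To be clear, this is an illustrative simplification of SAE J3016 for discussion purposes, not the standard itself, and the column labels are my own shorthand:

```python
# Illustrative sketch of the SAE J3016 driving-automation levels.
# A simplification for discussion, not the normative standard.
SAE_LEVELS = {
    # level: (steering/accel, environment monitoring, fallback, ODD limited?)
    0: ("driver", "driver", "driver", True),
    1: ("shared", "driver", "driver", True),
    2: ("system", "driver", "driver", True),   # e.g., FSD Beta per NHTSA
    3: ("system", "system", "driver", True),   # driver must take over on request
    4: ("system", "system", "system", True),   # e.g., Waymo's geofenced service
    5: ("system", "system", "system", False),  # no ODD limitation
}

def fallback_party(level: int) -> str:
    """Return who is responsible when the system reaches its limits."""
    return SAE_LEVELS[level][2]

print(fallback_party(2))  # driver
print(fallback_party(4))  # system
```

The fallback column is the crux of the L2-vs-L4 argument in this thread: at L2 the human is the fallback by definition, regardless of stated design intent.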

EDIT: After reading your other posts, I think we may be saying the same thing. :)
 
Ignoring levels, which are a moving target: Tesla promised a robotaxi by now.
Again, the SAE levels are the exact opposite of a moving target, IMO. They are well defined descriptions of levels, types, or capabilities (depending on what word you like) of Autonomous Driving Systems (ADS).

The only "moving target" I see here is Elon's prognostications of what FSD is or will be - although I am fully aware he has used SAE L5 in the past to describe one goal. He says things like "the car will be able to find you in a parking lot, pick you up and take you all the way to your destination without an intervention" or "summon should work anywhere connected by land & not blocked by borders, eg [sp] you're in LA and the car is in NY," and those may sound like the same thing, but the former could be done by an L3 system (or an L2, for that matter) while the latter necessarily requires L5 features, thus the "moving target" for FSD.
 
Speaking of... when's the last time Elon referenced the Robotaxi and/or updated that promise? Seems like it's been a while...
We went from promises of 1 million on the road by end of 2019 to near-total silence on the topic from him...
To be fair, I think Elon's statement that "[Tesla] will have more than one million robotaxis on the road" by the end of 2019 was meant to convey that FSD would be feature-complete and fully functional (L5) and deployed to the entire fleet of one million cars by the end of the year (possibly in addition to a mobile app to support robotaxi functionality). His subsequent statements during Autonomy Day 2019 provided clarity to that remark (e.g., "A year from now, we'll have over a million cars with full self-driving, software... everything" and "These cars will be Level 5 autonomy with no geofence, which is a fancy way of saying they will be capable of driving themselves anywhere on the planet, under all possible conditions, with no limitations").

I also think it is becoming clear that neither Tesla nor Elon hold this out as a goal at this point. Instead there are rumors/hints of plans for a car specifically designed to be a robotaxi. What hardware and software platform will run this car is unclear, but it will require, IMO, an order-of-magnitude evolution of what FSD is today.
 
Yes, I’m saying that Tesla is reporting the SAE level incorrectly to avoid regulation.
We’ll see if Elon changes the design intent at investor day but he has been very consistent so far that the goal of FSD beta is to get disengagements low enough to remove the driver.
Disengagement data is an indicator of the user experience; I don't think it's an indicator of risk that would justify removing the human driver from behind the wheel unless it also involved geofencing the system to roads with many miles of experience, probably geofencing out inclement weather, and also considering interventions like people manually adjusting speed, etc.

If you let the system loose in a generalized state based on disengagement data that has selection bias built into it, I think that would quickly lead somewhere not good.

I definitely think Tesla wants this on public roads as a Level 2 ADAS for development purposes. The SAE designation is accurate, it is being used this way to feed development that will be the foundation for future iterations.
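The selection-bias concern can be illustrated with made-up numbers: drivers engage the system mostly where it works well, so an aggregate disengagement rate is dominated by easy miles. Every figure below is a hypothetical placeholder, not real data:

```python
# Hypothetical mileage and disengagement counts -- purely illustrative,
# not real Tesla data. Drivers choose where to engage the system, so
# the mileage mix is biased toward easy conditions.
miles = {"freeway": 9_000_000, "city": 900_000, "rain": 100_000}
disengagements = {"freeway": 90, "city": 450, "rain": 200}

# Aggregate rate pools all conditions together.
aggregate = sum(disengagements.values()) / sum(miles.values())
print(f"aggregate: 1 disengagement per {1 / aggregate:,.0f} miles")

# The per-condition rates tell a very different story.
for condition, m in miles.items():
    print(f"{condition:8s}: 1 per {m / disengagements[condition]:,.0f} miles")
```

With these invented numbers the aggregate looks roughly 7x better than the city rate simply because freeway miles dominate the denominator, which is why a generalized release based on the pooled figure would be misleading.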
 
Disengagement data is an indicator of the user experience; I don't think it's an indicator of risk that would justify removing the human driver from behind the wheel unless it also involved geofencing the system to roads with many miles of experience, probably geofencing out inclement weather, and also considering interventions like people manually adjusting speed, etc.

If you let the system loose in a generalized state based on disengagement data that has selection bias built into it, I think that would quickly lead somewhere not good.

I definitely think Tesla wants this on public roads as a Level 2 ADAS for development purposes. The SAE designation is accurate, it is being used this way to feed development that will be the foundation for future iterations.
Obviously Tesla doesn’t just look at raw disengagement rate. They’ll run counterfactual simulations just like everyone else does.

"We've also got to make it work and then demonstrate that if the reliability is significantly in excess of the average human driver or to be allowed... um... you know for before people to be able to use it without... uh... paying attention to the road... um... but i think we have a massive fleet so it will be I think... uh... straightforward to make the argument on statistical grounds just based on the number of interventions, you know, or especially in events that would result in a crash. At scale we think we'll have billions of miles of travel to be able to show that it is, you know, the safety of the car with the autopilot on is a 100 percent or 200 percent or more safer than the average human driver"

Has Elon ever said that they’re trying to improve the “user” experience of FSD beta? I’ve only heard him say that they’re trying to get to driverless which is a different goal.
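The "statistical grounds" argument in the quote above could, in principle, look something like this sketch: compare the fleet's critical-event rate, with a crude Poisson-style upper bound, against an assumed human crash rate. All numbers are hypothetical placeholders, not real Tesla or NHTSA data:

```python
import math

# Hypothetical placeholder numbers -- not real Tesla or NHTSA data.
fleet_miles = 1e9               # miles driven with the system engaged
critical_events = 1000          # interventions that plausibly avoided a crash
human_crash_rate = 1 / 500_000  # assumed human baseline: 1 crash per 500k miles

system_rate = critical_events / fleet_miles

# Crude 95% upper bound on the system's event rate
# (normal approximation to the Poisson distribution).
upper_bound = (critical_events + 1.96 * math.sqrt(critical_events)) / fleet_miles

print(f"system rate:     {system_rate:.2e} events/mile")
print(f"95% upper bound: {upper_bound:.2e} events/mile")
print(f"safety multiple vs. human: {human_crash_rate / upper_bound:.1f}x")
```

With these invented numbers the fleet looks about 1.9x safer than the assumed human baseline even at the pessimistic end of the interval; the real difficulty, as the posts above note, is whether counted "events" are actually comparable to crashes and whether the mileage mix is representative.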
 
Obviously Tesla doesn’t just look at raw disengagement rate. They’ll run counterfactual simulations just like everyone else does.

"We've also got to make it work and then demonstrate that if the reliability is significantly in excess of the average human driver or to be allowed... um... you know for before people to be able to use it without... uh... paying attention to the road... um... but i think we have a massive fleet so it will be I think... uh... straightforward to make the argument on statistical grounds just based on the number of interventions, you know, or especially in events that would result in a crash. At scale we think we'll have billions of miles of travel to be able to show that it is, you know, the safety of the car with the autopilot on is a 100 percent or 200 percent or more safer than the average human driver"

Has Elon ever said that they’re trying to improve the “user” experience of FSD beta? I’ve only heard him say that they’re trying to get to driverless which is a different goal.
I'm looking more at what we know was said privately. I've never heard or seen Tesla/Elon publicly talk about Autosteer on City Streets, I've never heard them publicly use the correct terminology, I've only seen it in leaked correspondence.

The Camera feedback button and intervention/disengagement data etc. are there to improve the user experience. In some cases they'll even send out dedicated teams, as in the case of Chuck's unprotected left turn -- it sounds like this stems from what Teslascope has relayed from employees, where they started monitoring Twitter etc. to improve the user experience.

HW4 and HW5 are examples of further iterative processes, and who knows how many iterations will be required.
 
I've never heard or seen Tesla/Elon publicly talk about Autosteer on City Streets
Because it's a fiction created to satisfy regulators. :p
Chuck's left turn is a perfect example of why FSD beta is L5 and not L2. Doing an unprotected left turn there with a "driver assist" system will always be a horrible user experience. I guarantee you that Chuck is far more relaxed doing the turn manually. You can watch him and even when the car is not moving it's a high stress situation. Reducing the disengagement rate doesn't really improve the user experience as you still have to respond instantly when the system makes an error.
 
Not sure I understand the question, nor am I clear that your definition of "metric" meets that of an engineer. Put simply, metrics are defined measurements. The SAE Levels of Driving Automation lay out a series of metrics that differentiate the levels. These include what portions of the dynamic driving task fall to the ADS and what portions are handled by a human driver, what the requirements for fallback are, what limitations exist in the ODD, etc.

If your question is "what makes an, e.g., L5 system over an L2 system," then, yes, "intent" is a factor, although I would couch it more as design. As such, IMO Tesla's FSD is still an L2 system, because I don't see anything that suggests that the ADS currently contains the features necessary for it to meet the metrics for L3+, e.g., fallback or no driver. Waymo, on the other hand, has had all that in their system for a while, and while they were heavily monitored and operated in a very limited ODD, it appeared that the "intent" of the system (i.e., what it was designed for) was L4 operation.

EDIT: After reading your other posts, I think we may be saying the same thing. :)
Yes, you are in agreement on those points actually. @Daniel in SD instead is arguing that FSD Beta in its current form is an L4 system under test (like Waymo, for example) simply because Elon says he plans to eventually release an L3+ feature in the future on this hardware.
 