
Autonomous Car Progress

So the approach is basically: Let’s develop a system that tries to do everything

This part is Tesla's approach in a nutshell, yes. Start by doing everything poorly, and then some things less poorly, and then eventually, all things decently.

They might bypass L3 altogether and wait until it can also achieve a minimal risk condition when it identifies it cannot safely drive.

Either way, again, I'm confident in saying the point along that timeline where they are now does not represent literally zero progress toward autonomy.
 
  • Like
Reactions: diplomat33
I can foresee a time when the HW3/4 system changes the Level (1-3) dynamically based on the detected driving conditions and the capabilities of the software, with the current Level shown prominently on the display at any given time, e.g. Level 3 in optimal conditions (highway on a bright sunny day?), scaling down to 2 and even 1 when conditions deteriorate.
 
I can foresee a time when the HW3/4 system changes the Level (1-3) dynamically based on the detected driving conditions and the capabilities of the software, with the current Level shown prominently on the display at any given time, e.g. Level 3 in optimal conditions (highway on a bright sunny day?), scaling down to 2 and even 1 when conditions deteriorate.
Mercedes explicitly did not do things this way because it can lead to mode confusion. Basically, if the car falls back to Level 2 seamlessly, that greatly increases the chance the driver does not notice. As such, you can only go up in levels, not down. When L3 disables, it disables completely and does not fall back to L2.
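To make the no-fallback idea concrete, here is a toy sketch of an escalate-only mode policy (names and structure are mine, not Mercedes' actual implementation): the system can move from L2 up to L3 when the ODD allows it, but leaving the ODD while in L3 ends in a full handover, never a silent drop back to L2.

```python
from enum import Enum

class Mode(Enum):
    MANUAL = 0        # driver in full control
    L2_ASSIST = 2     # driver supervises, hands on wheel
    L3_AUTOMATED = 3  # system responsible within its ODD

def next_mode(current: Mode, odd_satisfied: bool, driver_wants_l3: bool) -> Mode:
    """Escalate-only transition rule (illustrative, not Mercedes' code).

    From L3 the system either stays in L3 or requests a full takeover
    (modeled here as a drop to MANUAL); it never quietly degrades to L2,
    which is the mode-confusion scenario described above.
    """
    if current is Mode.L3_AUTOMATED:
        return Mode.L3_AUTOMATED if odd_satisfied else Mode.MANUAL
    if current is Mode.L2_ASSIST and odd_satisfied and driver_wants_l3:
        return Mode.L3_AUTOMATED
    return current  # never a downward transition into L2

# Example: leaving the ODD in L3 hands control back entirely.
assert next_mode(Mode.L3_AUTOMATED, odd_satisfied=False, driver_wants_l3=True) is Mode.MANUAL
```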
 
This part is Tesla's approach in a nutshell, yes. Start by doing everything poorly, and then some things less poorly, and then eventually, all things decently.

They might bypass L3 altogether and wait until it can also achieve a minimal risk condition when it identifies it cannot safely drive.

Either way, again, I'm confident in saying the point along that timeline where they are now does not represent literally zero progress toward autonomy.
This approach is guaranteed to maximize time to market for a system. The most likely outcome is that you cannot ever get to autonomy in a meaningful or coherent ODD.

At the end of the day, the science isn’t there yet. You simply cannot, at this point in time, get computer vision alone to the reliability levels needed. There are simply too many failure modes in a useful ODD and we’re 1000x or more away from autonomy.

I see a system that seems over-reliant on maps and will make a turn into whatever the map tells it. It will stop at a stop sign on the map and adjust the speed according to the map, and so on. Crowd-sourced maps will give the illusion of better performance, but as soon as there is a change, you’ll see failure.

I’ll credit you with the blind trust, but I think you’re completely wrong. I think FSD on HW3 and HW4 is a perpetual L2. That’s where all the indicators point. And regardless of what one believes the strategy is, it seems years and years away from autonomous operation in any meaningful ODD.

A real stress test will come when a competitor launches highway-speed L3, if that happens this decade. And another if Cruise and/or Waymo deploys at scale.
 
Last edited:
All you had to say in response was: "I believe Tesla has made zero progress toward autonomy." But thanks for confirming that you're incapable of having a coherent discussion about this. You don't care about facts, you just hate Elon Musk.
Tesla FSD/robotaxis has become a cult religion. And as is typical of religions, facts don’t matter or are explained away.
If, & it’s a big if, Tesla were ever to establish a robotaxi service (they still don’t have a permit), there would be so many competitors that it would be next to profitless.
 
This approach is guaranteed to maximize time to market for a system. The most likely outcome is that you cannot ever get to autonomy in a meaningful or coherent ODD.

At the end of the day, the science isn’t there yet. You simply cannot, at this point in time, get computer vision alone to the reliability levels needed. There are simply too many failure modes in a useful ODD and we’re 1000x or more away from autonomy.

I see a system that seems over-reliant on maps and will make a turn into whatever the map tells it. It will stop at a stop sign on the map and adjust the speed according to the map, and so on. Crowd-sourced maps will give the illusion of better performance, but as soon as there is a change, you’ll see failure.

I’ll credit you with the blind trust, but I think you’re completely wrong. I think FSD on HW3 and HW4 is a perpetual L2. That’s where all the indicators point. And regardless of what one believes the strategy is, it seems years and years away from autonomous operation in any meaningful ODD.

A real stress test will come when a competitor launches highway-speed L3, if that happens this decade. And another if Cruise and/or Waymo deploys at scale.



What specific obstacles do you see to a useful ODD at L3 on divided/controlled-access highways with, say, HW4 and the HW4 FSD computer?




if, Tesla were ever to establish a robotaxi service (they still don’t have a permit),

What "permit" do you imagine they need to be able to introduce Robotaxis in many places?


(Disclaimer: I don't think Tesla is anywhere near the system being capable of being a robotaxi, but the idea that they need "regulators to approve it to do it at all" simply is not true.)
 
  • Like
Reactions: willow_hiller
Elmo promised robotaxis. There won't be any robotaxis, no matter how much hopium one chugs.
You hit the nail on the head here. (At least for the next decade or two.) What people who are over-hyped on Tesla fail to realize is just how hard a problem this is to solve.

First, let's be clear: to have a robotaxi you need full Level 5 autonomy. The car needs to be able to drive itself without a human. That means self-driving needs to be completed to a 100% state. Yes, that means every single error that FSD has needs to be solved, with backup solutions. Software is an exponential curve: the last 5 or 10% of the solution is typically over half of the work. The simple reality, as anyone with real software experience can tell you, is that FSD is less than halfway to 100% solved. Candidly, it is probably less than a quarter of the way there.

Second, there are a number of non-technical issues that need to be resolved to have driverless cars. Liability is the big one: who is responsible for the car's driving? I had a driver run a red light and total my car, with a witness and a police report, and their insurance denied responsibility for 10 months. Can you imagine if one of the cars had no driver? What happens if the car crosses state lines? Who gets sued if someone gets hurt or killed? These are all questions that need answers before any driverless car is a reality. Keep in mind this is the easiest part of the problem!

Picture another decade passing without FSD being "finished" and all the promises going unfulfilled. How will owners/investors react? Now picture yet another decade passing. Meanwhile, other competitors who have opted for mapped options in specific areas already have their products out in the wild. Now picture yet another decade passing... at what point do investors file a class-action lawsuit against Tesla?
 
First, let's be clear: to have a robotaxi you need full Level 5 autonomy. The car needs to be able to drive itself without a human.

L4 is where the car can drive itself without a human, not L5.

Indeed, the many companies operating robotaxis to the public today are all L4.

L5 is "better" in that you could take a taxi from one city to another, but you don't need L5 to have a robotaxi "at all," as folks like Waymo are currently demonstrating.


Second, there are a number of non-technical issues that need to be resolved to have driverless cars. Liability is the big one: who is responsible for the car's driving?

For current RTs that's quite simple. If Waymo causes an accident, Waymo is liable.

I had a driver run a red light and total my car, with a witness and a police report, and their insurance denied responsibility for 10 months. Can you imagine if one of the cars had no driver?

It'd be even simpler with RTs, since they'd have 360-degree video and sensor footage of the entire thing.

Fault would be MUCH easier to ascribe to the RT or the other vehicle than in normal cases.


What happens if the car crosses state lines?

For liability? Why would that matter? A car insured in one state has the same insurance in another state.

There ARE different regulations on OPERATING self-driving vehicles from one state to the next (those would need to be respected), but those rarely (and only slowly) change, so it'd be easy enough to program those in. If, say, Kansas decides to ban self-driving cars, then your self-driving car simply will refuse to operate in Kansas.
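To illustrate "program those in" (the state codes and rules below are invented for the example, not a real regulatory database), a per-state lookup plus a refusal to engage anywhere not explicitly allowed is about all it takes:

```python
# Hypothetical rules table; the entries are made up for illustration.
STATE_AV_RULES = {
    "CA": {"driverless_allowed": True,  "permit_required": True},
    "AZ": {"driverless_allowed": True,  "permit_required": False},
    "KS": {"driverless_allowed": False, "permit_required": False},  # the hypothetical ban above
}

def may_operate_driverless(state_code: str) -> bool:
    """Refuse driverless operation in any state that bans it, or that
    we have no rule entry for (fail closed)."""
    rules = STATE_AV_RULES.get(state_code)
    return bool(rules and rules["driverless_allowed"])

print(may_operate_driverless("KS"))  # False: the car won't engage there
```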



Who gets sued if someone gets hurt or killed?

The same entity that gets sued today: whoever is responsible for the injury or death. Waymo, for example, in the case of one of their RTs.



That's not to say there aren't complexities that need to be worked out... just not most of the ones you've raised.
 
What specific obstacles do you see to a useful ODD at L3 on divided/controlled-access highways with, say, HW4 and the HW4 FSD computer?
To me, the main obstacle is the brittle/unreliable computer-vision range estimates and object detection. The second is dealing with highway speeds: you need to see quite far ahead at highway speed to handle the fallback procedure, and I doubt HW4 has the range (and the reliability at that range). My understanding is that HW4 uses two cameras with the same type of lens, whereas HW3 uses three different lenses (long, mid, short).
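Some rough numbers on why range matters at highway speed (my own assumptions: roughly 10 seconds for the takeover/fallback sequence, which is in the ballpark of what L3 rules like UNECE R157 allow, and a comfortable 3 m/s² braking to a stop):

```python
def required_detection_range(speed_kph: float,
                             takeover_s: float = 10.0,
                             decel_mps2: float = 3.0) -> float:
    """Crude lower bound on forward perception range for highway L3:
    distance covered while the takeover/fallback plays out, plus the
    distance needed to brake to a stop. Parameters are assumptions for
    illustration, not Tesla or regulatory figures."""
    v = speed_kph / 3.6                       # km/h -> m/s
    coasting = v * takeover_s                 # travelled during the takeover window
    braking = v ** 2 / (2.0 * decel_mps2)     # distance to stop at constant decel
    return coasting + braking

print(round(required_detection_range(130)))  # ~578 m for a stationary obstacle ahead
```

Even with friendlier assumptions you land in the several-hundred-metre range, which is why the camera focal-length question matters.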

Lidar has considerably lower latency than CV+ML, and its reliability is unmatched.

They might be able to do something similar to the existing MB DrivePilot L3 at low speeds, but I seriously doubt that they would. There is little upside for Tesla in selling a package similar to DrivePilot at double the cost AND having to take on liability, just to be on par with MB.

If we assume that CV gets to near 100% at some point in the future, I still think that entering/exiting tunnels, low sun conditions and oncoming cars will blind the cameras enough for it to be unsafe in these conditions, but perhaps that can be mitigated with ODD boundaries.

Given the specs and analysis I've read of the HW4 radar, it looks like another cost-focused move: Tearing-down Tesla’s in-house radar design – Why did they bother? | Ghost

It will be interesting to see if the 3/Y refreshes get the radar or not, and if Tesla can use it for autonomy. I doubt it; I think it's too low-spec.
I'm seeing Tesla still prioritising unit cost over autonomy at basically every turn. HW4 with radar is probably cheaper for Tesla than the OG HW3 spec with radar at similar volumes.

edit:
The whole point of paying 8k+ USD for FSD was to be assured of the hardware upgrades needed for autonomy. That turned out to be another "misunderstanding" from the customers' perspective, and I think the price of FSD will drop to perhaps half of what it is now in a year or two.


2016: "We are excited to announce that, as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver. "


"The validation required for full autonomy will still take some more time, but Musk said on a call that it’s actually already looking like it’ll be at least two times as safe as human driving based on existing testing."

"Musk said in a conference call in August regarding Tesla’s advancements in creating a car with Level 4 autonomous capability that “what we’ve got will blow people’s minds, it blows my mind,” and added that “it’ll come sooner than people think.” He’s certainly delivered with today’s announcement."

This was seven years ago. He's doing the same number SEVEN YEARS later, and it's getting pretty stale, tbh. And that some people still fall for this just blows MY mind. Tesla should be forced to pay all FSD customers back with interest; instead the circus goes on and new people get scammed every year.
 
Last edited:
  • Like
Reactions: Bladerskb
It's very odd to me that you believe autonomous vehicles just pop into existence immediately achieving 1000-5000 miles between disengagements. Every AV company today started with safety drivers behind the wheel.

Most AV companies start with a pretty robust system with high redundancy (sensor cleaning, cameras, radar, lidar, HD maps, etc.). This redundancy means that their AVs tend to start early on with a pretty decent disengagement rate. For example, in 2015, Waymo was already at 0.64 disengagements per 1,000 miles, or 1,562 miles per disengagement (see graph below). But 1,562 miles per disengagement is not good enough to remove the safety driver. So yes, even with 1,562 miles per disengagement, Waymo still required a safety driver. AV companies will keep a safety driver until their miles per disengagement is much higher. Waymo waited until they were at over 10,000 miles per disengagement before they launched their first service in Chandler.
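Since these figures get quoted both ways, here is the trivial conversion behind the 1,562 number above:

```python
def miles_per_disengagement(diseng_per_1000_mi: float) -> float:
    """Convert a rate quoted as disengagements per 1,000 miles into
    miles driven per disengagement."""
    return 1000.0 / diseng_per_1000_mi

print(miles_per_disengagement(0.64))  # 1562.5, i.e. the ~1,562 figure above
print(1000.0 / 10_000)                # >10,000 mi/disengagement means <0.1 per 1,000 mi
```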

[Graph: Waymo disengagements per 1,000 miles by year]
 
  • Like
Reactions: spacecoin
Most AV companies start with a pretty robust system with high redundancy (sensor cleaning, cameras, radar, lidar, HD maps, etc.). This redundancy means that their AVs tend to start early on with a pretty decent disengagement rate.

I wouldn't necessarily conflate redundancy with accuracy or performance. HW3 does suffer from poor sensor redundancy, so if a camera is blinded, it would need to pull over; but that doesn't mean that the performance of Tesla's perception is poor. In fact, I think FSD Beta's strongest suit right now is the quality of perception; and their weakest suit is driving policy that utilizes that perception data.

I've never seen a recent version of FSD Beta fail to perceive and render a stop sign. But I have seen the rare cases where it fails to stop for one.
 
I wouldn't necessarily conflate redundancy with accuracy or performance. HW3 does suffer from poor sensor redundancy, so if a camera is blinded, it would need to pull over; but that doesn't mean that the performance of Tesla's perception is poor. In fact, I think FSD Beta's strongest suit right now is the quality of perception; and their weakest suit is driving policy that utilizes that perception data.

I've never seen a recent version of FSD Beta fail to perceive and render a stop sign. But I have seen the rare cases where it fails to stop for one.
HW3 suffers from poor perception, performance and object kinematic estimates. Maybe one of the most obvious and problematic cases is cross-flow traffic. In spite of release notes claiming large % improvements, roadway performance isn't improving. Side-view perception and processing are seriously lacking as well. There's no way to sugar-coat HW3.
 
HW3 suffers from poor perception, performance and object kinematic estimates. Maybe one of the most obvious and problematic cases is cross-flow traffic. In spite of release notes claiming large % improvements, roadway performance isn't improving. Side-view perception and processing are seriously lacking as well.

How are you differentiating the perception performance from the planning/execution performance? In every FSD Beta video I've watched, especially those that have high-quality visualization outputs, I've never seen a noticeable deviation of the visualized traffic from its real apparent position. If static and dynamic perception were as bad as you say, I think we'd see obvious discrepancies in the visuals.
 
  • Like
Reactions: JB47394
Looks like Cruise stalled due to the fire hose on the road.

We can't see around the corner, but presumably there's also a firetruck with its emergency lights active within view of the Cruise vehicle? The fire hose is so flat that I can't imagine it being picked up as an obstacle, but maybe Cruise freezes in place on purpose whenever it sees emergency lights.
 
  • Like
Reactions: diplomat33
We can't see around the corner, but presumably there's also a firetruck with its emergency lights active within view of the Cruise vehicle? The fire hose is so flat that I can't imagine it being picked up as an obstacle, but maybe Cruise freezes in place on purpose whenever it sees emergency lights.

Yeah. In any case, I can see how this "stall" could be problematic. The Cruise is just sitting there, impeding traffic.
 
Looks like Cruise stalled due to the fire hose on the road. We see a Waymo go around the stalled Cruise:

There was a previous incident where a Cruise got its window smashed because it was going to run over an active fire hose. The fire department also complained about a case where one did run over a hose. It's possible Cruise has programmed the car to be more cautious and stop even if it is not sure whether the hose is active.
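If so, the logic could be as blunt as something like this (purely a guess at the shape of such a rule, not Cruise's actual code): treat any detected hose near an active emergency scene as a hard stop unless the system is confident the scene is clear.

```python
def should_proceed(emergency_scene_detected: bool,
                   hose_detected: bool,
                   scene_clear_confidence: float) -> bool:
    """Illustrative conservative gate: near a possible emergency scene,
    stop whenever a hose is detected (charged or not) unless the system
    is highly confident the scene is clear. Threshold is invented."""
    if hose_detected and emergency_scene_detected:
        return False                     # hold position, possibly call remote assistance
    if emergency_scene_detected and scene_clear_confidence < 0.9:
        return False
    return True

print(should_proceed(True, True, 0.99))  # False: hose at an emergency scene -> hold
```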
 
  • Like
Reactions: diplomat33
Last Friday Rafaela Vasquez pled guilty to endangerment, making her the first human to be held criminally responsible for a death caused by an autonomous vehicle. Vasquez was initially charged with negligent homicide (facing up to 8 years in prison), while Uber faced no criminal charges at all.

"Like Large Language Models, driving automation is a technology that draws us into a fantasy, thinking a vehicle “can drive itself” after a few minutes of plausible performance."