Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Perhaps 1 week is the new 2 weeks 😂.

 
With all due respect, now you've lost me. Tesla could add more sensors to the point of overkill. They could add more compute power. They could also rely on hi-res mapping. Is your point that no vehicle company will achieve L5 autonomy without a ridiculous geofence?

First of all, nobody is suggesting adding sensors to the point of overkill. That is a strawman. I do believe you need adequate sensors and HD maps in order to achieve safe and reliable autonomous driving. And I do not believe 8 cameras that are only 1.2 MP and the current 144 TOPS FSD computer are adequate for safe and reliable autonomous driving. So I do think Tesla needs more sensors, more computing power and better maps than what they currently have but not to the point of overkill.

Second, if you geofence, then your autonomous driving is L4, not L5. So it is literally impossible to achieve L5 with geofencing. Please understand what the levels mean. Put simply, L4 is autonomous driving with some restrictions like geofencing, L5 is autonomous driving with no restrictions. So when Elon promises L5, he is promising a self-driving car that can drive anywhere, any time, in all conditions with no human intervention ever. Nobody has L5.
 
I did not say it was easy, I said it was doable. And there are others that have done it. I believe Blue Origin does it too.
A lot of this comes down to defining both what you want something to do, and constraints.

When both of those are tightly defined, engineers can generally pull off even difficult things, given the opportunity to fail and try again.

The problem with generalized autonomous driving is that there are no constraints. So you end up with a million different variables and all kinds of things you have no control over.

It's basically an impossible problem to solve unless you scale it down to things you can control, and things that can be dictated by regulations.

This is exactly what MB did to achieve L3 in Germany, and hopefully this year in Cali.

It might not be much, but at least it's a starting point. Something they can build from.
 
Let me interrupt this discussion for an important PSA.

I'm now selling real Moon vacations, including 5-star hotels, restaurants, and personalized facilities. Round-trip space flight/training included. We're expecting those to start at the end of next year.

The price now is only $12k, but it is sure to go way up very soon. Don't wait.

I now return you to the thread.
 
A lot of this comes down to defining both what you want something to do, and constraints.

When both of those are tightly defined, engineers can generally pull off even difficult things, given the opportunity to fail and try again.

The problem with generalized autonomous driving is that there are no constraints. So you end up with a million different variables and all kinds of things you have no control over.

It's basically an impossible problem to solve unless you scale it down to things you can control, and things that can be dictated by regulations.

This is exactly what MB did to achieve L3 in Germany, and hopefully this year in Cali.

It might not be much, but at least it's a starting point. Something they can build from.
It's remarkably akin to early computer vision attempts... pre-2012, if you wanted a computer to find a car in an image, you'd hand-write some set of feature detectors for bits and pieces of a car: the round shape of the wheels, the brake lights, the contour of the hood and roof, etc. Then you'd sort of sum everything up and tweak it until it had tolerable performance. It would be super brittle, break easily, and seem like an impossible task without constraining it. But nowadays that would be an incredibly foolish approach. Today you'd load up PyTorch and train a CNN to detect cars in an end-to-end fashion, and not fuss over individual features. You'd fuss over the dataset, fiddle with network parameters, and fix bugs, but you'd definitely not be concerned with individual features; you'd be designing the system in a holistic way.

Whoever comes along with the first fully autonomous car will probably do the same thing: they're definitely not going to fuss over the taxonomy of traffic cones, or the minutiae of lane markings and signs. They're not going to have dozens of discrete planners, state machines, and tree searches. They'll show up with the right network architecture, a massive dataset, and just a little bit of C++. Perception, prediction, and planning all in one. It'll be like the MuZero of driving. And then months later everyone will be able to make their own fully self-driving car in PyTorch just by following some tutorials on a web page. And all the billions spent at Waymo, Cruise, Zoox, Tesla FSD, and whoever else is left will have been completely obsolete, wasted effort. Just like the first attempts at computer vision are obsolete relics that no one cares about anymore.
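To make the end-to-end contrast above concrete, here is a toy PyTorch sketch (hypothetical `TinyCarNet` on random data, not a real detector): a tiny CNN trained to emit a "car present" logit, with no hand-written wheel or brake-light detectors anywhere.

```python
import torch
import torch.nn as nn

# Toy "is there a car?" classifier: no hand-crafted features,
# just a small CNN optimized end-to-end on labeled images.
class TinyCarNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global average pool -> (N, 16, 1, 1)
        )
        self.head = nn.Linear(16, 1)   # single logit: car vs. no car

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Dummy batch of 4 RGB 32x32 "images" with random labels, one SGD step.
model = TinyCarNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(4, 3, 32, 32)
y = torch.randint(0, 2, (4, 1)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
loss.backward()
opt.step()
```

All the engineering effort goes into the data, the architecture, and the training loop; the "features" are whatever the network learns.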
 
First of all, nobody is suggesting adding sensors to the point of overkill. That is a strawman. I do believe you need adequate sensors and HD maps in order to achieve safe and reliable autonomous driving. And I do not believe 8 cameras that are only 1.2 MP and the current 144 TOPS FSD computer are adequate for safe and reliable autonomous driving. So I do think Tesla needs more sensors, more computing power and better maps than what they currently have but not to the point of overkill.

Second, if you geofence, then your autonomous driving is L4, not L5. So it is literally impossible to achieve L5 with geofencing. Please understand what the levels mean. Put simply, L4 is autonomous driving with some restrictions like geofencing, L5 is autonomous driving with no restrictions. So when Elon promises L5, he is promising a self-driving car that can drive anywhere, any time, in all conditions with no human intervention ever. Nobody has L5.
How would you classify a completely autonomous vehicle if its autonomy was limited to California?
 
It's remarkably akin to early computer vision attempts... pre-2012, if you wanted a computer to find a car in an image, you'd hand-write some set of feature detectors for bits and pieces of a car: the round shape of the wheels, the brake lights, the contour of the hood and roof, etc. Then you'd sort of sum everything up and tweak it until it had tolerable performance. It would be super brittle, break easily, and seem like an impossible task without constraining it. But nowadays that would be an incredibly foolish approach. Today you'd load up PyTorch and train a CNN to detect cars in an end-to-end fashion, and not fuss over individual features. You'd fuss over the dataset, fiddle with network parameters, and fix bugs, but you'd definitely not be concerned with individual features; you'd be designing the system in a holistic way.

Whoever comes along with the first fully autonomous car will probably do the same thing: they're definitely not going to fuss over the taxonomy of traffic cones, or the minutiae of lane markings and signs. They're not going to have dozens of discrete planners, state machines, and tree searches. They'll show up with the right network architecture, a massive dataset, and just a little bit of C++. Perception, prediction, and planning all in one. It'll be like the MuZero of driving. And then months later everyone will be able to make their own fully self-driving car in PyTorch just by following some tutorials on a web page. And all the billions spent at Waymo, Cruise, Zoox, Tesla FSD, and whoever else is left will have been completely obsolete, wasted effort. Just like the first attempts at computer vision are obsolete relics that no one cares about anymore.

You seem to be describing end-to-end learning. Yes, there are companies trying this approach, but it has its own problems. One problem is that it is hard to troubleshoot. If your car does the wrong thing, you have no idea why. You basically have to retrain the entire NN until it handles that case correctly and then hope you did not introduce a new mistake.

Nobody is just going to "show up with the right network architecture" and then, months later, everybody can build their own self-driving car by following a tutorial on a webpage. IMO, that is ridiculously naive. If the end-to-end approach does work, it will require a truly massive NN. We are talking a true computer "brain": a single large deep NN that can understand the world and reason about how to drive in all the billions of cases. If it is even possible, it will take many years of work, massive amounts of data, and probably new AI techniques. It certainly won't be easy to solve FSD with end-to-end learning. The fact is that there is no shortcut to solving FSD. I have no doubt that many decades from now, when FSD is solved, we will laugh at how primitive FSD was in 2022. But the work of Waymo, Cruise, Zoox, Tesla etc. won't be wasted. Their work will have helped develop autonomous driving.
 
I was thinking (incorrectly) that L4 didn't require geofencing, just basically human takeover. Looks like that should/could change at some point, maybe in "two weeks" or by the "end of the year".


L4 does not require any human takeover. It is actually mentioned in the SAE definition:

The sustained and ODD-specific performance by an ADS of the entire DDT and DDT fallback without any expectation that a user will need to intervene.

In layman's terms, L4 is a system that can do all the driving without any human intervention but only in a very specific ODD.

L4 does not specifically require geofencing. Any limit on the ODD would be L4. So theoretically, you could have L4 that is not geofenced but limited in some other way, like speed or weather. So geofencing implies L4 but L4 does not necessarily imply geofencing. But geofencing is the dominant form of L4 today, so we have come to associate L4 with geofencing.

There are a couple of reasons why geofencing makes sense for L4 and why it works so well for ridesharing, as your quote says. L4 requires no human intervention, and it is easier to achieve a low disengagement rate in a geofenced area than in a non-geofenced one. It is also easier to test for edge cases in a geofenced area. And if you know your disengagement rate is low enough in a geofenced area, then you know you will be able to do complete trips in that area safely without a safety driver. At that point, you can have a driverless robotaxi pick customers up, drive them say 5-8 miles, and drop them off safely.

The challenge with non-geofenced L4 is that the vehicle might need to pull over and not complete the entire trip on its own. L4 cannot require human intervention, so it needs to pull over on its own if it encounters a problem. For example, you could have L4 that works everywhere in the US but is restricted to good weather. If you are on a long trip and the weather suddenly turns bad, the L4 would pull over and would not be able to complete the trip on its own. Obviously, autonomous driving that cannot complete an entire trip because it cannot handle some conditions is not really ideal. So I think, in practice, geofencing will likely be the main form of L4.
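The distinctions above — any ODD restriction makes it L4, a geofence is just one possible restriction, and leaving the ODD triggers a pull-over rather than a human handoff — can be sketched as a toy model (hypothetical names, not SAE's formal taxonomy or any vendor's logic):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ODD:
    """Operational Design Domain: restrictions on where/how the ADS may drive."""
    geofence: set = field(default_factory=set)   # empty set = no geofence
    max_speed_mph: Optional[float] = None
    good_weather_only: bool = False

    def restricted(self) -> bool:
        # Any restriction at all, geofence or otherwise.
        return bool(self.geofence) or self.max_speed_mph is not None or self.good_weather_only

    def contains(self, region: str, speed_mph: float, weather: str) -> bool:
        if self.geofence and region not in self.geofence:
            return False
        if self.max_speed_mph is not None and speed_mph > self.max_speed_mph:
            return False
        if self.good_weather_only and weather != "clear":
            return False
        return True

def sae_level(odd: ODD) -> int:
    # Geofencing implies L4, but so does any other ODD restriction;
    # no restrictions at all would be L5.
    return 4 if odd.restricted() else 5

def fallback(odd: ODD, region: str, speed_mph: float, weather: str) -> str:
    # L4 may never demand a human take over: leaving the ODD means the
    # vehicle pulls over on its own (a minimal-risk maneuver), not a handoff.
    return "keep driving" if odd.contains(region, speed_mph, weather) else "pull over"

ca_only = ODD(geofence={"California"}, good_weather_only=True)
print(sae_level(ca_only), fallback(ca_only, "Nevada", 55, "clear"))
```

By this model, the earlier question about a completely autonomous vehicle limited to California classifies as L4: the geofence is an ODD restriction, even with no other limits.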
 
I meant "just basically [allows] human takeover";) as in L4 MUST have steering and pedals in case needed, and you can manually drive the car. In L5 they are not required and NEVER needed.

Nope. It is a bit more complicated than that. L4 does not need a steering wheel or pedals if it is geofenced. The Zoox and Cruise Origin are examples of this. If the AV is designed to only do ride-hailing in a geofenced area that you know it can handle safely without any human intervention, then you don't need a steering wheel or pedals, and you can use remote assistance for the cases where human help is required. If the AV is designed to be a consumer car, then you would probably include a steering wheel and pedals so that the human has a way of controlling the car outside the L4 ODD or when the human just wants to drive manually. Likewise, an L5 consumer car can have a steering wheel and pedals in case the human wants to drive manually. So while a steering wheel and pedals are not required for L5, they could still be an option. That is why it is simplistic when people try to define L4 or L5 based solely on whether it has a steering wheel or pedals. Both L4 and L5 can have or lack a steering wheel/pedals depending on the specific application.
 
You seem to be describing end-to-end learning.
Yeah, it’s a fantasy that is being described.

Solving self-driving is not the same as running some images through PyTorch so you can feel good about the state of AI, or getting some cool pictures out of DALL•E.

Driving is not the same as creating abstract art.

It really is a very difficult trap to avoid, I guess. The man is right.
 
The L4 podcars (Cruise Origin, Zoox whateveritscalled, etc.) don't have steering wheels. Before the Origin, GM pitched an L4 version of the Bolt with no steering wheel.
Let me amend my horrendous mistake.

...as in L4 MUST have steering and pedals in case needed OR be accessible by a remote control team that can take over and drive the car.

I was more talking about personal cars than commercial taxis. VERY doubtful that any manufacturer would sell L4 cars to consumers without controls, since this would require them to have teams of remote operators at the ready 24/7. It is also extremely unlikely manufacturers would want the liability involved in remotely driving personally owned cars, much less the costs involved, which would have to be passed on to the owner.
 
...as in L4 MUST have steering and pedals in case needed OR be accessible by a remote control team that can take over and drive the car.
Waymo makes it very clear their remote monitors cannot drive the cars.

You're trying to create an L4/L5 difference that doesn't exist. L5 is just L4 without any restrictions based on geography or conditions. An L4 car could have driver controls which allow a human to drive it outside the ODD, but they're not required.

True L5 doesn't really even exist. Every AV will have some restrictions. But even a "Near L5" car could have a steering wheel for owners who enjoy driving once in a while. Why buy a L5 Plaid if you can't take it on Nurburgring, or test your skills on the drag strip?

On the other hand, many would buy a L4 car with no steering wheel even if it can't ford streams or drive through more than 8 inches of snow. After all, many buy cars today with steering wheels that can't do those things. Robotaxi L4s generally won't have steering wheels, though. The last thing they want is some 12 year old jumping in front and taking it on a joyride.
 