FSD rewrite will go out on Oct 20 to limited beta

Most of y'all seem to be confused about level 3 like me before diplomat33 and some others educated me.

With L3, you can define your own operational domain. For example, the following is level 3 (similar to the magical Mercedes level 3):

One lane driving on highway only
Under 25 mph
Requires follow car
Clear lane markings only
Not rainy or snowy weather
5 seconds to take over
If car stops for 10 seconds, driver has 5 seconds to take over

DRIVE PILOT | Daimler
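An ODD that narrow is basically a checklist. A minimal sketch of what the gating logic might look like, with made-up field names and thresholds (not Daimler's actual spec):

```python
from dataclasses import dataclass

@dataclass
class DrivingConditions:
    on_highway: bool
    speed_mph: float
    lead_car_present: bool      # the "follow car" requirement
    lane_markings_clear: bool
    precipitation: bool

def within_odd(c: DrivingConditions) -> bool:
    """True only while every condition of the narrow L3 domain holds."""
    return (
        c.on_highway
        and c.speed_mph < 25
        and c.lead_car_present
        and c.lane_markings_clear
        and not c.precipitation
    )

# The moment within_odd() goes False, the system issues a takeover request
# and the driver gets the quoted 5-second window to respond.
```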
 
It also seems like a fairly simple situation that the FSD beta would have had no problems maneuvering itself.
I bet the Cruise vehicle could too, 99.9% of the time. Obviously with a safety driver in the vehicle you let the vehicle decide what to do but without one they've got to phone home to achieve safer than human operation.
What if the construction worker is giving hand directions? Lots of possible problems that could be missed by the software.
 
I bet the Cruise vehicle could too, 99.9% of the time. Obviously with a safety driver in the vehicle you let the vehicle decide what to do but without one they've got to phone home to achieve safer than human operation.

But the Cruise vehicle did the stupidest thing. It went into the oncoming traffic lane, then figured it doesn't know what to do, so it stopped there to phone home, lol!
 
Most of y'all seem to be confused about level 3 like me before diplomat33 and some others educated me.

With L3, you can define your own operational domain. For example, the following is level 3 (similar to the magical Mercedes level 3):

One lane driving on highway only
Under 25 mph
Requires follow car
Clear lane markings only
Not rainy or snowy weather
5 seconds to take over
If car stops for 10 seconds, driver has 5 seconds to take over

DRIVE PILOT | Daimler


You mean the same Mercedes that just announced it was abandoning its efforts at autonomous driving? :)

But yes- it's possible to define an L3 operational domain so narrow nobody will actually care you're "developing" it for a while until you give up on it.

Amusingly, if they could have added one feature, it would've been L4 technically- "Can pull over safely on its own"

L4 can fail safely without ever needing a human; L3 can't.
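Put differently, the fallback is the whole difference; a simplified sketch (levels and actions reduced to the bare minimum for illustration):

```python
from enum import Enum, auto

class Fallback(Enum):
    TAKEOVER_REQUEST = auto()       # L3: alert the human, who must respond in time
    MINIMAL_RISK_MANEUVER = auto()  # L4: pull over / stop safely with no human at all

def on_failure(sae_level: int) -> Fallback:
    """Simplified: what the system does when it can no longer drive."""
    return Fallback.MINIMAL_RISK_MANEUVER if sae_level >= 4 else Fallback.TAKEOVER_REQUEST
```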
 
You mean the same Mercedes that just announced it was abandoning its efforts at autonomous driving? :)

But yes- it's possible to define an L3 operational domain so narrow nobody will actually care you're "developing" it for a while until you give up on it.

Amusingly, if they could have added one feature, it would've been L4 technically- "Can pull over safely on its own"

Almost all the autonomy developers are hooked on publicity stunts, very little substance. They don't actually believe autonomy is coming soon.
 
But the Cruise vehicle did the stupidest thing. It went into the oncoming traffic lane, then figured it doesn't know what to do, so it stopped there to phone home, lol!
Maybe, maybe it was trying to get a good view of what was going on.
Computers are stupid, that's why the problem is so hard! I'm excited to see how their deployment goes, it's a way more difficult area than Waymo is attempting.
 
Most of y'all seem to be confused about level 3 like me before diplomat33 and some others educated me.

With L3, you can define your own operational domain. For example, the following is level 3 (similar to the magical Mercedes level 3):

One lane driving on highway only
Under 25 mph
Requires follow car
Clear lane markings only
Not rainy or snowy weather
5 seconds to take over
If car stops for 10 seconds, driver has 5 seconds to take over

DRIVE PILOT | Daimler

Don't you mean the magical Audi A8 L3?

Speaking of the A8 I don't know if it ever actually got released in Germany with L3 enabled. I haven't heard much about it since it was supposedly going to be the first L3 vehicle.

The other issue with L3 is regulatory, and it's likely that regulators will push additional requirements on it. For example, they'll likely force driver monitoring to make sure the driver is responsive and isn't falling asleep.

This is precisely why I think it's pointless to talk about L3. I find it odd that it comes up so much on a Tesla forum when no one from Elon/Tesla has ever mentioned targeting L3 in any shape or form.
 
Maybe, maybe it was trying to get a good view of what was going on.
Computers are stupid, that's why the problem is so hard! I'm excited to see how their deployment goes, it's a way more difficult area than Waymo is attempting.

If you look at what happened in more detail, you'd see how stupid the Cruise driving policy / path planning is. The car got confused NOT b/c it recognized a construction zone or anything like that. If you look at the path, it got confused b/c a car was in its intended path and it couldn't figure out that it needed to go around it. So what the remote operator did was draw a path around the stopped car... Either that or the Cruise visualization sucks.
 
Don't you mean the magical Audi A8 L3?

Speaking of the A8 I don't know if it ever actually got released in Germany with L3 enabled. I haven't heard much about it since it was supposedly going to be the first L3 vehicle.

The other issue with L3 is regulatory, and it's likely that regulators will push additional requirements on it. For example, they'll likely force driver monitoring to make sure the driver is responsive and isn't falling asleep.

This is precisely why I think it's pointless to talk about L3. I find it odd that it comes up so much on a Tesla forum when no one from Elon/Tesla has ever mentioned targeting L3 in any shape or form.
The Audi system was never released anywhere. As far as I know, Mercedes and Hyundai still plan on releasing systems. Not sure if Mercedes' recent cancellation of their autonomous vehicle program changes that.
I think L3 comes up sometimes because people have no idea what it is and assume that there is some sort of progression between SAE automation levels. To me it seems like the only plausible thing that can be achieved with the existing hardware and California owners would really like an interstate highway L3 system. I agree that there is zero evidence that Tesla has any interest in it. If other car companies ever do it then maybe they'll change their minds.
 
If you look at what happened in more detail, you'd see how stupid the Cruise driving policy / path planning is. The car got confused NOT b/c it recognized a construction zone or anything like that. If you look at the path, it got confused b/c a car was in its intended path and it couldn't figure out that it needed to go around it. So what the remote operator did was draw a path around the stopped car... Either that or the Cruise visualization sucks.
Agreed, the path planning looks like it wants to go behind the truck but that would be the first choice of a human driver as well. You can see right after that the path changes to a faint line going around the truck. I assume the faint line means that it has low confidence and it stays that way for many frames (this is 5x speed). Then when the remote assistant confirms it, it changes to a bright line.
[Attached screenshots: Screen Shot 2020-10-27 at 8.32.29 PM, 8.30.09 PM, 8.32.53 PM]
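If that reading is right, the behavior amounts to a confidence gate on the planner's best path. A rough sketch of the idea with made-up names and thresholds (a guess at what the visualization shows, not Cruise's actual code):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CandidatePath:
    waypoints: List[Tuple[float, float]]  # (x, y) points in the car's frame
    confidence: float                     # 0.0 - 1.0 planner confidence

CONFIRM_THRESHOLD = 0.8  # made-up value

def request_remote_confirmation(path: CandidatePath) -> CandidatePath:
    # Stand-in for the remote assistant confirming (or redrawing) the path.
    path.confidence = 1.0
    return path

def choose_path(candidates: List[CandidatePath]) -> CandidatePath:
    best = max(candidates, key=lambda p: p.confidence)
    if best.confidence < CONFIRM_THRESHOLD:
        # "Faint line" case: hold position and wait for confirmation
        # before committing to the maneuver.
        best = request_remote_confirmation(best)
    return best  # "bright line": path the car will actually follow
```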
 
In that scenario, it could be argued that L5 isn’t “anything a human can do”
It seems to me that people are no longer defining L5 as human capability in any situation

The problem with L5 is what is human capability anyways?

Like I can drive in adverse weather conditions better than some other drivers, but not as well as some really good drivers.

So who's the human driver? Some average? If it's some average, then is it the average for the nation? That would make developing an autonomous car for the German market a fair bit more difficult than for the US, as they have better drivers.

The other problem is safety thresholds. As a human driver my safety varies with my familiarity. I'm not a particularly safe driver near my house (this is pretty common), but I'm also not a particularly safe driver in cities that I'm unfamiliar with. I'm not because sometimes I miss visual cues, and I find myself cursing myself for not seeing something. I'm certainly not alone in this, as I see it fairly often, especially on vehicles with out-of-state plates.

Are safety thresholds allowed to change for autonomous vehicles?

Ultimately I believe L5 is a fairy tale based on a fairy tale of human drivers.

I like L4 because it allows necessary restrictions to be placed on it.

Things like:

Requirement to always be connected to the mothership (for remote take over if necessary)
Requiring redundancy for path-planning maps, meaning HD maps plus on-the-fly visualization maps. This way the two can be cross-compared, and the car can tell the mothership any time they aren't in agreement (see the sketch after this list)
Allowing regions to fix infrastructure problems, and unify signage before allowing autonomous vehicles
Requiring V2X to allow for traffic improvements as autonomous driving increases
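A minimal sketch of the map cross-check from the redundancy item, with all names and tolerances purely illustrative:

```python
from typing import Dict, List, Tuple

Lane = List[Tuple[float, float]]  # lane centerline as (x, y) points

def lanes_disagree(hd: Lane, live: Lane, tolerance_m: float = 0.5) -> bool:
    """Flag a lane if any pair of corresponding points differs by more than the tolerance."""
    if len(hd) != len(live):
        return True
    return any(abs(ax - bx) > tolerance_m or abs(ay - by) > tolerance_m
               for (ax, ay), (bx, by) in zip(hd, live))

def cross_check(hd_map: Dict[str, Lane], live_map: Dict[str, Lane]) -> List[str]:
    """Lane IDs where the HD map and the on-the-fly map don't agree,
    to be reported back to the mothership."""
    return [lane_id for lane_id, hd_lane in hd_map.items()
            if lane_id not in live_map or lanes_disagree(hd_lane, live_map[lane_id])]
```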

L2 is not sustainable because as the system gets better the drivers will get careless, and accidents will happen.
L3 doesn't offer much room for growth.
L4 is where we want to be, but we simply don't have the collaboration between companies, cities, states, and the federal government to get the ball rolling.

My hope is that 2021 will mark a turning point away from L2, L3, and L5 to focus solely on the growth and acceptance of L4.
 
I assume the faint line means that it has low confidence and it stays that way for many frames (this is 5x speed). Then when the remote assistant confirms it, it changes to a bright line.

No, Vogt explains that the remote operator did "2 clicks" to make a new path for the car. Vogt says that the remote operator created a new path around the car. If he wanted better publicity, he would have said there were two paths and the operator simply selected the one around the car...
 
No, Vogt explains that the remote operator did "2 clicks" to make a new path for the car. Vogt says that the remote operator created a new path around the car. If he wanted better publicity, he would have said there were two paths and the operator simply selected the one around the car...
My point was that the car did choose the correct path. Neither of us knows how confident it was in that path or what the threshold is for it to ask for assistance. I didn't mean to speculate on the user interface for the remote assistant.
Supposedly they’re going to start driving passengers around without safety drivers. If the system works then it works. We will see! Exciting times. When do you expect Tesla to do driverless operation in San Francisco?
 
When do you expect Tesla to do driverless operation in San Francisco?

Driverless? 3-4 years. Who knows, who cares really.

What Tesla has done / is doing is make their cars infinitely more attractive than every other car mfg'er. Like Elon said, why would you buy a horse buggy?

Even if Waymo / whoever is able to get a profitable robotaxi 5 years ahead of Tesla, they'd still lose, because by the time they ramp up their service (Waymo will be able to make ~10 to 20k cars a year max...IMO), Tesla already has 5 million robotaxis.

If we start to see lethal issues / bugs with the FSD beta, then we'd know Tesla won't win, but so far, have we seen that? I don't think so.

It's really beginning to look like a difficult software problem rather than a sensor problem, as Elon said.
 
Driverless? 3-4 years. Who knows, who cares really.

What Tesla has done / is doing is make their cars infinitely more attractive than every other car mfg'er. Like Elon said, why would you buy a horse buggy?

Even if Waymo / whoever is able to get a profitable robotaxi 5 years ahead of Tesla, they'd still lose, because by the time they ramp up their service (they'll be able to make ~10 to 20k cars a year max...IMO), Tesla already potentially has 5 million robotaxis.

If we start to see lethal issues / bugs with the FSD beta, then we'd know Tesla won't win, but so far, have we seen that? I don't think so.
Haha. I think a lot of people care. I want my car to drive me around while I sleep or post on TMC. :p Or drop me off and go park.
I’ve watched a bunch of FSD videos and the reason there are no issues is because there are alert humans taking over when the system screws up. I worry that it will just end up being a “viral” feature like smart summon.
 
The problem with L5 is what is human capability anyways?

Like I can drive in adverse weather conditions better than some other drivers, but not as well as some really good drivers.

So who's the human driver? Some average? If it's some average, then is it the average for the nation? That would make developing an autonomous car for the German market a fair bit more difficult than for the US, as they have better drivers.
Scenario: Semi Trailer carrying hundreds of cattle veers to avoid car & overturns on freeway, blocking the entire freeway. Cattle are running aimlessly everywhere, several dead. Truck has crushed a couple of cars, a couple more wedged underneath. People trapped in cars.
Emergency services arrive with jaws of life, ambulances attend, crane required to get semi upright, etc, etc, etc
To clear the freeway takes 6 hours. It’s a very hot day.

Thousands of cars have banked up & because there is a concrete dividing barrier, the only practical solution is for the thousands of cars to reverse 2 miles back to the previous freeway exit. A human can quite easily do this, despite how time consuming it is
Could a Level 4 car at the back of the queue & closest to the accident do this, even when the Level 4 car is operating well within its geofenced area? How would it recognise the police hand signals/directions to reverse? How would it navigate & coordinate with all the other cars?
Would this scenario not require a Level 5 capable car?
 
Hi. I’m sorry to jump in so late in this conversation. But, has someone mentioned the delta between the types of miles probably driven in these cases? That the Autopilot miles are almost certainly almost entirely on highways, whereas the NHTSA numbers almost certainly include (probably more accident prone, per mile) city miles? And, that Tesla drivers are (by income bracket alone) already probably among the safest drivers around, regardless (as insurance actuarial rates would imply)?

Don't get me wrong, I'm a big AP and FSD believer/owner. But there's a little sleight of hand going on in Tesla's marketing...

I’ve seen this assertion made many times without any obvious proof. Although it definitely is a possibility, Elon has also once said that it’s true “no matter how you slice and dice it” meaning it’s still safer when normalizing for all the obvious variables. Also, if you read the fine print about how Tesla determines what is an accident, it is quite conservative. The reporting rate for Tesla is something approaching 100% while NHTSA can only gather data via accident reports. If a Tesla is rear-ended it is considered an accident. If autopilot disengages 5 seconds before an accident, they also count it as an accident.

I also believe the Autopilot team would refuse to push out anything knowingly harmful. Stuart Bowers, the previous Autopilot head, also gave indications at Autonomy Day that safety actually was higher on Autopilot.
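To be fair, the mile-mix concern is easy to illustrate with purely made-up numbers: with identical per-road-type crash rates, the highway-heavy fleet still shows a lower headline rate, which is why the normalization question matters.

```python
# Purely illustrative numbers; NOT Tesla's or NHTSA's figures.
def blended_rate(miles: dict, rate_hwy: float = 0.5, rate_city: float = 2.0) -> float:
    """Crashes per million miles for a given highway/city mile mix,
    assuming identical per-road-type crash rates for both fleets."""
    crashes = rate_hwy * miles["highway"] / 1e6 + rate_city * miles["city"] / 1e6
    return crashes / (sum(miles.values()) / 1e6)

autopilot_mix = {"highway": 9_000_000, "city": 1_000_000}  # mostly highway miles
overall_mix = {"highway": 4_000_000, "city": 6_000_000}    # general-fleet mix

print(blended_rate(autopilot_mix))  # 0.65 crashes per million miles
print(blended_rate(overall_mix))    # 1.40 crashes per million miles
```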
 
The other issue with L3 is regulatory, and it's likely that regulators will push additional requirements on it. For example, they'll likely force driver monitoring to make sure the driver is responsive and isn't falling asleep.

I agree for the most part. Save for a driverless L4 geofenced service, the monitoring of the driver via camera or nags for alertness will likely be mandatory on any partially-autonomous system in the near future.

As for what they're targeting? If we take just Elon's word for it, it's ultimately L5.
 
Scenario: Semi Trailer carrying hundreds of cattle veers to avoid car & overturns on freeway, blocking the entire freeway. Cattle are running aimlessly everywhere, several dead. Truck has crushed a couple of cars, a couple more wedged underneath. People trapped in cars.
Emergency services arrive with jaws of life, ambulances attend, crane required to get semi upright, etc, etc, etc
To clear the freeway takes 6 hours. It’s a very hot day.

Thousands of cars have banked up & because there is a concrete dividing barrier, the only practical solution is for the thousands of cars to reverse 2 miles back to the previous freeway exit. A human can quite easily do this, despite how time consuming it is
Could a Level 4 car at the back of the queue & closest to the accident do this, even when the Level 4 car is operating well within its geofenced area? How would it recognise the police hand signals/directions to reverse? How would it navigate & coordinate with all the other cars?
Would this scenario not require a Level 5 capable car?

If you look at my list of requirements that I would like for an L4 infrastructure, you'll see that I have what's necessary for those situations.

It's connected to the mothership so remote takeover can happen. The car can basically call home, and ask for assistance.

It has V2X capabilities so the cop can communicate instructions not just to the autonomous car, but to all the other cars that are equipped with V2X. In addition to this, the geofencing allows the cop to have the training necessary on how to communicate with the car, whether it's through digital means or through hand signals.
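Roughly what I have in mind for the V2X piece is a signed directive that any equipped car in range can act on. All of this is hypothetical, just to show the shape of it:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrafficDirective:
    issuer_id: str                 # e.g. a police unit's credential
    action: str                    # "reverse_to_previous_exit", "merge_left", ...
    zone: List[Tuple[float, float]] = field(default_factory=list)  # area it applies to
    valid_for_s: int = 600         # directive expires after this many seconds
    signature: bytes = b""         # proves the issuer is authorized

def apply_directive(directive: TrafficDirective, verify) -> bool:
    """A receiving car acts on the directive only if the signature checks out."""
    if not verify(directive.issuer_id, directive.signature):
        return False
    # ...hand the requested action to the planner here...
    return True
```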

So I would argue this exact type of scenario is why L4 makes more sense to me than L5.

I simply don't see an L5 vehicle being able to handle situations of this nature.

Humans are also relying more and more on maps/digital information.

Like yesterday I needed to navigate from point A to point B. I needed to be somewhere at a specific time, so I had checked Apple Maps the night before and planned my sleep schedule around it with a nice buffer. When I checked again just before leaving, it was suddenly telling me the trip would take way longer, and the distance seemed to be quite a bit more.

I had to leave quickly to make it, and while driving I realized it was taking me a different way than I expected, so I pulled over. While I was pulled over I realized that Apple Maps thought the road was closed due to construction, hence the longer route. So I used Google Maps, and it simply said there was a small delay.

The delay was pretty much non-existent. All they did was close one lane, and traffic was light so I didn't have to wait long.

Why Apple Maps thought it was closed I don't know. But we absolutely have to have much better navigation systems and notices.

Lots of these types of details are really outside the control of vehicles/drivers.

The infrastructure we rely on (navigation, road construction crews, etc.) needs to be a lot better.

L5 is just a fantasy designed to convince people we don't have serious work to do in getting this all to work.
 