
Firmware 9 in August will start rolling out full self-driving features!!!

Actually, I think Waymo is close to that.

No, they aren't. Waymo is close to Level 4 autonomy, meaning that within some defined operating envelope a driver is not required.

The whole 5-level scheme isn't referenced much anymore by people actually doing the work, but when it was defined, Level 5 meant the car could operate without a driver under any conditions that a human driver could have handled.

So L5 means not only can the car drive itself without a safety driver in Manhattan, it can drive in Manhattan on Sep 11, 2001. Nobody is anywhere near L5. L5 is so far beyond L4, and has so little commercial utility, that nobody is really even working on what it takes to move from L4 to L5. They're thinking about expanding the L4 operating envelope into commercially valuable domains (e.g., Manhattan not on 9/11/2001, which is already beyond what existing systems can do).

Waymo is testing a prototype in places like Phoenix, where streets are wide, well-lit, well-marked, traffic is generally sane (compared to, say, Manhattan) and it hardly ever rains. This is an L4 system with a very limited operating envelope.
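
To make the "limited operating envelope" point concrete, here's a minimal sketch of the kind of gate an L4 system keeps on its operational design domain (ODD). Everything here (the condition names, the `odd_permits` function) is invented for illustration; no real stack is this simple:

```python
from dataclasses import dataclass

@dataclass
class Conditions:
    """Hypothetical snapshot of what the vehicle knows about its situation."""
    in_geofence: bool    # inside the mapped service area (e.g., suburban Phoenix)
    raining: bool
    roads_marked: bool

def odd_permits(c: Conditions) -> bool:
    """An L4 system drives itself only while every ODD condition holds.

    Outside this envelope it must stop or hand back control. An L5
    system, by definition, would have no such gate at all.
    """
    return c.in_geofence and not c.raining and c.roads_marked

# Clear day inside the mapped suburb: autonomous operation allowed.
print(odd_permits(Conditions(in_geofence=True, raining=False, roads_marked=True)))  # True

# Same spot in the rain: outside the envelope, no autonomy.
print(odd_permits(Conditions(in_geofence=True, raining=True, roads_marked=True)))   # False
```

Expanding from L4 toward L5 is then a matter of deleting conditions from that gate, and every deletion is its own hard research problem.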

Back to Tesla, we should not let them off the hook. I've seen a lot of people in this thread talking about how "maybe recognizing stop signs is an FSD feature." EAP includes Smart Summon and on-ramp to off-ramp, so it basically implies L3 highway autonomy. If FSD means anything, it means driving without a human sitting in the driver's seat on local roads. Elon promised that you could take out your phone, summon your car, and it would come to you, wherever it was and wherever you are, including driving across the country to get to you, without a human in the driver's seat.

Now, I'll be impressed as hell if they can do local roads in any meaningful way with a human in the seat who doesn't have to pay attention and be ready to take over at any moment (that would be L3 on local roads, btw), or even deliver on Smart Summon, or the car parking itself in a nice open, spacious, well-marked parking lot without you being within 10 feet of it, watching it like a hawk with your finger on the button. That is not even half of what was promised with FSD, but does anybody really think it's going to happen with the current hardware?

Another way of putting this is that, for FSD to mean anything, this whole BS about it being a driver-assist system where the driver is always responsible for the actions of the car needs to go away.

Which is to say, it's not going to happen with the current sensor suite or computing platform.
 
Actually, I think Waymo is close to that.
Unfortunately, I disagree.

No one has a system that is close to Level 5 autonomy.

There is a lot more than technology that has to exist to produce Level 5 autonomy. As with any standard programming job, programming what to do when everything goes right is maybe 5% of the work. Level 5 programming for perfect conditions is EXTREMELY simple.

Driving straight down a street, and even making turns, is tremendously easy IF no other people, cars, animals, potholes, or whatever turn up.

Take, for instance, the question of morality. Let's say a Level 5 autonomous car is traveling down the street at 45 to 50 mph. A young girl chases her ball into the street. There is oncoming traffic on this two-lane street, and it is impossible to stop in time; the car must either hit the girl, hit the oncoming traffic, or sacrifice itself and its passengers by running off the road.

What is the moral decision that should be made by the programmers? Change the scenario to a dog running out into the street: what is the moral decision then?

Waymo is nowhere near making and implementing Level 5 morality decisions, nor the other Level 5 decisions needed when things don't go right.

With that in mind, there is no insurance company willing to insure Level 5 driving either.
 
Take, for instance, the question of morality. Let's say a Level 5 autonomous car is traveling down the street at 45 to 50 mph. A young girl chases her ball into the street. There is oncoming traffic on this two-lane street, and it is impossible to stop in time; the car must either hit the girl, hit the oncoming traffic, or sacrifice itself and its passengers by running off the road.

This is not the hard part about L5. An L4 system has the same problem.

What is the moral decision that should be made by the programmers? Change the scenario to a dog running out into the street: what is the moral decision then?

What is the decision made by humans in these situations? Humans are incapable of thoughtfully making difficult moral decisions in a tenth of a second and then executing on that decision perfectly. Computers are also incapable of doing that, for very different reasons (given today's computers). Computers and humans will do more or less the same thing in that situation: try not to hit anything, and fail in that attempt in some way that is not carefully considered or planned out. Most people will do something very wrong in this situation; even if they make the right moral decision, they will probably fail to execute it correctly, because it's hard.
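
To make that concrete: as far as anything published shows, the last-resort behavior in these systems is blunt physics, not a moral calculus. A hypothetical sketch (all names mine) of what such a policy amounts to:

```python
def emergency_maneuver(obstacle_ahead: bool,
                       left_gap_clear: bool,
                       right_gap_clear: bool) -> str:
    """Hypothetical last-resort policy: no moral ranking of targets,
    just shed speed and swerve only into verifiably empty space."""
    if not obstacle_ahead:
        return "continue"
    if left_gap_clear:
        return "brake_and_steer_left"
    if right_gap_clear:
        return "brake_and_steer_right"
    # No verified escape path: maximum braking in lane, whatever is ahead.
    return "full_brake_in_lane"

# The girl-vs-oncoming-traffic scenario above: no clear gap on either side.
print(emergency_maneuver(True, False, False))  # full_brake_in_lane
```

Note there's no branch that weighs a girl against a dog; braking hard is both the practical answer and, most of the time, the least-bad one.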

There is debate about that, in my opinion. If you don't have a steering wheel, then there is full autonomy, whether or not there is a person in the car.

Level 4 is useless and makes no sense. How can there be any interaction by a human when there are no pedals and no steering wheel?

Path to Autonomy: Self-Driving Car Levels 0 to 5 Explained | Feature | Car and Driver

What exactly is useless about L4? L4 means that under defined conditions the car can operate without driver involvement. (This has nothing to do with whether there's a steering wheel and pedals or not.) L4 autonomy enables autonomous taxi/ride sharing services. L4 allows you to read your email during your morning commute or watch a movie during your road trip. What exactly is useless about any of that?
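
And when an L4 car does hit the edge of its defined conditions, the definition doesn't depend on a human stepping in: if nobody takes over, the system itself has to get the vehicle to a safe stop (a "minimal risk condition" in SAE terms). A sketch of that hand-off, with hypothetical names:

```python
def on_odd_exit(driver_takes_over: bool) -> str:
    """Sketch of L4 behavior at the envelope boundary: the car may offer
    control back, but a human response is optional, not required."""
    if driver_takes_over:
        return "hand_over_control"
    return "pull_over_and_stop"  # the system resolves it alone

print(on_odd_exit(driver_takes_over=False))  # pull_over_and_stop
```

That's why the presence or absence of a steering wheel is orthogonal to the level.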
 
That issue is better in latest Autopilot software rolling out now & fully fixed in August update as part of our long-awaited Tesla Version 9. To date, Autopilot resources have rightly focused entirely on safety. With V9, we will begin to enable full self-driving features.

I don't get what Elon is saying in this statement regarding AP. It seems to me as if it is missing something, like this:

To date, Autopilot resources have rightly focused entirely on safety. With V9, we will begin to enable full self-driving features, without a focus entirely on safety.

Otherwise, why even mention that the focus has been safety unless it has some relationship to FSD? I think what he is really trying to tell us is that the focus on safety in AP, especially in light of the deaths, has delayed FSD. But he never likes to admit a delay for any reason, so he tries to turn it into a positive with the safety part, but instead it comes across as quite an odd statement to me.
 
This is not the hard part about L5. An L4 system has the same problem.
What? Just because both Level 4 and Level 5 have the same issue (and I don't consider Level 4 legitimate anyway) doesn't remove the point: it's very difficult to account for everything when FSD has to deal with the real world and its surprises.


What is the decision made by humans in these situations? Humans are incapable of thoughtfully making difficult moral decisions in a tenth of a second and then executing on that decision perfectly. Computers are also incapable of doing that, for very different reasons (given today's computers). Computers and humans will do more or less the same thing in that situation: try not to hit anything, and fail in that attempt in some way that is not carefully considered or planned out. Most people will do something very wrong in this situation; even if they make the right moral decision, they will probably fail to execute it correctly, because it's hard.

Humans do surprisingly well at quick, non-thinking decision making. When people trip over something, the body (without thinking) does a remarkable job of preserving itself. The same holds true for driving and just about ALL surprise (non-thinking) situations. Place a little girl or boy in front of a speeding car on one side and a dog on the other side: driving simulators have shown that drivers steer away from other humans in 99.9% of all situations (it doesn't matter what your background or race or anything else is). 79% of the time the animals were steered away from. 80% of the time drivers steer off the road into ditches (without thinking).

100% of the time, human drivers slammed on the brakes when the road disappeared, whether it was driving over a bridge with half of it missing or a road ahead under 5 feet of water. There are hundreds of scenarios like this in which FSD will fail.
Automation failed 89% of the time


What exactly is useless about L4? L4 means that under defined conditions the car can operate without driver involvement. (This has nothing to do with whether there's a steering wheel and pedals or not.) L4 autonomy enables autonomous taxi/ride sharing services. L4 allows you to read your email during your morning commute or watch a movie during your road trip. What exactly is useless about any of that?


What's useless about Level 4 is that there is very little a person can do to avert a rare situation (the kind Level 4's definition allows for) if one occurs. It makes no difference whether a human is in the loop or not, because there is nothing a human can do, which is the exact same situation as Level 5.
I've always thought this, and last week it was stated by Elon and the tech chief of Volvo: both are going to skip Level 4 on the way to Level 5.
 
What's useless about Level 4 is that there is very little a person can do to avert a rare situation (the kind Level 4's definition allows for) if one occurs. It makes no difference whether a human is in the loop or not, because there is nothing a human can do, which is the exact same situation as Level 5.
I've always thought this, and last week it was stated by Elon and the tech chief of Volvo: both are going to skip Level 4 on the way to Level 5.
Level 4 might be useful for a completely unmanned car, though, as a bridge before Level 5; think unmanned freight or couriers. On the other hand, get Level 5 right and there's no need for Level 4...
 
I don't get what Elon is saying in this statement regarding AP. It seems to me as if it is missing something, like this:

To date, Autopilot resources have rightly focused entirely on safety. With V9, we will begin to enable full self-driving features, without a focus entirely on safety.

Otherwise, why even mention that the focus has been safety unless it has some relationship to FSD? I think what he is really trying to tell us is that the focus on safety in AP, especially in light of the deaths, has delayed FSD. But he never likes to admit a delay for any reason, so he tries to turn it into a positive with the safety part, but instead it comes across as quite an odd statement to me.


It's much simpler than what you stated.

Tesla can't just roll out FSD even if they wanted to.

You can't insure an FSD car yet. There is no insurance company that will insure a driverless car; the rules of liability insurance haven't been defined. The manufacturer of FSD isn't liable, just like they aren't liable for EAP. The driver will ALWAYS be responsible for what his/her car does. It doesn't matter where the technology is.

FSD is not approved by the US government yet. It doesn't matter where the technology is.

That does not mean that companies such as Tesla should stop innovating.
 
Level 4 might be useful for a completely unmanned car, though, as a bridge before Level 5; think unmanned freight or couriers. On the other hand, get Level 5 right and there's no need for Level 4...
Now you are changing the definition of Level 4. Level 4 is manned. Period. Any time you speak of unmanned, you leave the Level 4 discussion and enter Level 5. The levels are already defined.
 
It's much simpler than what you stated.

Tesla can't just roll out FSD even if they wanted to.

You can't insure an FSD car yet. There is no insurance company that will insure a driverless car; the rules of liability insurance haven't been defined. The manufacturer of FSD isn't liable, just like they aren't liable for EAP. The driver will ALWAYS be responsible for what his/her car does. It doesn't matter where the technology is.

FSD is not approved by the US government yet. It doesn't matter where the technology is.

That does not mean that companies such as Tesla should stop innovating.
How do Waymo or Cruise operate if you can't insure the vehicles? I cannot imagine they are running these trials without insurance. Or are you saying it would be cost-prohibitive?
 
Unfortunately, I disagree.

No one has a system that is close to Level 5 autonomy.

There is a lot more than technology that has to exist to produce Level 5 autonomy. As with any standard programming job, programming what to do when everything goes right is maybe 5% of the work. Level 5 programming for perfect conditions is EXTREMELY simple.

Driving straight down a street, and even making turns, is tremendously easy IF no other people, cars, animals, potholes, or whatever turn up.

Take, for instance, the question of morality. Let's say a Level 5 autonomous car is traveling down the street at 45 to 50 mph. A young girl chases her ball into the street. There is oncoming traffic on this two-lane street, and it is impossible to stop in time; the car must either hit the girl, hit the oncoming traffic, or sacrifice itself and its passengers by running off the road.

What is the moral decision that should be made by the programmers? Change the scenario to a dog running out into the street: what is the moral decision then?

Waymo is nowhere near making and implementing Level 5 morality decisions, nor the other Level 5 decisions needed when things don't go right.

With that in mind, there is no insurance company willing to insure Level 5 driving either.

You really have no idea what you are talking about.
 
How do Waymo or Cruise operate if you can't insure the vehicles? I cannot imagine they are running these trials without insurance. Or are you saying it would be cost-prohibitive?
I'm saying that Level 5 FSD can't be insured.

It's unlawful for a company or a person to operate an uninsured vehicle. Of course, you can put anything you want out on the road if you can self-insure yourself or your company; all that means is that you can pay for any liability you incur.

Google paid through the nose (settled out of court) for the accident their driverless car caused. They can afford it, so it's not an issue, and it didn't make the real news.

https://www.usatoday.com/story/tech...olved-crash-arizona-driver-injured/582446002/

Can a regular person do that? NOT me.
 
You really have no idea what you are talking about.

I absolutely know exactly what I'm talking about. Prove me wrong. Or do you just want to argue?

Here are the Levels for FSD <----- just as I stated. (There's a quick code boil-down after the list.)

Level 0: No Automation
System capability: None.
Driver involvement: The human at the wheel steers, brakes, accelerates, and negotiates traffic.
Examples: A 1967 Porsche 911, a 2018 Kia Rio.

Level 1: Driver Assistance
System capability: Under certain conditions, the car controls either the steering or the vehicle speed, but not both simultaneously.
Driver involvement: The driver performs all other aspects of driving and has full responsibility for monitoring the road and taking over if the assistance system fails to act appropriately.
Example: Adaptive cruise control.

Level 2: Partial Automation
System capability: The car can steer, accelerate, and brake in certain circumstances.
Driver involvement: Tactical maneuvers such as responding to traffic signals or changing lanes largely fall to the driver, as does scanning for hazards. The driver may have to keep a hand on the wheel as a proxy for paying attention.
Examples: Audi Traffic Jam Assist, Cadillac Super Cruise, Mercedes-Benz Driver Assistance Systems, Tesla Autopilot, Volvo Pilot Assist.

Level 3: Conditional Automation
System capability: In the right conditions, the car can manage most aspects of driving, including monitoring the environment. The system prompts the driver to intervene when it encounters a scenario it can't navigate.
Driver involvement: The driver must be available to take over at any time.
Example: Audi Traffic Jam Pilot.

Level 4: High Automation
System capability: The car can operate without human input or oversight, but only under select conditions defined by factors such as road type or geographic area.
Driver involvement: In a shared car restricted to a defined area, there may not be any. But in a privately owned Level 4 car, the driver might manage all driving duties on surface streets, then become a passenger as the car enters a highway.
Example: Google's now-defunct Firefly pod-car prototype, which had neither pedals nor a steering wheel and was restricted to a top speed of 25 mph.

Level 5: Full Automation
System capability: The driverless car can operate on any road and in any conditions a human driver could negotiate.
Driver involvement: Entering a destination.
Example: None yet, but Waymo (formerly Google's driverless-car project) is now using a fleet of 600 Chrysler Pacifica hybrids to develop its Level 5 tech for production.
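
Boiling that table down into code form (my own paraphrase of the summary above, nothing official):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Paraphrase of the table above; not official SAE text."""
    NO_AUTOMATION = 0       # human does everything
    DRIVER_ASSISTANCE = 1   # steering OR speed, never both
    PARTIAL_AUTOMATION = 2  # steering AND speed; human still monitors everything
    CONDITIONAL = 3         # car monitors too; human must take over on request
    HIGH_AUTOMATION = 4     # no human needed, but only inside a defined envelope
    FULL_AUTOMATION = 5     # no human needed anywhere a human could drive

def driver_required(level: SAELevel, inside_envelope: bool = True) -> bool:
    """Per the table, Level 4 needs no driver inside its envelope; the
    Firefly example had neither pedals nor a steering wheel."""
    if level <= SAELevel.CONDITIONAL:
        return True
    if level == SAELevel.HIGH_AUTOMATION:
        return not inside_envelope
    return False  # FULL_AUTOMATION

print(driver_required(SAELevel.HIGH_AUTOMATION, inside_envelope=True))   # False
print(driver_required(SAELevel.HIGH_AUTOMATION, inside_envelope=False))  # True
```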
 
Trov insured Waymo
Not for FSD.

Waymo has not put FSD out there yet. People are still in those cars. Waymo FSD is still in development.

Waymo still has people in their cars - ON PURPOSE.

What Happens When a Self-driving Car Is at Fault?

Read the last paragraph!!! They are settling these issues out of court (self-insuring). I would too.

The U.S. legal system has yet to be truly tested by a self-driving car crash. Every incident involving an autonomous vehicle in the U.S. to date—and there have been few—has been settled out of court. Just one proper lawsuit has been filed so far, stemming from a collision involving a self-driving car powered by GM-owned Cruise Automation and a motorcyclist in San Francisco. But even that case appears destined for a settlement.
 