Profound progress towards FSD

Computer vision needs to be at least as good at seeing things as human central vision. Just having some advantages over human vision, like 360-degree coverage or never getting tired or distracted, is not enough. What good is it if your robotaxi can see 360 degrees and never gets tired, if it hits a parked car on the side of the road or a white truck crossing in front because the camera vision did not correctly recognize the objects? So yes, camera vision needs to be at least as good as human vision at detecting things.

No, I am not expecting perfection. But I am expecting autonomous cars to be much safer than human drivers.

I think you vastly overestimate how safe human drivers are. People crash all the time when none of the corner cases you worry about exist. Self-driving tech is already better than humans, but it is being compared to perfection and is of course light-years away from that. When you look at the size of the gap to get it perfect, it seems insane to consider using it. When you look at how low the human bar is and the value it currently provides, you'd be a fool not to embrace it. Hence, the dilemma.
 

I am just going by the stats.

There are 6M accidents per year.
50+ Car Accident Statistics in the U.S. & Worldwide.

US drivers did a total of 3.22 trillion miles in 2015.
Record Number Of Miles Driven In U.S. Last Year

If I did the math right, that is about 1 accident per 536,666 miles.

The death rate is about 1 death per 100M miles.
Fatality Facts 2018: State by state.

Now, I realize that I am not comparing the same years, but it at least gives us a ballpark number.
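For anyone who wants to sanity-check the arithmetic, here is a quick back-of-the-envelope script. The 6M accidents, 3.22 trillion miles and 1-death-per-100M-miles figures are the ones from the links above; nothing else is assumed.

```python
# Back-of-the-envelope check of the numbers quoted above.
accidents_per_year = 6_000_000       # ~6M accidents per year in the US
vehicle_miles_per_year = 3.22e12     # ~3.22 trillion miles driven (2015)

miles_per_accident = vehicle_miles_per_year / accidents_per_year
miles_per_death = 100_000_000        # ~1 death per 100M miles (quoted directly)

print(f"~1 accident per {miles_per_accident:,.0f} miles")  # -> ~1 accident per 536,667 miles
print(f"~1 death per {miles_per_death:,} miles")           # -> ~1 death per 100,000,000 miles
```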

I think you are setting the bar too low.

I am not saying that autonomous cars should never crash. I am simply asking for autonomous driving to be several times safer than humans. Is that really too much to ask? That is not asking for perfection.

Also, the Tesla safety report is misleading because AP is used mostly on highways where accidents happen less. And AP is not full self-driving: the human driver is intervening and preventing AP from crashing most of the time. So the safety report does not reflect how safe AP would be without a human driver. We cannot look at the report and assume that Tesla's self-driving is already safer than humans.
 

Chatting with you has actually confirmed my confidence in Tesla FSD. You have defended your position well. Clearly it is a little subjective right now - we don't have long to wait to find out if the bulls are right.
 
Also, the Tesla safety report is misleading because AP is used mostly on highways where accidents happen less.

Can't remember if Tesla include TACC in their report or not. TACC probably gets more use in urban environments where speed limits are more rigorously policed.

The one thing that Tesla should do is to have a stat for accidents that happen shortly after AP has disengaged. Say 5 seconds. This would give us a clearer idea of AP safety and the efficacy of the modal switch for the driver. For example: AP drives the car towards a tree at 50mph, and the driver takes over too late to stop the car from hitting the tree. Another example: AP warning "take over immediately" part way through a sharp turn on a cresting hill. 1 second later, as the driver starts to take back control, the car hits oncoming traffic.

At the moment these stats are probably buried in the "without AP but with active safety" number.
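As a rough sketch of what such a stat could look like: you would scan the event log for crashes that fall within a short window after a disengagement. The log format, timestamps and the 5-second window below are purely illustrative assumptions; Tesla's actual telemetry is not public.

```python
# Hypothetical sketch: attribute a crash to Autopilot if it happens within a few
# seconds of an AP disengagement. The log format, timestamps and the 5-second
# window are illustrative assumptions; Tesla's real telemetry is not public.
from datetime import datetime, timedelta

HANDOVER_WINDOW = timedelta(seconds=5)

disengagements = [datetime(2020, 7, 10, 14, 3, 21)]
crashes = [
    datetime(2020, 7, 10, 14, 3, 24),  # 3 s after a handover
    datetime(2020, 7, 12, 9, 15, 0),   # unrelated crash, no recent handover
]

def crashes_near_handover(crash_times, handover_times, window=HANDOVER_WINDOW):
    """Crashes that occur within `window` after any AP disengagement."""
    return [c for c in crash_times
            if any(timedelta(0) <= c - h <= window for h in handover_times)]

print(len(crashes_near_handover(crashes, disengagements)))  # -> 1
```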
 
shortly after AP has disengaged.

AP warning "take over immediately" part way through a sharp turn on a cresting hill. 1 second later, as the driver starts to take back control, the car hits oncoming traffic.

I am very aware of these 'moments' where the car has gone piling into a situation based on a bunch of 'false bravado', then hands over to me with a 'get out of that then!'

Especially the dropping from AP to TACC when I have had to apply corrective steering. If the car lost the plot enough to need my intervention, what makes it safe to keep TACC engaged and for the car to maintain a high speed? I guess on a dual carriageway you don't need any more sudden unexpected braking than there already is, but at lower speeds on smaller (non-city) roads it is likely that keeping TACC engaged would be less safe.
 

My favourite is committing to a sharpish turn with TACC active, when the car arbitrarily decides that you are going too fast for the turn and brakes for you.

This happened to me yesterday in a 20mph(!) limit. There are very few turns that 20mph is too fast for, and this wasn't one of them.

The car following me certainly had a wake up call.

(TACC in a 20 mph zone because ... well, staying at 20mph, mile after mile, on mostly-empty roads that were designed for 50mph is harder than it sounds).
 
To sum up this thread:

1) "turning at intersections and autosteer on narrow streets" is in alpha development.
2) Elon estimates the rewrite that does 4D is "2-4 months" away.
3) Elon also said that the latest alpha software can almost do his commute, which includes construction zones, without any disengagements.
4) Elon thinks that Tesla is "very close" to L5.

Some see it as proof that Tesla is making mind-blowing progress on FSD. Others are skeptical about the timeline of when we will get these features and think Tesla is still very far from actual L5.

Sorry if this was answered later in the thread and I missed it, but did Elon say they are close to "L5", or that they are close to "FSD"? I don't think it's correct to assume that "feature complete FSD" = L5.

Also, I'm not sure if I agree that a disengagement rate of a few hundred miles between disengagements is a bad thing. Tesla is selling cars direct to consumers, and I would wager that they could be very successful with a L3 system with a high disengagement rate.
 
An L3 vehicle has to notify the user when to take over. A system-initiated disengagement rate of once every few hundred miles might be OK, but if the system knows ten seconds in advance that it needs to disengage, shouldn't it be able to handle the situation itself? This is why a Level 3 system isn't that much easier than Level 4.
 
Sorry if this was answered later in the thread and I missed it, but did Elon say they are close to "L5", or that they are close to "FSD"? I don't think it's correct to assume that "feature complete FSD" = L5.

Yes, he said that they are close to L5. Here are two relevant quotes:

"I’m extremely confident that level 5 or essentially complete autonomy will happen and I think will happen very quickly,” Musk said in remarks made via a video message at the opening of Shanghai’s annual World Artificial Intelligence Conference (WAIC).

“I remain confident that we will have the basic functionality for level 5 autonomy complete this year.”
"
Tesla 'very close' to level 5 autonomous driving technology, Musk says

Also, I'm not sure if I agree that a disengagement rate of a few hundred miles between disengagements is a bad thing. Tesla is selling cars direct to consumers, and I would wager that they could be very successful with a L3 system with a high disengagement rate.

The reason I think it is a bad thing is that it implies your self-driving is not very good. After all, the basic idea of self-driving is that the car is able to drive without any human input. Every disengagement therefore potentially marks a point where your car failed at self-driving. You obviously want your self-driving car to need human input as infrequently as possible.

To give you a point of comparison, Waymo and Cruise have a disengagement rate of 1 per 10,000 miles. So 1 per a few hundred miles would be much less reliable self-driving.

Now it is worth noting that not all disengagements are caused by the same thing. You can have disengagements that are not safety-related, for example where the safety driver did not like what the autonomous car was doing but there was no safety issue. And you can have safety-related disengagements where, if the safety driver had not intervened, there was a high probability of an accident. The Waymo/Cruise rate includes all disengagements, both safety-related and not.

It is also worth comparing the safety disengagement rate to the rate of accidents by human drivers. On average, human drivers in the US have about one accident every ~536,666 miles. So if we expect our self-driving to be as safe as human drivers, we would expect an autonomous car to only have a safety disengagement approximately every 536,666 miles as well.

Again, that gives us an idea that if an autonomous car is having a safety-related disengagement every few hundred miles, then it is well over a thousand times less safe than a human driver. Therefore, it is a very bad disengagement rate.
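For a rough sense of the gap, here is the same comparison in numbers. The "few hundred miles" figure is an assumed placeholder (I used 300 miles); the other figures are the ones quoted above.

```python
# Rough comparison of the miles-per-event figures quoted above.
human_miles_per_accident = 536_666       # from the national stats earlier in the thread
waymo_cruise_miles_per_diseng = 10_000   # ballpark quoted above
assumed_miles_per_diseng = 300           # "a few hundred miles" -- an assumed placeholder

print(human_miles_per_accident / assumed_miles_per_diseng)       # ~1789x worse than the human accident rate
print(human_miles_per_accident / waymo_cruise_miles_per_diseng)  # ~54x worse
```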

Lastly, SAE L3 means that the car is fully self-driving in some conditions but must notify the driver in advance when the driver needs to take over. L3 therefore assumes that disengagements are predictable, since the car has to know in advance that they are going to be needed. It cannot have surprise disengagements where the safety driver has to jump in at the last second to intervene. So if your autonomous car is having surprise disengagements every few hundred miles, it is not Level 3.
 

Yeah, those are good points. One of the key metrics to me is just how much of a "surprise" those disengagements end up being. The car not having the right field of view to make an unprotected left at a stop sign and making the driver take over is no big deal, but obviously misjudging the appropriate speed on a curve on a two-lane country road and making the driver take over with no notice is a huge deal.

I know you are much closer to the technology and industry than me, but personally I think L5 is really important for commercial applications, and not so important for consumer applications. Waymo's autonomous tech is impressive, but we won't see it in a consumer passenger vehicle any time soon. I think it would be good enough for the vast majority of customers if we had a really solid L2 (think SuperCruise but for a wider variety of roads) or L3 that was smart enough to give 10-20 seconds warning before it encountered a situation it couldn't handle.

Like you pointed out, the devil is in the details and the manner of disengagement matters as much or more than the rate. Safety related disengagements would be fine for L3 as long as there is enough notice to the driver.
 

Also, I meant to add that I think the comparison of safety-related disengagement rates to driver accident rates is only fair for an L5 system. I think that's admirable as a long-term goal, but we wouldn't need anything near that for L3 or possibly L4 systems, unless you are assuming that every disengagement would cause an accident!
 
A L3 vehicle has to notify the user when to take over. A system initiated disengagement rate of once every few hundred miles might be ok but if the system knows that it needs to disengage ten seconds in advance shouldn't it be able to handle the situation itself? This is why a Level 3 system isn't that much easier than Level 4.

I guess that depends on the situation. People have mentioned glare from the sun on the cameras as one instance that the Tesla sensor suite might have a hard time handling. I think it would be valid for the car to say "take over now, sensor inputs have too low resolution". It could also be situational, where the car knows it has a higher probability of screwing up certain situations like unprotected left turns on roads with high curvature, or successfully navigating toll booths. To me there would be no shame in having the driver take control in these situations with some reasonable amount of notice.
 
I don't think that anyone would disagree that generalised L3 is essentially impossible. It is difficult enough for humans to accurately predict what will happen within the next 10 seconds in a chaotic environment. Bounded L3 (i.e. highway driving only) is probably the limit of the technology.

L4 and L5 are different in that there is no expectation of a handover to a human driver. The car can just stop (ideally pulling over to the side of the road first) when something happens that it cannot handle.
 
I know you are much closer to the technology and industry than me, but personally I think L5 is really important for commercial applications, and not so important for consumer applications. Waymo's autonomous tech is impressive, but we won't see it in a consumer passenger vehicle any time soon. I think it would be good enough for the vast majority of customers if we had a really solid L2 (think SuperCruise but for a wider variety of roads) or L3 that was smart enough to give 10-20 seconds warning before it encountered a situation it couldn't handle.

Actually, you only need L4 for most commercial applications. For example, you can have L4 trucks deliver your products from a warehouse to a distribution center. The truck can be geofenced since it only needs to drive that route between the warehouse and the distribution center. So L4 is fine for that. You don't need L5. Honestly, we probably don't really need L5 at all at this point. The only reason you would really need L5 is if you wanted consumers to have the same freedom to drive anywhere, anytime but have the car do the entire trip autonomously. And of course, if we can get to safe L5 then there would be a strong incentive to do that for safety reasons. Safe L5 would save a lot of lives by preventing a lot of the needless car accidents we see on the road today. But trucking can be L4. City ride-hailing can be L4. L4 will work for most commercial and consumer applications.

So far we are seeing consumer applications at L2 and L3 and commercial applications at L4. We see systems like SuperCruise that can do highway L2 hands-free. We also see L3 that only works in stop and go traffic on the highway. So we are seeing partial automation designed to add more convenience and safety to the driver. We see autonomous trucks and ride-hailing robotaxis at L4. You save a lot of money if you don't have to pay a driver. So there is a business incentive to get to L4 since it does not require a driver at all.

Also, I meant to add that I think the comparison of safety-related disengagement rates to driver accident rates is only fair for an L5 system. I think that's admirable as a long-term goal, but we wouldn't need anything near that for L3 or possibly L4 systems, unless you are assuming that every disengagement would cause an accident!

Yes, I would agree that 1 safety disengagement per 536,666 miles only makes sense for L5. You want to compare apples to apples. It does not make sense to compare a disengagement rate from a car driving in limited conditions to an accident rate taken from human drivers driving everywhere and at all times of day and night.

However, you still want your autonomous car to be safer than humans. The big difference is that the types and frequency of safety issues will vary based on location, time of day, etc. For example, it is obviously easier to drive safely on an empty highway than in a busy, complex urban setting. So the safety challenges will be different between L4 and L5, and between different L4 ODDs.

No, not every disengagement would cause an accident. We need to remember that disengagements happen at the discretion of the safety driver. So a safety driver could disengage for a variety of reasons. Not all disengagements are caused by a safety problem. That is why I made a distinction between safety disengagements and non safety disengagements.

I highly recommend you read this blog by Cruise founder and CTO, Kyle Vogt. I think you will find it enlightening. He discusses disengagements in more detail:
The Disengagement Myth

He specifies 4 types of disengagements:
1) Naturally occurring situations requiring urgent attention
2) Driver caution, judgement, or preference
3) Courtesy to other road users
4) True AV limitations or errors

Bottom line is that you need to distinguish between disengagements that were necessary and need to be fixed (i.e. a safety issue or a problem for other road users) and disengagements that were optional and don't need to be fixed (i.e. convenience or preference). When testing in the real world, you obviously want the safety driver to be cautious and disengage to avoid potential accidents, but you also need to test whether the car actually could have handled that situation correctly on its own. Vogt talks about the need to use simulations to safely test what the car would have done if the safety driver had not disengaged. That's an effective way to test whether the disengagement was really necessary or not. For example, maybe an overly cautious safety driver disengaged because he was concerned about a potential accident, but in reality the autonomous car would have avoided the accident just fine on its own. That's a disengagement that we can dismiss since it was not really necessary.
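As a toy illustration of how those buckets might be tallied when reviewing a disengagement log: the four category names follow Vogt's post, but the data structure, sample log and counterfactual-simulation flag are entirely made up.

```python
# Illustrative tally of disengagements using the four buckets from Vogt's post.
# The category names come from the blog; the log format and the counterfactual
# simulation flag are made-up assumptions for this sketch.
from collections import Counter
from dataclasses import dataclass

CATEGORIES = (
    "urgent_situation",    # 1) naturally occurring situations requiring urgent attention
    "driver_preference",   # 2) driver caution, judgement, or preference
    "courtesy",            # 3) courtesy to other road users
    "av_limitation",       # 4) true AV limitations or errors
)

@dataclass
class Disengagement:
    category: str
    sim_shows_crash: bool  # counterfactual sim: would the AV have crashed on its own?

log = [
    Disengagement("driver_preference", sim_shows_crash=False),
    Disengagement("av_limitation", sim_shows_crash=True),
    Disengagement("courtesy", sim_shows_crash=False),
]
assert all(d.category in CATEGORIES for d in log)

# Only disengagements that simulation shows were actually necessary should count
# against the safety-relevant disengagement rate.
necessary = [d for d in log if d.sim_shows_crash]
print(Counter(d.category for d in log))
print(f"{len(necessary)} of {len(log)} disengagements were safety-necessary")
```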

Ultimately, I think every company needs to examine their internal safety data and make a determination whether it is good enough for deployment.
 
Actually, you only need L4 for most commercial applications. For example, you can have L4 trucks deliver your products from a warehouse to a distribution center. The truck can be geofenced since it only needs to drive that route between the warehouse and the distribution center. So L4 is fine for that. You don't need L5. Honestly, we probably don't really need L5 at all at this point. The only reason you would really need L5 is if you wanted consumers to have the same freedom to drive anywhere, anytime but have the car do the entire trip autonomously. And of course, if we can get to safe L5 then there would be a strong incentive to do that for safety reasons. Safe L5 would save a lot of lives by preventing a lot of the needless car accidents we see on the road today. But trucking can be L4. City ride-hailing can be L4. L4 will work for most commercial and consumer applications.

And just hope that no roads get closed or diverted on your L4 route ever? Or maybe have a guy with a little red flag walking the route where it changes outside of the L4 boundary?
 
I guess that depends on the situation. People have mentioned glare from the sun on the cameras as one instance that the Tesla sensor suite might have a hard time handling. I think it would be valid for the car to say "take over now, sensor inputs have too low resolution". It could also be situational, where the car knows it has a higher probability of screwing up certain situations like unprotected left turns on roads with high curvature, or successfully navigating toll booths. To me there would be no shame in having the driver take control in these situations with some reasonable amount of notice.

Keep in mind that L3 means the driver does not need to pay attention to the road at all while the L3 system is engaged. So the driver can read a book or watch a movie while the L3 car is driving. That is why L3 must give the driver advance notice to take over: the driver needs time to go from not paying attention to the road at all back to being fully engaged again.

So yes, L3 disengagements can be situational or geographical, like exiting a freeway or approaching a toll booth, since those can be predicted well in advance. But I don't think L3 can handle a case like sun glare blinding the cameras, because that is too sudden. Could a driver who is reading a book while the L3 system is driving successfully take over in a case like that? Probably not.
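A toy way to express that constraint: an L3 system can only offer a handover for conditions it can see coming far enough ahead. The 10-second takeover budget and the function below are my own illustrative assumptions, not anything from the SAE standard.

```python
# Toy illustration of why L3 handovers must be predictable well in advance.
# The 10-second driver-takeover budget is an assumed figure, not from the SAE standard.
TAKEOVER_BUDGET_S = 10.0

def can_offer_l3_handover(seconds_until_condition: float) -> bool:
    """Request a takeover only if the driver has enough warning to re-engage."""
    return seconds_until_condition >= TAKEOVER_BUDGET_S

print(can_offer_l3_handover(45.0))  # toll booth known from the map, 45 s ahead -> True
print(can_offer_l3_handover(0.5))   # sudden sun glare -> False: the system must handle it or stop
```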

And just hope that no roads get closed or diverted on your L4 route ever? Or maybe have a guy with a little red flag walking the route where it changes outside of the L4 boundary?

No. L4 is expected to handle construction zones or road closures. Also, the geofenced area would not be just one single route. Don't be silly. The geofenced area would be an area around the warehouse and the distribution center. So the L4 truck could take a detour and still remain inside the geofenced area.
 
How do you know?
Where does it say that the geo-fenced area cannot be a single route?

A geofence can be any area you want. So sure, in theory, it could be a single route. But in the case of a delivery truck, it would not be practical to geofence to just a single route. So common sense says that the geofenced area would probably not be a single route.
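For what it's worth, a geofence really is just an arbitrary polygon that the planner checks waypoints against. Here is a minimal point-in-polygon sketch; the coordinates are made up and this is obviously nothing like a production implementation.

```python
# Minimal ray-casting point-in-polygon test: is a waypoint inside the geofence?
# Coordinates are made-up (x, y) pairs purely for illustration.
def inside_geofence(point, polygon):
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A geofence covering the warehouse, the distribution center and the roads between them
geofence = [(0, 0), (10, 0), (10, 8), (0, 8)]
print(inside_geofence((5, 4), geofence))   # detour waypoint inside the fence -> True
print(inside_geofence((12, 4), geofence))  # waypoint outside the fence       -> False
```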
 
But in the case of a delivery truck, it would not be practical to geofence to just a single route.
So, the delivery company will have a say in which routes some L4 provider supports?
What happens if the production facility is 100+ miles away from the main DC, and a portion of the drive has only one highway mapped and geo-fenced?

I think you are making a lot of assumptions and stating them as fact.
What is practical for a particular delivery driver or route is not guaranteed to be the focus of the self driving solution provider.