
What does your timeline look like for driverless vehicles?

Thanks for the video! I notice those videos show way more of the road ahead than any camera sensor could ever read. This means they are using HD maps. Do you happen to know if this is generalized L4, or constrained to places where they have very up-to-date maps collected by another device? What about the path planning: are these hand-laid routes?

The part where it goes around an ambulance is fascinating: it waits for other cars to go around first, and seems to "learn" from that. But that is also an interesting behavior given the end goal of not having human-driven cars around.

But the vision-only system can do everything and the radar/lidar can do everything except traffic lights.
How does the radar/lidar system read lane lines or handle a detour sign with an arrow on it?

I bring all of this up because it points out how two supposedly independent systems really aren't, so you shouldn't expect that much improvement in failure rates from combining them.
 
How does the radar/lidar system read lane lines or handle a detour sign with an arrow on it?

I bring all of this up because it points out how two supposedly independent systems really aren't, so you shouldn't expect that much improvement in failure rates from combining them.
Lidar can see signs and lane lines because it also measures reflectivity.
I agree that the whole idea is silly because I doubt that perception is the primary source of failures anyway.
 
Lidar can see signs and lane lines because it also measures reflectivity.
I agree that the whole idea is silly because I doubt that perception is the primary source of failures anyway.
The timeline for Tesla driverless vehicles is whenever the manual's multiple red-triangle warnings, along with this comment, are removed:
"Remain alert at all times and be prepared to take immediate action. Failure to do so can cause damage, injury or death."
2040?
 
Thanks for the video! I notice those videos show way more of the road ahead than any camera sensor could ever read. This means they are using HD maps.

Yes, Mobileye uses special maps, although Mobileye calls them "AV maps" instead of "HD maps". We had a whole debate in the other thread about whether Mobileye's maps are really "HD" or not. They are detailed and precise, but probably not to the level of, say, a Waymo HD map, and they are not built with lidar. Some argue that they are a class of maps called medium-definition maps, or MD maps, because they are in between standard maps and HD maps.

Do you happen to know if this is generalized L4, or constrained to places where they have very up-to-date maps collected by another device?

Mobileye crowdsources all their maps using the front camera in the fleet of millions of Mobileye-powered consumer cars already on the road. So Mobileye has built these maps for large areas of the US and Europe already: basically any road where a Mobileye-powered consumer car with a front camera sold in the last decade or two has driven.

This shows where Mobileye had mapped by 2020:

[Image: mobileyemapping.png, showing Mobileye's mapping coverage as of 2020]

Yes, the L4 system requires these special maps, but Mobileye considers it to be generalized L4 since it works across such large areas.

What about the path planning: are these hand-laid routes?

No, the path planning is done by the car. The routes are not hand-laid.

How does the radar/lidar system read lane lines or handle a detour sign with an arrow on it?

Lidar can read lane lines and signs because it measures reflectivity. The lasers reflect differently off the asphalt of the road than off the paint of lane lines.
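
For anyone curious, here is a toy sketch of the idea (my own illustration with made-up threshold values, not any vendor's actual pipeline):

```python
import numpy as np

def extract_lane_points(points: np.ndarray, intensity: np.ndarray,
                        intensity_thresh: float = 0.7,
                        max_height: float = 0.2) -> np.ndarray:
    """points: (N, 3) xyz in meters; intensity: (N,) normalized to 0..1.

    Lane paint is retroreflective, so its lidar return intensity is much
    higher than bare asphalt; a simple threshold pulls out candidate points.
    """
    near_ground = points[:, 2] < max_height   # keep returns near the road surface
    bright = intensity > intensity_thresh     # paint reflects more than asphalt
    return points[near_ground & bright]
```

A real stack would fit curves through those candidate points and fuse them with the camera's lane detections, but the reflectivity threshold is the core trick.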

I bring all of this up because it points out how two supposedly independent systems really aren't, so you shouldn't expect that much improvement in failure rates from combining them.

Again, nobody said that the two systems are fully independent. Mobileye says that they are nearly independent. The cameras and radar/lidar are independent for everything except traffic lights. So we can still expect a huge improvement in failure rates by combining them.
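
A quick back-of-the-envelope shows why even near-independence matters (all the numbers here are mine and purely illustrative):

```python
# Assumed per-hour miss probabilities, purely illustrative:
p_camera = 1e-4        # camera subsystem misses a hazard
p_radar_lidar = 1e-4   # radar/lidar subsystem misses the same hazard

# Fully independent subsystems: both must miss for perception to fail.
p_both_independent = p_camera * p_radar_lidar   # 1e-8, a 10,000x improvement

# "Nearly" independent: assume 1% of camera failures are shared modes
# (e.g. traffic lights) that the radar/lidar cannot catch either.
shared_fraction = 0.01
p_both_nearly = p_both_independent + shared_fraction * p_camera

print(f"independent: {p_both_independent:.0e}, nearly independent: {p_both_nearly:.0e}")
# Even the "nearly" case (~1e-6) is still ~100x better than either subsystem alone.
```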

I agree that the whole idea is silly because I doubt that perception is the primary source of failures anyway.

Well, you still need reliable perception in order to do reliable autonomous driving. "True redundancy" is Mobileye's approach to try to achieve reliable perception. But don't forget Mobileye's RSS driving policy, which is designed to minimize non-perception failures like bad planning.
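
For reference, the published RSS paper boils the longitudinal rule down to a single worst-case formula. Here is a sketch of it (the parameter values are my assumptions, not Mobileye's tuned numbers):

```python
def rss_safe_longitudinal_distance(v_rear: float, v_front: float,
                                   rho: float = 0.5,
                                   a_max_accel: float = 3.5,
                                   a_min_brake: float = 4.0,
                                   a_max_brake: float = 8.0) -> float:
    """Minimum safe following distance in meters, per the RSS paper.

    v_rear/v_front in m/s; rho is the response time in seconds;
    accelerations in m/s^2 (the values here are illustrative assumptions).
    """
    v_rear_after = v_rear + rho * a_max_accel  # worst case: rear car accelerates during rho
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_rear_after ** 2 / (2 * a_min_brake)  # rear car then brakes gently
         - v_front ** 2 / (2 * a_max_brake))      # while the front car brakes hard
    return max(d, 0.0)

print(rss_safe_longitudinal_distance(25.0, 25.0))  # both at ~56 mph: ≈ 63 m
```

The idea is that if the car always keeps at least this gap, then under the model's assumptions it is never the cause of a rear-end collision, regardless of what perception thinks the lead car will do.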
 
You made my point for me by immediately referencing a level. Be more creative please.
"I think people just need to think out of the box and not adhere so rigidly to the autonomous driving levels."
I'm not the one who described something entirely within the box! :p
Though the main complaint around here is that the boxes are too big, so I am curious: what would be outside the box?
 
You made my point for me by immediately referencing a level. Be more creative please.
"I think people just need to think out of the box and not adhere so rigidly to the autonomous driving levels."

I don't think that is possible. Any automated driving system will necessarily correspond to a level, since the SAE levels are how we classify all automated driving systems.

In fact, you thought you were being creative by coming up with a new "hybrid" type system where there is a driver but the driver can text or zoom in some cases. @Daniel in SD just showed that your idea is not new at all. It is already a well-defined type of autonomous driving, called "SAE Level 3". And some companies like Audi and Mercedes are already planning to roll out your system.
 
I don't think that is possible. Any automated driving system will necessarily correspond to a level, since the SAE levels are how we classify all automated driving systems.

In fact, you thought you were being creative by coming up with a new "hybrid" type system where there is a driver but the driver can text or zoom in some cases. @Daniel in SD just showed that your idea is not new at all. It is already a well-defined type of autonomous driving, called "SAE Level 3". And some companies like Audi and Mercedes are already planning to roll out your system.
We are very early in this long journey. To say it's not possible is a bit premature IMO. My bet is that over time the definitions will get adjusted as we learn more about the technologies. Why isn't that possible?
 
We are very early in this long journey. To say it's not possible is a bit premature IMO. My bet is that over time the definitions will get adjusted as we learn more about the technologies. Why isn't that possible?

Oh, I agree definitions will get adjusted. In fact, the SAE J3016 standard that defines the levels has gone through several revisions. But I don't think the levels will get radically changed, and I don't think we will come up with some new type of autonomous driving that the SAE levels missed. Why? Because the SAE levels cover the entire range of automated driving, from manual driving (Level 0) to fully autonomous everywhere (Level 5). I think they cover every "type" of autonomous driving.

Put simply:
Level 0 covers driving with no automation.
Level 1 covers driving with automation in 1 dimension (steering or speed).
Level 2 covers driving with automation in 2 dimensions (steering and speed).
Level 3 covers autonomous driving with a driver as a back-up.
Level 4 covers autonomous driving in limited conditions with no driver back-up.
Level 5 covers autonomous driving in all conditions with no driver back-up.

And yes, technologies will change. But the levels don't depend on technology. I think any autonomous driving we deploy will still fall in one of those levels.
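
You can even restate that summary as a tiny decision function, which is really my point that the levels tile the whole space (this is my simplification of the summary above, not the full J3016 text):

```python
def sae_level(steering_automated: bool, speed_automated: bool,
              system_drives: bool, driver_is_fallback: bool,
              limited_odd: bool) -> int:
    """Toy classifier matching the simplified level summary above."""
    if not system_drives:
        # The driver still performs the driving task; count automated dimensions.
        return int(steering_automated) + int(speed_automated)   # 0, 1, or 2
    if driver_is_fallback:
        return 3             # system drives, but a human must take over on request
    return 4 if limited_odd else 5   # no human fallback; the ODD decides 4 vs 5

assert sae_level(False, False, False, False, False) == 0  # plain manual car
assert sae_level(True, True, False, False, False) == 2    # lane keeping + cruise
assert sae_level(True, True, True, True, True) == 3       # traffic-jam chauffeur
assert sae_level(True, True, True, False, True) == 4      # geofenced robotaxi
```

Every combination of inputs lands on some level, which is why I don't think there is a "new type" left to invent.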
 
Level 3 covers autonomous driving with a driver as a back-up.
That's Level 3...

And an L3 Honda Legend can go and hit a truck with no warning, and will not satisfy what the OP was looking for.

Stupid levels.

Show me the failure rate (Mean Distance Between Failures - MDBF). I'll trust the system when it's shown to be 5x to 10x better than humans in the ODD with real-world testing.

BTW, does anyone know what the limousine accident rates are?
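
To put rough numbers on that MDBF bar (the baseline is an assumption, using the ~1 fatality per 90M miles figure that comes up later in this thread; defining "failure" as any collision would use a much smaller baseline):

```python
human_mdbf_miles = 90e6   # assumed baseline: ~1 fatal failure per 90M human-driven miles

for multiplier in (5, 10):
    target = multiplier * human_mdbf_miles
    print(f"{multiplier}x better than human -> MDBF of at least {target:.1e} miles")
# 5x -> 4.5e+08 miles, 10x -> 9.0e+08 miles, demonstrated inside the ODD
```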
 
And an L3 Honda Legend can go and hit a truck with no warning, and will not satisfy what the OP was looking for.

Stupid levels.

Show me the failure rate (Mean Distance Between Failures - MDBF). I'll trust the system when it's shown to be 5x to 10x better than humans in the ODD with real-world testing.

BTW, does anyone know what the limousine accident rates are?
You’re saying the box is too big. I don’t think it’s thinking outside the box to say that driving automation should be safe.
 
You’re saying the box is too big. I don’t think it’s thinking outside the box to say that driving automation should be safe.
Yes, it's not "outside" the box, and it is very basic.

Yet SAE has nothing to say about it. Nor does the UN standard that companies like Merc, Volvo, Honda use to declare themselves Level 3 certified.

That is why I call the levels "Stupid", as in of much lower than median intelligence.
 
Show me the failure rate (Mean Distance Between Failures - MDBF). I'll trust the system when it's shown to be 5x to 10x better than humans in the ODD with real-world testing.
I agree this would be awesome to have. The problem is that a "failure" is a "collision". You don't ethically determine accident rates by going out into real-world testing and seeing how often the car hits a completely uninvolved civilian. Your concern for trust in a system should be just as great sitting in the car next to the one being autonomously driven as being inside it.

So we put safety drivers in. Now they take over before the collision, and we blame them if a collision happens. So what's our rate without the human? We have no idea.

By not letting the drivers allow a collision to happen, we also don't get any severity-of-collision data. What if autonomy is good at saving lives, but actually causes WAY more property damage? Like fatalities move from 1:90M miles to 1:500M miles, but in the process, we have $1T a year in "fender benders" and non-fatal injuries? What do we even mean by "better than humans in the ODD" when there is not just one singular outcome to a collision?
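
The arithmetic behind that scenario (annual US mileage is my assumption, roughly 3.2T miles):

```python
us_annual_miles = 3.2e12   # assumed annual US vehicle miles traveled

fatalities_human = us_annual_miles / 90e6   # ~35,600/yr at 1 per 90M miles
fatalities_av = us_annual_miles / 500e6     # ~6,400/yr at 1 per 500M miles

print(f"lives saved per year: {fatalities_human - fatalities_av:,.0f}")
# ~29,000 lives saved per year, yet a $1T/yr fender-bender bill could still
# make that trade economically impossible.
```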

This is why we keep coming back to intervention rate. I'm at "show me that the human backup didn't decide to intervene more often than once per 1M miles, across a statistically valid sample of at least 100M miles and all 4 seasons, covering the area you will release in, and then let's talk about real-world rate testing".
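
And here is why the 100M-mile sample size matters statistically; this is a rough normal approximation to a Poisson confidence bound (my own sketch, with a made-up intervention count):

```python
import math

def rate_upper_bound_95(events: int, miles: float) -> float:
    """Approximate one-sided 95% upper confidence bound on events per mile,
    assuming interventions arrive roughly as a Poisson process."""
    return (events + 1.96 * math.sqrt(events) + 1.92) / miles

# 80 interventions observed over 100M miles: is the true rate below 1 per 1M miles?
miles_per_intervention = 1 / rate_upper_bound_95(80, 100e6)
print(f"95% confidence: at least {miles_per_intervention:.2e} miles per intervention")
# ≈ 1.0e+06 miles per intervention: it barely clears the bar even with 100M miles of data.
```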
 
I agree this would be awesome to have. The problem is that a "failure" is a "collision". You don't ethically determine accident rates by going out into real-world testing and seeing how often the car hits a completely uninvolved civilian.
That's why you put safety drivers in and analyze disengagements. Duh.

The UN standard for L3 driving on highways (low speed, traffic jams, etc.) literally says you need to test for a minimum of "5 minutes"... not millions of miles.

Levels are stupid.
 
Yes, it's not "outside" the box, and it is very basic.

Yet SAE has nothing to say about it. Nor does the UN standard that companies like Merc, Volvo, Honda use to declare themselves Level 3 certified.

That is why I call the levels "Stupid", as in of much lower than median intelligence.
Yep, not in scope, and I can see why. Measuring and defining safety is very difficult and somewhat cultural.
 
Yep, not in scope, and I can see why. Measuring and defining safety is very difficult and somewhat cultural.
Stop with the "cultural" BS. Lots of other standards have safety built in, including allowed tolerances, MTBF, etc.

The fact is the standards are set up in such a way that the "test" can be done by a 3rd party in a day. That is what thinking "inside the box" looks like.

What is stupid is not just the levels - but defending them as well.
 
Show me the failure rate (Mean Distance Between Failures - MDBF). I'll trust the system when it's shown to be 5x to 10x better than humans in the ODD with real-world testing.
That's why you put safety drivers in and analyze disengagements. Duh.
Thanks for the detailed take on my post, which never considered that! ;)
So we put safety drivers in. Now they take over before the collision, and we blame them if a collision happens. So what's our rate without the human? We have no idea.

By not letting the drivers allow a collision to happen, we also don't get any severity-of-collision data. What if autonomy is good at saving lives, but actually causes WAY more property damage? Like fatalities move from 1:90M miles to 1:500M miles, but in the process, we have $1T a year in "fender benders" and non-fatal injuries? What do we even mean by "better than humans in the ODD" when there is not just one singular outcome to a collision?