Can level 5 Autonomy be achieved with Hardware suite 2.0?

I think it's consumers that get confused about the performance.

Elon has always maintained that he judges self-driving against a human driver. He's looking for 2-10 times better than the average human.
With average humans, we get into accidents over the slightest things: wind, rain, snow, texting on a phone, nose picking, daydreaming, french fries, etc.
THIS.

It doesn't have to be perfect. It just has to be at least 2x as good as a human.

@Reeler, did you think a computer would take down the Go world champion in 2016? That's 10 years sooner than some experts thought it would happen, considering the game requires creativity, even the experts can barely keep score while a game is in progress, and there are more possible board positions in Go than atoms in the universe.
 
Go is a very confined rule set with no spurious events. Typical humans are beaten in these games with brute-force approaches. Even with a Cray supercomputer from Nvidia, brute-force approaches don't work well in the real world.

Don't get me wrong, I WANT TO BELIEVE. But no Kool-Aid mustache here.
 
Go is a very confined rule set with no spurious events. Typical humans are beaten in these games with brute-force approaches. Even with a Cray supercomputer from Nvidia, brute-force approaches don't work well in the real world.
But that's just the point; there are about 10^170 legal board positions in Go, more than there are atoms in the universe. It's a problem space that's completely immune to brute force, yet it was beaten in large part by deep neural networks. That's why it was such a landmark achievement.

It's not a perfect analog to self-driving, but these narrow AI challenges are falling quickly. Self-driving is narrow AI with many edge cases.
 
I agree with the original post: it may very well be that the closer full autonomy comes to reality, the clearer it will become that AP 2.0 won't fully cut it. One element used in other autonomous vehicles is LIDAR, which images the surrounding environment very differently than visual, sonar, and radar data acquisition do. It could be that 3.0 will be required to achieve the goal.

As you probably know, LIDAR is bulky and needs an expensive rotating laser assembly. Its advantage is that it can see non-metallic objects, such as pedestrians.

As you may know, Tesla has found that, using a phased signal from the existing radar together with the cameras, they are able to do the same as LIDAR with far less bulk and expense. Tesla says it can do level 5 autonomy at this time with its present level (v. 2.0) of hardware.

Of course, there will be improvements. And they will be over the air, just like now.
 
As you probably know, LIDAR is bulky and needs an expensive rotating laser assembly. Its advantage is that it can see non-metallic objects, such as pedestrians.

As you may know, Tesla has found that, using a phased signal from the existing radar together with the cameras, they are able to do the same as LIDAR with far less bulk and expense. Tesla says it can do level 5 autonomy at this time with its present level (v. 2.0) of hardware.

Of course, there will be improvements. And they will be over the air, just like now.

You know that LIDAR prices are falling rapidly.
 
Go is a very confined rule set with no spurious events. Typical humans are beaten in these games with brute-force approaches. Even with a Cray supercomputer from Nvidia, brute-force approaches don't work well in the real world.

Don't get me wrong, I WANT TO BELIEVE. But no Kool-Aid mustache here.
Everything about this comment is wrong, as noted by 3Victoria and Alketi.

Also, a Cray is built by Cray, which may or may not use Nvidia cards.

AlphaGo is nowhere near the fastest supercomputer on Earth (soon to be Summit [IBM/Nvidia] at ORNL), and neither could brute-force the game of Go.

The driving task is much, much simpler: stay on the road, follow the map, obey signs, don't hit stuff. You can think up corner cases all day long, but once they are seen a few times they can be trained into the networks and are no longer corner cases. With even 1 billion miles of data you can very quickly cover more edge cases than many people will encounter in their entire lifetimes.

Let's think of it this way... more data = more experience = better self-driving.

On average, let's say people drive 13,000 miles per year. Let's also say Tesla only sells 500,000 cars per year, every year, without increasing.

Each year's batch of cars keeps driving in every subsequent year, so the cumulative fleet mileage after N years is 13,000 mi * 500,000 cars * (1 + 2 + 3 + ... + N) = 13,000 * 500,000 * N(N+1)/2.
So take N = 5 years, starting from 0 miles: at the end of five years there will be 97.5 billion miles of training data.

If I as a human being drove 13,000 miles per year, it'd take me 7.5 million years to accumulate that much experience.
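
Just to make that arithmetic concrete, here's a minimal sketch of the fleet-miles estimate above; the 13,000 mi/year and 500,000 cars/year figures are simply this post's assumptions, not Tesla data:

```python
# Minimal sketch of the fleet-mileage arithmetic above (assumed figures, not Tesla data).
MILES_PER_CAR_PER_YEAR = 13_000
CARS_SOLD_PER_YEAR = 500_000

def cumulative_fleet_miles(years: int) -> int:
    # Cars sold in year k keep driving for (years - k + 1) years, so total
    # fleet-years = 1 + 2 + ... + years = years * (years + 1) / 2.
    fleet_years = years * (years + 1) // 2
    return MILES_PER_CAR_PER_YEAR * CARS_SOLD_PER_YEAR * fleet_years

total = cumulative_fleet_miles(5)
print(f"{total / 1e9:.1f} billion miles")                                 # 97.5 billion miles
print(f"{total / MILES_PER_CAR_PER_YEAR / 1e6:.1f} million human-years")  # 7.5 million years
```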
 
If I as a human being drove 13,000 miles per year, it'd take me 7.5 million years to accumulate that much experience.
While I agree with the conclusion (that a computer can accumulate a lot more experience) and that a computer tends to respond with a good degree of precision, the "it would take me 7.5 million years" part is completely wrong.
We learn a lot faster than a deep network: we need far less data to learn from, and we can extrapolate more from the same input.
Just consider a teenager: he can learn to drive in a couple of months after travelling less than 1,000 km, while a computer with 1,000 km of data can't even get out of the garage.
But, as said, with some billions of miles a computer can have the same experience as a teenager, and it will never have to re-learn, so it will just keep getting better and better. At what speed? We'll see.
If a computer needs billions upon billions of data points, so be it; it's just a matter of time.
 
As you may know, Tesla has found that, using a phased signal from the existing radar together with the cameras, they are able to do the same as LIDAR with far less bulk and expense.

No, they are not able to do the same as LIDAR with Tesla's existing radar.

But that is beside the point, because Tesla is going to use cameras and image recognition.
 
No, they are not able to do the same as LIDAR with Tesla's existing radar.
But that is beside the point, because Tesla is going to use cameras and image recognition.
As far as 3D mapping goes, radar is relatively the same, just at lower resolution, but it gains the ability to see through rain, fog, snow, even people.

Does anyone know anything LIDAR can concretely provide that the camera/radar/ultrasonic combo can't? (that you actually need for driving)
 
As far as 3D mapping goes, radar is relatively the same, just at lower resolution, but it gains the ability to see through rain, fog, snow, even people.

Does anyone know anything LIDAR can concretely provide that the camera/radar/ultrasonic combo can't? (that you actually need for driving)

I believe LIDAR is superior to radar if you want to see something against terrain clutter.

But Tesla's cameras with image recognition should do the same as LIDAR, only more cheaply.
 
The media keep saying level 5, but it seems Elon only mentioned that as a possibility with the new hardware (not a hard promise). Tesla, in writing, promises "self-driving," which may not necessarily be level 5 (however you want to define level 5).

I'm not even sure whether Elon said level 5 specifically or reporters interpreted full self-driving as level 5.

 
I believe LIDAR is superior to radar if you want to see something against terrain clutter.

But Tesla's cameras with image recognition should do the same as LIDAR, only more cheaply.

If you're operating in a restricted service area (as I think all L5 will be for a number of years), you can have a detailed map of everything, recently created with high-performance LIDAR, shared across a Tesla fleet that uses cameras, RADAR, and ultrasonics. The fleet would also share data.

L5 mobility-app fleets, like Uber's, will have restricted service areas that can be premapped to high precision and refreshed frequently. The other sensors then place the vehicle within this data set and detect things that aren't persistent features.
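
As a rough sketch of that last idea (hypothetical coordinates and a made-up matching threshold, not anything Tesla or Uber has published): once the car is localized against the prebuilt LIDAR map, any detection with no nearby counterpart in the map can be flagged as a non-persistent object.

```python
import math

# Hypothetical sketch: separate persistent map features from transient objects.
# prior_map:  2D positions (meters, map frame) of features from a recent LIDAR survey.
# detections: positions of objects the car's cameras/radar/ultrasonics see right now,
#             already transformed into the same map frame after localization.
def split_detections(prior_map, detections, match_radius_m=1.5):
    persistent, transient = [], []
    for d in detections:
        near_map_feature = any(math.dist(d, m) <= match_radius_m for m in prior_map)
        (persistent if near_map_feature else transient).append(d)
    return persistent, transient

prior_map = [(0.0, 5.0), (3.0, 5.0), (6.0, 5.0)]   # e.g. surveyed poles and curbs
detections = [(0.1, 5.1), (4.5, 2.0)]              # the second matches nothing in the map
_, dynamic = split_detections(prior_map, detections)
print(dynamic)   # [(4.5, 2.0)] -> treat as a new or moving obstacle
```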
 
If you're operating in a restricted service area (as I think all L5 will be for a number of years), you can have a detailed map of everything, recently created with high-performance LIDAR, shared across a Tesla fleet that uses cameras, RADAR, and ultrasonics. The fleet would also share data.

L5 mobility-app fleets, like Uber's, will have restricted service areas that can be premapped to high precision and refreshed frequently. The other sensors then place the vehicle within this data set and detect things that aren't persistent features.
This is not generally feasible, as high-res maps need to be constantly updated, which would require a large amount of data to constantly share. If every car were equipped with LIDAR and connected to the future SpaceX satellite network, then it would be easily doable.

Right now the focus is on high-res maps constantly updated using the hardware sensors in every car. It's much easier to crowdsource the data. There's no need for LIDAR maps.

Radar can still get pretty fine grained:
 
Let's not overthink this. Autopark doesn't even work on the Teslas reliably. Everyone is talking like L5 will happen with this sensor suite.

Autopark is a relatively constrained problem of recognizing an empty spot and then navigating into it. Total fail of an implementation by Tesla.

Anyone thinking this crew at Tesla will get us to L5 is smoking crack.
 
Let's not overthink this. Autopark doesn't even work on the Teslas reliably. Everyone is talking like L5 will happen with this sensor suite.

Autopark is a relatively constrained problem of recognizing an empty spot and then navigating into it. Total fail of an implementation by Tesla.

Anyone thinking this crew at Tesla will get us to L5 is smoking crack.

Autopark is an extremely constrained subset of autonomous driving.
 
Let's not overthink this. Autopark doesn't even work on the Teslas reliably. Everyone is talking like L5 will happen with this sensor suite.

Autopark is a relatively constrained problem of recognizing an empty spot and then navigating into it. Total fail of an implementation by Tesla.

Anyone thinking this crew at Tesla will get us to L5 is smoking crack.

But nobody has seen Autopark on an AP2 car yet, and AP2 is the hardware that's supposed to deliver L5.