Elon: "Feature complete for full self driving this year"

This is all very interesting, but then why is Waymo (and, I think, all other autonomous car developers) using lidar? Tesla seems to be the only company trying to do it without it.

The short answer is that "camera only" is not yet good enough to do safe FSD; our computer vision is not reliable or accurate enough yet. So companies like Waymo and Cruise include lidar to increase the reliability and accuracy of perception. Elon thinks computer vision can become good enough with more machine learning, so he is pressing ahead on solving vision that way so that he won't need lidar.
 

Analogies aren't great, but I think this fits:
Goal: riding a two-wheeled bike solo.
Approach 1: training wheels (additional hardware to start with that is not needed later).
Approach 2: a parent supporting the bike and walking/running alongside (more work needed during development, longer training period, less material cost).
 
Lidar is accurate to within 1 cm. A camera doesn't give you good data; it doesn't give you distance, so you're left to actually guess it. Yet you call lidar, which uses math to get an accurate distance, "guessing," while you call running camera data through a NN that says "I'm 54% sure this is 30 m away, but also 35% sure it's 40 m away" accurate.
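To make that contrast concrete, here's a minimal sketch of the two kinds of range output; all numbers are invented for illustration and this is not any real perception stack's API:

```python
# Hypothetical illustration: lidar range vs. camera-NN range estimate.
# All numbers are made up for the sake of the example.

lidar_range_m = 30.02  # direct time-of-flight measurement, ~1 cm noise

# A vision network typically emits a distribution over depth hypotheses.
camera_depth_dist = {30.0: 0.54, 40.0: 0.35, 50.0: 0.11}  # distance -> probability

# One way planning code might collapse that distribution:
expected_range = sum(d * p for d, p in camera_depth_dist.items())

# Or, more conservatively, take the nearest distance with non-trivial mass,
# since underestimating range is safer than overestimating it.
conservative_range = min(d for d, p in camera_depth_dist.items() if p > 0.1)

print(f"lidar:             {lidar_range_m:.2f} m")
print(f"camera, expected:  {expected_range:.2f} m")   # 35.70 m here
print(f"camera, cautious:  {conservative_range:.2f} m")
```

Either way the planner is acting on an estimate with spread, whereas the lidar number is a single measurement.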

LIDAR tells you "a point in this direction is X distance away," and it is constantly sweeping. What you need to know for safety purposes is not "a point in this direction is X distance away," but rather "an object of size S is X distance away." For some things the difference doesn't matter (e.g. a car); for other things it does (e.g. "Is there something in the road, or did the road just rise?").
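One common way to attack that "obstacle or road rise" question is to compare returns against a ground plane and flag anything that sits well above it. A toy sketch, assuming a flat ground plane and made-up points (real systems fit the plane rather than assuming it):

```python
# Sketch: "something in the road" vs. "the road just rises," using the
# height of lidar returns above an assumed flat ground plane.
# Points and threshold are invented for illustration.

points = [
    # (forward_m, height_above_ground_m)
    (18.0, 0.02),  # road surface
    (19.0, 0.03),  # road surface
    (20.0, 0.45),  # return well above the plane -> likely an obstacle
    (21.0, 0.05),  # road surface again
]

OBSTACLE_HEIGHT_M = 0.20  # arbitrary cutoff for this example

obstacles = [(x, h) for x, h in points if h > OBSTACLE_HEIGHT_M]
if obstacles:
    x, h = min(obstacles)  # nearest flagged return
    print(f"return {h:.2f} m above ground at {x:.1f} m ahead -> treat as obstacle")
else:
    print("only ground-like returns -> treat as road geometry")
```

A genuine road rise shows up as return heights that grow smoothly with distance, which the plane fit absorbs; a box shows up as a step.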
 

The "sweeping" is only for spinning lidar, not flash lidar and it is essentially instantaneous since the lidar works at the speed of light. Lidar absolutely tells you size of objects, not just points.

 
Lidar absolutely tells you the size of objects, not just points.

Lidar post-processing tells you that, same as camera post-processing. However, lidar can differentiate a zebra in a herd and distance-check kangaroos better.
Regarding videos: people watching sensor data is meaningless from a SW-implementation POV; we are great at finding patterns (even when they aren't there).
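For what it's worth, the post-processing both posts are gesturing at is usually some form of clustering: group nearby returns, then read an object's extent off each cluster. A toy 1-D version with invented numbers (real stacks cluster 3-D point clouds with smarter algorithms):

```python
# Toy example: turning raw lidar returns into object extents by clustering.
# Real systems work on 3-D point clouds; this 1-D version just shows the idea.

returns_m = [12.1, 12.3, 12.4, 25.0, 25.2, 25.3, 25.6]  # sorted lateral positions
GAP_M = 1.0  # start a new cluster whenever the gap between returns exceeds this

clusters, current = [], [returns_m[0]]
for r in returns_m[1:]:
    if r - current[-1] > GAP_M:
        clusters.append(current)
        current = [r]
    else:
        current.append(r)
clusters.append(current)

for c in clusters:
    print(f"object spanning {min(c):.1f}-{max(c):.1f} m (extent {max(c) - min(c):.1f} m)")
```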
 
The "sweeping" is only for spinning lidar, not flash lidar and it is essentially instantaneous since the lidar works at the speed of light. Lidar absolutely tells you size of objects, not just points.

Neat demonstration.

However, autonomy companies that are focusing on LIDAR will still need to have very good camera-based systems as well. I can't imagine that LIDAR will ever be able to identify the color of stop lights, read street signs, or identify painted road markings.

When it comes to building the camera-based half of those autonomous systems, Tesla will have an advantage in the diversity of training data.
 

All autonomous car companies like Waymo or Cruise use both cameras and lidar. They don't do FSD with just lidar. Yes, you need cameras for things like stop lights and street signs. But with camera + lidar, you can divide the labor. You can rely on lidar for things like tracking cars and pedestrians, avoiding obstacles and detecting driveable space which lidar is extremely reliable at doing. And you can use cameras for reading stop lights, traffic signs, lane markings and other road markings which camera vision is very good at. So your camera vision only needs to be perfect for what the cameras are in charge of and for what it is already good at. With camera only, the camera vision needs to be perfect to do all of it. So there is a higher burden placed on the camera system.
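That division of labor can be pictured as a routing table in the perception stack; the sketch below is purely illustrative of the argument, not any particular company's architecture:

```python
# Illustrative only: which sensor a camera + lidar stack might treat as
# primary for each perception task, per the "division of labor" argument.

TASK_PRIMARY_SENSOR = {
    "obstacle_detection":  "lidar",   # direct ranging, extremely reliable
    "drivable_space":      "lidar",
    "pedestrian_tracking": "lidar",   # typically cross-checked with camera
    "traffic_lights":      "camera",  # lidar can't see color
    "traffic_signs":       "camera",
    "lane_markings":       "camera",
}

def primary_sensor(task: str) -> str:
    """Return the sensor this toy stack leans on for a given task."""
    return TASK_PRIMARY_SENSOR.get(task, "camera+lidar fused")

# With camera-only, every entry above collapses to "camera": the camera
# stack has to be near-perfect at all of it instead of a subset.
print(primary_sensor("traffic_lights"))      # camera
print(primary_sensor("obstacle_detection"))  # lidar
```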
 
Mobileye ditched lidar in their test car and seems to be doing fine
We have no way of knowing how well Mobileye is doing. They can do impressive-looking demos just like many other companies. The only thing we know for sure is that they have not yet deployed an L3-L5 vehicle and that they say they're going to use LIDAR to do so (or at least that was their position a month ago; has it changed?).
Mobileye CES presentation
 
Mobileye ditched lidar in their test car and seems to be doing fine

That is misleading. Mobileye is testing one FSD system without lidar (camera only) to perfect their camera vision and another with lidar, and then plans to combine the two in order to achieve the safety needed for L4 and L5 robotaxis. Amnon talks about this in the CES 2020 presentation. So ultimately, Mobileye will still use lidar in their final L4/L5 FSD.
 

Granting that Tesla has an advantage in the diversity of its training data, Waymo has an advantage in the depth and experience of information processing of its programming team, which is what Google was founded on and is still Alphabet's strong suit. This race is still wide open and it's way too early to predict a winner.
 

Yes, it's my understanding that everyone who is serious about self-driving wants as much sensor input as they can get, including cameras AND lidar. Tesla is fighting the trend by rejecting lidar. My guess is that they'll HAVE to include it at some point; Elon will eat some crow (so to speak) and there will be lidar on the car.

You can only go so far with camera-only, and I think we're seeing the limits of it now, in our cars.

Again, I'm going to put in a big vote of confidence for V2X. The SHARED knowledge of the road is the only way we can get 'free' redundancy. The cars up ahead will have a chance to spot the box or obstruction in the road. By the time you are near that thing, you should not even have to see it - you'll be told about it by the 'road community'.

V2X has a long way to go, but I truly believe this will be game-changing.
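To make the "road community" idea concrete, a V2X hazard report is conceptually just a small broadcast record that following cars merge into their world model before their own sensors can see the problem. A schematic sketch with invented field names (real deployments use standardized message sets such as SAE J2735 or ETSI ITS, not this):

```python
# Schematic V2X hazard sharing: a lead car broadcasts an obstacle report and
# following cars fold it into planning before they have line of sight.
# Field names are invented for illustration, not a real message format.

from dataclasses import dataclass
import time

@dataclass
class HazardReport:
    lat: float
    lon: float
    kind: str           # e.g. "debris", "stopped_vehicle"
    reported_at: float  # unix timestamp
    reporter_id: str

def broadcast(report: HazardReport, channel: list) -> None:
    channel.append(report)  # stand-in for the actual radio / network layer

def relevant_hazards(channel: list, max_age_s: float = 60.0) -> list:
    now = time.time()
    return [r for r in channel if now - r.reported_at < max_age_s]

# Car A spots a box and broadcasts; car B, still out of sight, already knows.
channel = []
broadcast(HazardReport(37.42, -122.08, "debris", time.time(), "car_A"), channel)
for h in relevant_hazards(channel):
    print(f"hazard ahead: {h.kind} at ({h.lat}, {h.lon}), reported by {h.reporter_id}")
```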
 
So the first n cars hit the box until the combined false negative probability collapses beneath some threshold? Where does that threshold live relative to a false positive limit?
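The arithmetic behind that question: if each car independently misses the box with probability p, the chance that all n of them miss it is p^n, while the chance that at least one falsely reports a box grows as 1 - (1 - q)^n. A back-of-the-envelope with assumed error rates:

```python
# Back-of-the-envelope for the "first n cars" question.
# p_fn and p_fp are assumed per-car error rates, treated as independent.

p_fn = 0.10  # probability a single car fails to detect the box
p_fp = 0.01  # probability a single car falsely reports a box

for n in (1, 2, 3, 5):
    everyone_misses = p_fn ** n          # combined false-negative probability
    someone_cries_wolf = 1 - (1 - p_fp) ** n
    print(f"n={n}:  P(all miss)={everyone_misses:.4f}   "
          f"P(at least one false alarm)={someone_cries_wolf:.4f}")

# The threshold question is where "trust the road community" should sit:
# lower thresholds cut residual misses but accumulate nuisance alerts.
```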
 
Hmmm. I have no idea what your question even means.

 

I agree about V2X. My question: can this be accomplished with 4G, or will it need 5G? Obviously 5G would allow for a lot more information sharing, but is it needed for useful V2X? And does every car communicate with every other nearby car that uses the same system, or does every car communicate with a central supercomputer that acts as a clearinghouse? I imagine that meaningful V2X is a very long way off.

I know there are apps now that allow drivers to report road conditions and see what other drivers have reported, but an autonomous system would be sending and receiving a lot more information.
 
I'm still new to V2X, but a project at work is moving forward and I'm going to learn this first-hand (coding, testing, deploying) over the next year.

I've heard that V2X is taking off a bit faster in China, and it's not clear to me whether the different countries will use different frequency bands.

I expect this to be refined over time; the better designs will stay and the inferior ones will get dropped. It's an evolution; no one expects to flip a switch one day and have it all work. It's going to take time and there will be growing pains, like all new disruptive tech.

But once it gets worked out, I do think it will help immensely. More intelligence (more data points from distributed sources) is a Good Thing(tm), nothing to be shot down or feared or made fun of. (I see all kinds of 'noise' about fighting V2X; I'm at a loss to understand why Tesla fans would FIGHT a tech that will only help everyone, industry-wide.)
 
Tesla already offers insurance in CA and plans to expand to other states.

If the software is provably 10x or 100x safer than a human, it'll obviously be a LOT cheaper to insure too.

Take what you pay for insurance. Divide it by 10. Multiply it by the number of FSD cars Tesla has sold.

Now multiply it by 10 for the inevitable negligence claims because they are a corporation and their engineering process will be under scrutiny.
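Plugging made-up numbers into that back-of-the-envelope (the premium, fleet size, and multipliers below are assumptions for illustration, not Tesla figures):

```python
# Back-of-the-envelope from the post above, with assumed inputs.

annual_premium_usd = 1500      # assumed typical premium per car per year
fsd_cars_sold      = 500_000   # assumed size of the FSD fleet
safety_factor      = 10        # "provably 10x safer" -> 1/10th the expected claims
liability_factor   = 10        # corporate negligence exposure multiplier

per_car_cost   = annual_premium_usd / safety_factor  # $150/yr
fleet_base     = per_car_cost * fsd_cars_sold        # $75M/yr
fleet_adjusted = fleet_base * liability_factor       # $750M/yr

print(f"per-car expected cost:        ${per_car_cost:,.0f}/yr")
print(f"fleet, before liability:      ${fleet_base:,.0f}/yr")
print(f"fleet, with liability factor: ${fleet_adjusted:,.0f}/yr")
```

Note that, as written, the divide-by-10 and multiply-by-10 cancel, so the estimate lands back at roughly premium times fleet size.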