Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

AutoX CEO bashes lidar

Jianxiong Xiao, the founder and CEO of the autonomous driving startup AutoX, has some harsh words on lidar:

[Xiao] explains that the sensors have a lower resolution than even the cheapest cameras — 64 pixels vertically, compared to a VGA camera that has a vertical resolution of 480 pixels.

Xiao also says that lidar doesn’t cope well in extreme conditions. Hot and cold temperatures can throw off the sensor calibration, which could disrupt the data produced by the sensors. Xiao claims most lidar will fail to make it through a year.

“If we are talking about self-driving cars in the four years, five years, it’s going to be very difficult to have any automotive-grade hardware for the lidar perspective,” Xiao says.

In rainy conditions, the drops of water can also diffract the laser beam. A puddle of water may act like a mirror, shooting the laser off elsewhere. That means the beam may never make it back to the sensor, losing the information in the process. Cameras have a similar issue with rain collecting on the lens, but Xiao’s team has designed a wiper to clean the lens mid-driving.
 
...harsh words...

It's a teaser. After blasting LIDAR he then conceded that "it all depends on whether the price is right."

I didn't know that LIDAR fails after a year. $80,000 replacement per year is a lot!

Ford demonstrates that its LIDAR can work in rain and snow because it's just a matter of algorithms!

The article mentioned one critical LIDAR capability: "depth".

How can a camera interpret that a tractor-trailer is not a 2-dimensional object that's paper thin?

LIDAR can detect that the tractor-trailer is not paper thin but rather a big 3-dimensional object!

It's just like in the old days: I would have said I was very satisfied with the GM EV-1's range of 70 miles.

The excuses for not wanting more range were that it was undesirable, or too heavy, too inefficient, too bad... but the truth was: more range was simply much more expensive at that time.
 
You seem like a man on a mission @Trent Eady judging by all the threads you are starting. :) Good for you, I get passion.

I would recommend opening up your views a bit, though. If you really want to learn about the benefits of Lidar as a redundant component of autonomous systems, may I recommend not reading from a vision-based start-up pitching its product.

Self-driving redundancy is really an interesting topic where the different strengths and weaknesses of the sensor types really show. May we live in interesting times.
 
may I recommend not reading from a vision-based start-up pitching its product.
May I recommend not reading from a lidar-based start-up pitching its product.

I have two eyes and two ears. I don't have a lidar, nor radar.
Cameras are cheap. Lidar is not cheap. It may get cheaper, but so may cameras. One lidar is not redundancy, 20 cameras is redundancy.
 
I have two eyes and two ears. I don't have a lidar, nor radar.
Cameras are cheap. Lidar is not cheap. It may get cheaper, but so may cameras. One lidar is not redundancy, 20 cameras is redundancy.

20 cameras would be nice redundancy as well, no argument from me there. But 20 cameras complemented by 360-degree lidar and radar would be even better.

See, my eyes are attached to feet and hands that can operate cleaning apparatus...

May I recommend not reading from a lidar-based start-up pitching its product.

I'm not. I'm using my brains and the fact that I've lived in a four-season area for three years with a Tesla that can't see anything backwards on a bad weather day. One of these years has been with an AP2 car - it is just as bad there. I can see it with my own eyes, the camera basically goes blind all the time.

Sure would be nice to have redundancy there - radar, lidar, more cameras, whatever they can throw at the problem. Because this car sure as heck won't be reliably self-reversing anywhere for part of the year. Sure, it can use ultrasonics in some situations, but that's not self-driving, given their limits.

We shall see how other angles fare as they enable the technology.
 
But 20 cameras complemented by 360-degree lidar and radar would be even better.
It boils down to:
- what can one lidar do that X cameras cannot do? X being the number of cameras you get for the cost of one lidar.
and
- can one have purely vision-based or purely lidar-based autonomous driving?

Cameras see everything there is to see; does lidar?
 
May I recommend not reading from a lidar-based start-ups pitching their product.

I have two eyes and two ears. I don't have a lidar, nor radar.
Cameras are cheap. Lidar is not cheap. It may get cheaper, but so may cameras. One lidar is not redundancy, 20 cameras is redundancy.

Depends on what you mean by redundancy and what you are trying to protect against. If you are trying to protect against camera failure, then sure, you can have twenty cameras. If you are trying to protect against navigation system error, then even a thousand cameras is not redundancy. All you will have is a stack of navigation equipment that suffers from the same types of faults. If a fault condition occurs, they will all go down.
 
It boils down to:
- what can one lidar do that X cameras cannot do? X being the number of cameras you get for the cost of one lidar.
and
- can one have purely vision-based or purely lidar-based autonomous driving?

Cameras see everything there is to see; does lidar?

Nobody, absolutely nobody, here is disputing the need for vision for autonomous driving. Least of all me. What I'm asking is what redundancy would be useful and/or needed.

You keep ignoring my point about 360-degree radar. I would be very happy with 360-degree radar redundancy (or even just corner radars in all corners in addition to a front radar). Of course even happier with 360-degree radar + lidar + vision, but any secondary long-range sensor in addition to vision would be welcome... Radar can see through objects, which no vision can - this is one reason why 360-degree radar would be useful.

As for what lidar can see that vision can't, a few things: it can see in darkness. This would be especially useful for seeing animals, objects and cars with their headlights off - possibly approaching the car from any direction. Radar can do this too, but lidar can do it with much higher accuracy. It is also probable that lidar and vision systems have different weakness profiles when it comes to bad weather, and quite possible each could complement the other in such scenarios.

Finally, lidar is a very low-cost operation performance-wise, with great accuracy in e.g. distance estimation, so it is great for redundancy if you are unable to operate 20 cameras, e.g. due to processing-power constraints. Whether and where the cost curves of lidars and of the processing required for redundant vision cross remains to be seen, of course. But this is a potential benefit.

But once more, to be clear, my biggest disappointment - after the state of the software - in AP2 is the lack of any long-range redundancy on the sides and towards the rear. Even just 4 corner radars added to the current suite would IMO be a massive improvement in redundancy.
 
You seem like a man on a mission @Trent Eady judging by all the threads you are starting. :) Good for you, I get passion.

Stop patronizing and insulting @Trent Eady . Stop. It's a control technique - an attempt to place yourself in a dominant position of authority over another person. Stop.

I would recommend opening up your views a bit, though. If you really want to learn about the benefits of Lidar as a redundant component of autonomous systems, may I recommend not reading from a vision-based start-up pitching its product.

I recommend you stop insulting @Trent Eady 's sources - stop. You are fine with commercial sources of information which have their own biases as long as they support your anti-Tesla narrative. You are a cancer to TMC and you do your best to prevent open discussion from all viewpoints because you consistently attempt to discredit TMC members and their sources who post anything in support of Tesla's approach.

We all know Lidar's benefits. We all know that Professor X has a startup. That does not mean his information is automatically wrong.

The only hidden agenda here is yours, Anxiety.

Self-driving redundancy is really an interesting topic

Stop bullying and demeaning people. Stop now. You imply that the topic Trent posted is not interesting or worthy of discussion. You do this psychological bully routine over and over and over. Stop now.
 
I didn't know that LIDAR fails after a year. $80,000 replacement per year is a lot!
I didn't know that either. But digging into it, it's actually true of the spinning Velodyne lidars that we see on most self-driving prototypes today:

"Even at $8,000, Velodyne’s rotating LIDAR is too expensive to include on most consumer vehicles. Worse, its design has raised concerns about longevity. Most consumers keep a car for 10 years or more. “Many OEMS have issues with mechanical scanning LIDAR,” says Kona. “They say upwards of 60 percent need to be sent back to the manufacturer for recalibration.”"
The billion dollar widget steering the driverless car industry

I didn't realize it was that unreliable. I think it has to do with it being a mechanical part that additionally has to be high precision.

How can a camera interpret that a tractor-trailer is not a 2-dimensional object that's paper thin?
The same way someone with a single eye can, or the way you could estimate, given a picture of a roadway situation, approximately how far away the tractor-trailer is from the camera: by using monocular depth cues (done by AI in this case). For a single camera, motion parallax can also be used, as well as SLAM-based approaches; for example, a link posted previously by someone else:
comma ai on Twitter

If you have multiple cameras, you can use a disparity map to get depth.
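For the curious, the math behind turning a disparity map into depth is simple. Here's a minimal sketch of the standard pinhole stereo model in Python; the focal length and baseline numbers are made up for illustration (a real pipeline, e.g. OpenCV's StereoBM, would first compute the per-pixel disparities from the two images):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth in meters from pixel disparity, pinhole stereo model:
    depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Illustrative rig: two cameras 0.5 m apart, 1000 px focal length.
# A feature shifted 20 px between the two images lies at 25 m.
print(depth_from_disparity(20, 1000, 0.5))   # → 25.0
# Halve the disparity and the feature is twice as far away.
print(depth_from_disparity(10, 1000, 0.5))   # → 50.0
```

Note that depth resolution degrades with distance (disparity shrinks toward zero), which is one reason far-range depth from stereo is harder than near-range.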
 
Stop patronizing and insulting @Trent Eady . Stop. It's a control technique - an attempt to place yourself in a dominant position of authority over another person. Stop.

You are a cancer to TMC and you do your best to prevent open discussion from all viewpoints because you consistently attempt to discredit TMC members and their sources who post anything in support of Tesla's approach.


The feeling is mutual, worry you not.

I have no desire to insult @Trent Eady; I welcome his views and responded with mine.
 
...multiple cameras, you can use a disparity map to get depth.

Sure, multiple cameras can detect distance, but it is still 2-dimensional:

1) They know how long the tractor-trailer is.

2) They know how high the tractor-trailer is.

They do not know the third dimension: the depth, or thickness, of the tractor-trailer.

LIDAR can detect a threatening depth, with lots of thickness behind the frontal 2 dimensions.

A workaround: I think TeslaVision will assign a 3-dimensional depth to a detected shape similar to a tractor-trailer.
 
@Tam @stopcrazypp does have a point here: even MobilEye's EyeQ3 now does 3D shape recognition, depth included, on a single camera, I believe. Nvidia's codebase does not yet do this, though, if their public demos are an accurate representation. Tesla's FSD codebase status is of course unknown...

But IMO getting depth from vision does not really look like the problem overall. It may take more processing power than lidar, but it seems doable. I would be more worried about getting reliable depth in various weather conditions. Separating out the corner cases can take a lot of effort for vision, though.
 
I've lived in a four-season area for three years with a Tesla that can't see anything backwards on a bad weather day. One of these years has been with an AP2 car - it is just as bad there. I can see it with my own eyes, the camera basically goes blind all the time.

What happens to the backup camera? Does it get wet, covered with snow, covered with mud...?
 