Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Solid state Lidar progressing

Quite a good round-up article; it quotes pricing as low as $250.

Solid-State LiDAR Is Coming to an Autonomous Vehicle Near You

Thanks for sharing, I wasn't familiar with some of those. Saw this while looking into some of the info:
CES 2018: Summary of 15 solid-state LiDAR makers-VehicleTrend

I get the sense from the article that this stuff is imminent, but none of these companies seems to be currently making a part for use in passenger cars, and none of them has made a firm announcement about when they will have one. I was excited two years ago to hear claims from Quanergy that they'd have one available soon, but they keep missing their timetable, and now they seem to have removed the product datasheets from their website. Their last announcement, a little over a year ago, was that they'd be in volume production in 2017, but that doesn't seem to be happening.

The Innoviz technology seems like a great approach, but the part they have right now seems to be a demo or development part. Their automotive grade part has impressive specs but seems to be a year or two away.

Leddartech seems to be a component supplier with partners who will make the actual lidar using their tech. They're currently saying that their partners could have a part by 2019. There don't seem to be any announcements from those partners, though.

Everyone seems to be targeting $250, give or take, so that must be what automotive OEMs are saying they need. And it looks like solid state is where things are heading, which makes sense from what I see. But it doesn't seem to be clear at all when the sensors are going to be available at that price. 2019 seems to be what everyone is saying right now, but then I remember that in 2015 people were saying they'd be here by 2017. Since then I haven't heard of any automotive-grade, $250, solid-state sensor being seen anywhere other than in demos at CES.

Anybody else got hard data (non press release / non fluffy article) on how lidar is doing?
 
Adoption of LIDAR may be contingent on packaging. Currently the ugliness and aerodynamic problems of a LIDAR installation remain unsolved.

Unless you are talking about only a forward-facing LIDAR. Then, why not use radar?
 
Technically true. But the question is this: will we be able to develop cheap LIDAR first, or a supercomputer that can drive with vision (and radar) only?

Aye - there's the rub. How smart can we make the computer - and how simple can we make the problem? Or as an acquaintance of mine put it - will lidar get cheap before vision gets good? Although there's this other angle that I've heard more lately which says you have to have good vision even if you have cheap lidar (because there are so many driving problems that neither radar nor lidar can help with) and if you have good vision then do you even need cheap lidar?

I imagine we'll know pretty soon. It's hard to imagine cheap lidar taking more than another five years so I suppose it will be settled in that time range.
 
...why not use radar?

Of course radar is still used, not removed, in a system with LIDAR.

The reason for not relying on radar and cameras only is the fatal accident in Florida.

Tesla believes that kind of collision would now be prevented because it has since tweaked the radar processing.

So far so good. OK, it was a big white tractor-trailer that radar and cameras let the collision happen with, but how about the big red fire truck?

Radar systems need much more improvement to prevent those two kinds of scenarios.

Radar researchers keep saying they've got it this time (including Tesla), but when will they admit that, in the meantime, they need something more reliable?

The LIDAR industry says those kinds of collisions could have been prevented if only Tesla had equipped its system with LIDAR.
 
if you have good vision then do you even need cheap lidar?

I don't know. That's a good question.

I speculate that to drive with vision only (and radar), your vision needs to be at 100%, while if you have vision and LIDAR (and radar), your vision only needs to be at 99% (or 99.999%). The implication is that getting something from 99% to 100% is very difficult, and that it's easier to just add another sensor. If this is the case, we are still left with the question of which approach results in level 4 or 5 self-driving earlier (and at a reasonable price).
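To make that intuition concrete, here's a back-of-the-envelope sketch. The numbers are purely illustrative, and it assumes the two sensors fail independently, which real sensors sharing fog, glare, and rain do not:

```python
# Back-of-the-envelope: chance that every sensor misses an object
# at the same time, assuming independent failures. Detection rates
# here are illustrative, not measured.

def combined_miss_rate(*detection_rates):
    """Probability that all sensors miss simultaneously."""
    miss = 1.0
    for p in detection_rates:
        miss *= (1.0 - p)
    return miss

# Vision alone at 99%: misses roughly 1 object in 100.
vision_only = combined_miss_rate(0.99)

# Vision at 99% plus LIDAR at 99%: roughly 1 miss in 10,000.
vision_plus_lidar = combined_miss_rate(0.99, 0.99)

print(vision_only, vision_plus_lidar)
```

Under that (generous) independence assumption, two mediocre sensors look about as good as one near-perfect one, which is the case for adding LIDAR. The counter-argument is that camera and lidar failures are correlated and the fusion software itself adds failure modes.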
 
Really. You drive one with two cameras and a supercomputer. Two eyes and a brain.

The biggest problem with this is that robot drivers have to be orders of magnitude safer than human drivers, or the whole thing is dead on arrival.

The second problem is that we're quite far away from being able to duplicate all the amazing things the human brain can do. The way to get around this is to give what computing power we do have data from a lot of sensors. That way it can have an easier time figuring out what's a shadow and what isn't.

Personally I think it's going to take a combination of:

Cameras with better all-weather capability
The front radar of AP 2.5
Corner radars (that also cover the rear)
Car-to-car and road-to-car communication (situational updates and for allowing cars to form mini-trains)
Solid-state LIDAR
A primary AP computer
A backup AP computer
Near-real-time remote assistance

Where it's a bit like throwing the entire kitchen sink at it, and then removing things over time. Tesla's current way is more minimal, adding things along the way while not being clear about limitations.
 
Two eyes. Stereo vision. On a movable platform. If I'm trying to park or avoid hitting a curb being able to move my head is pretty important.
And world-leading autofocus, 3D capability, better dynamic range than most CMOS sensors, and a neural net that is in most cases 99.999% sensitive and specific when recognizing where the road goes, in all weather, day and night.
 
Really. You drive one with two cameras and a supercomputer. Two eyes and a brain.
Wake me up once we've all got the computational capabilities of at least a Deep Blue in all our cars, because what we have now will never match the capabilities of a human brain, nor will the cameras achieve the flexibility of our eyes or necks... the comparison is utter nonsense with our current tech.
 
I don't know. That's a good question.

I speculate that to drive with vision only (and radar), your vision needs to be at 100%, while if you have vision and LIDAR (and radar), your vision only needs to be at 99% (or 99.999%). The implication is that getting something from 99% to 100% is very difficult, and that it's easier to just add another sensor. If this is the case, we are still left with the question of which approach results in level 4 or 5 self-driving earlier (and at a reasonable price).

Yes. The idea that more sensors makes the problem easier seems to be pretty widely accepted. It's kind of common sense, I suppose, that having redundant sense modalities and redundant coverage is going to make your accuracy better, and since accuracy is a core element that's currently missing it follows that more sensors would lead to better accuracy sooner and thus more sensors would be the winner in terms of time to market.

There's an assumption baked into that way of thinking about the topic that might be worth examining though. When you add more sensors you complicate the problem of integrating all of them to make decisions. So there's this assumption that managing the added complexity of the additional sensors is less difficult than getting good decisions from fewer sensors. Now that assumption certainly might be true. But this kind of software is pretty hard to get right and once it's right you have to come up with some way to prove that it's right before you turn it loose. The more parts there are the more ways there are for things to go wrong, for both hardware and software, and the effort needed to get a provably good system multiplies.
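One way to see how the verification effort multiplies: with N sensors that can each be healthy or degraded, the fusion software faces 2^N distinct input conditions it should handle gracefully. A toy enumeration (the sensor names are made up for the example, not anyone's actual suite):

```python
# Toy illustration: enumerate every healthy/degraded combination a
# sensor-fusion stack must behave sensibly under. Sensor names are
# hypothetical examples.
from itertools import product

def sensor_states(sensors):
    """All combinations of each sensor being 'ok' or 'degraded'."""
    return list(product(("ok", "degraded"), repeat=len(sensors)))

two_sensor = sensor_states(["camera", "radar"])
five_sensor = sensor_states(["camera", "radar", "lidar",
                             "corner_radar", "v2v"])

print(len(two_sensor), len(five_sensor))  # 4 32
```

Doubling the sensor count doesn't double the cases to validate, it squares them, which is the crux of the "more parts, more ways to go wrong" objection.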
 
Technically true. But the question is this: will we be able to develop cheap LIDAR first, or a supercomputer that can drive with vision (and radar) only?

That's the key question - what's the most cost effective way to safely solve the problem.

Tesla has believed that they can do enough with software to drive safely without LIDAR. If that's true, it's likely to be a cheaper answer (depending on how expensive a computer you need.)

A lot of the industry experts believe the problem can't be solved without the additional data from LIDAR with present computers and techniques.

I don't know enough to know who is right. It'll be interesting to see how this race turns out.

(And I'll probably buy the first practical level 3 long range EV I can find to replace my AP1 X. Probably.)
 
Really. You drive one with two cameras and a supercomputer. Two eyes and a brain.

The brain also does exascale computing on 20 watts of power. I don’t think this analogy works.

Everything humans do just uses our human senses — primarily vision, sound, and touch — and our brains. We have robots with onboard computers and cameras, microphones, and tactile sensors. So why don’t we have robots that can do everything humans can do? Why can’t we have a robot President?

Because we don’t know how the brain implements intelligence. If we did, we could make a robot do anything a human can do. Since we don’t, it’s unclear what we will be able to make robots do, and what we won’t be able to.

A more compelling argument for a camera-only/camera-first approach is that lidar is only good at sensing depth, and some very important things on the road have no depth: lane lines, signs, and traffic lights. Once you get camera-based neural networks to be really good at recognizing the things that lidar can’t see, it stands to reason they’ll be really good at recognizing the things that lidar can see: cars, trucks, buses, bikes, pedestrians, etc. Conversely, if your cameras aren’t good enough at seeing the things lidar can’t see, you can’t drive, and lidar ain’t gonna help you.

This is the argument Elon and Amnon have been making for a while. Elon at TED:

“The whole road system is meant to be navigated with passive optical, or cameras, and so once you solve cameras or vision, then autonomy is solved. If you don't solve vision, it's not solved.”
Mobileye’s website, talking about camera-based HD maps:

“While other sensors such as radar and LiDAR may provide redundancy for object detection – the camera is the only real-time sensor for driving path geometry and other static scene semantics (such as traffic signs, on-road markings, etc.). Therefore, for path sensing and foresight purposes, only a highly accurate map can serve as the source of redundancy.”
At 21:30 in this video, Amnon says the current sensor hardware for Mobileye's test vehicles is cameras only. The plan is to add radar and lidar later for redundancy, but Amnon is obsessed with redundancy. He’s said that he’s worried society will revolt against autonomous vehicles if they are anything less than 1,000x safer than human drivers. Elon is going for “only” 10x safer. So the difference in sensor suite can perhaps be explained by Mobileye’s extremely cautious approach.

It seems like a lot of folks working on autonomous vehicles are scared of public opinion souring if there are any autonomy-related deaths. They fear that could delay the deployment of autonomous vehicles by years, resulting in more deaths in the long run. So, they want to be super cautious and try to avoid any deaths whatsoever. But being super cautious and e.g. waiting for lidar prices to come down means the deployment of autonomous vehicles could be delayed by years, resulting in more deaths in the meantime...

I think a good precedent is drunk driving. If you’re just above the legal alcohol blood limit, your chance of a crash is 3x higher than average. Being 3x more dangerous is enough to make something illegal and socially frowned upon. So I think being 3x safer should be enough to make something legal and socially accepted. If autonomous cars are 3x safer, that should be enough.

Even if the public is reluctant to accept autonomous cars at first, I think the public could be persuaded by arguments like this. I think it’s kind of condescending how people like Amnon don’t actually say it would be bad if autonomous cars were only 3x safer. They say the public is so irrational and hysterical that they will think it’s bad, even if it’s not. People like Amnon are smart enough to get why fewer deaths is good, but the public — well, they’re just too dumb.
 