
Solid state Lidar progressing

Livox just announced lidars for autonomous cars this week, including units you can buy right now for $599 without going through a sales rep. Looks like they just came out of stealth.

LiDAR Sensors - Livox

The specs look too good to be true, so I'm skeptical. Should I buy one and report back?

Thank you for this info! It looks too good to be true, but it's completely possible. You should definitely try it!
 
Some pictures:

[attached image]
 
At 21:30 in this video, Amnon says the current sensor hardware for Mobileye's test vehicles is cameras only. The plan is to add radar and lidar later for redundancy, but Amnon is obsessed with redundancy. He’s said that he’s worried society will revolt against autonomous vehicles if they are anything less than 1,000x safer than human drivers. Elon is going for “only” 10x safer. So the difference in sensor suite can perhaps be explained by Mobileye’s extremely cautious approach.

I think you are misunderstanding what Amnon is saying. He is saying the probability of a fatality in one hour of human driving is about 10^-6, i.e., one fatality per 10^6 (1 million) hours. A system 1,000x better than that would have one fatality per 10^9 hours (1 billion hours); a 10x system, one per 10^7 hours (10 million hours). If the average speed of driving is 30 mph, you would need 30 billion miles to validate a 1,000x system and 300 million miles to validate a 10x system.

He then said that validation must be done separately because sensing errors and planning errors are different.

So for perception, collecting 30 billion miles or 300 million miles of validation sensing data is infeasible.
He then said that if you had two independent sensing modalities, you would only need to show that each can go 30,000 hours without a severe sensing mistake. If you had only one sensing modality (camera only), you would need the full 1 billion hours if you were aiming for 1,000x safety, or 10 million hours if you were aiming for 10x safety.

It's not that adding lidars and radars is what makes the system 1,000x; it's that it reduces the number of validation hours you need for your camera system. With a camera-only system, you would need 1 billion hours of validation if you were aiming for a 1,000x system, or 10 million hours if you were aiming for a 10x system.

Intel’s Mobileye wants to dominate driverless cars—but there’s a problem

The company argues that validating the sensing system is made even easier by capitalizing on sensor redundancies. Mobileye's plan is to develop a system that can navigate safely using only cameras and then separately develop a system that can navigate safely using only lidar and radar. Mobileye argues that if the company can show that each system separately has a sensing error less often than once every 30,000 hours, then it can conclude that a system with both types of sensors should have an error no more often than 30,000*30,000=~1 billion hours.

So for Tesla to have 10x safety in their sensing system, they need to guarantee that it will have a sensing error at most once every 10 million hours (300 million miles).
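A quick back-of-the-envelope script to check the arithmetic (my own sketch using only the figures quoted above, not anything from Mobileye's actual validation methodology):

```python
# Sanity-check of the validation math quoted above. The 10^-6 fatality
# rate and 30 mph average speed are the figures from Amnon's talk; the
# script itself is just my illustration.

HUMAN_FATALITY_RATE = 1e-6   # probability of a fatality per hour of human driving
AVG_SPEED_MPH = 30           # assumed average driving speed

for factor in (10, 1_000):
    target_rate = HUMAN_FATALITY_RATE / factor      # target failures per hour
    hours_between_failures = 1 / target_rate
    miles_to_validate = hours_between_failures * AVG_SPEED_MPH
    print(f"{factor}x safer: one failure per {hours_between_failures:,.0f} hours"
          f" -> {miles_to_validate:,.0f} miles of validation")

# Redundancy argument: two independent subsystems, each shown to fail
# at most once per 30,000 hours, jointly fail about once per
# 30,000 * 30,000 = 9e8 hours (~1 billion hours).
per_subsystem_hours = 30_000
print(f"joint: one failure per {per_subsystem_hours**2:,} hours (~1 billion)")
```

Running it prints 10 million hours / 300 million miles for the 10x case and 1 billion hours / 30 billion miles for the 1,000x case, matching the numbers above.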

Hope that clears this up.
 
Lidar will never provide as much data input as multiple cameras. Musk is correct in defining the problem as teaching the cameras to see, and they are very close. They already analyze two frames at a time from each camera for pixel movement. Teslas already identify a "driving surface" separately from lanes marked by lines, and they also identify routes and all objects on or near the driving surface. FSD that depends on precise maps won't work because of construction and traffic anomalies. I have heard no discussion of the environmental effects on living creatures of thousands of lasers saturating an environment where several hundred autonomous cars are within lidar range. Lidar will always be hardware-limited; little such constraint exists for software. Computers are cheap.
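For anyone curious what "analyzing two frames for pixel movement" looks like in practice, here is a minimal dense optical-flow sketch using OpenCV. This is only an illustration of the general technique; Tesla's actual network-based pipeline is not public, and the input file name is hypothetical.

```python
import cv2

# Estimate per-pixel motion between two consecutive frames using
# Farneback dense optical flow. Illustrative only; not Tesla's pipeline.
cap = cv2.VideoCapture("dashcam.mp4")  # hypothetical input clip
ok1, frame1 = cap.read()
ok2, frame2 = cap.read()
assert ok1 and ok2, "need at least two frames"

gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

# flow[y, x] = (dx, dy): apparent motion of each pixel between the frames
flow = cv2.calcOpticalFlowFarneback(
    gray1, gray2, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

magnitude, _angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean pixel motion (pixels/frame):", magnitude.mean())
```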
 
At 21:30 in this video, Amnon says the current sensor hardware for Mobileye's test vehicles is cameras only. The plan is to add radar and lidar later for redundancy, but Amnon is obsessed with redundancy. He’s said that he’s worried society will revolt against autonomous vehicles if they are anything less than 1,000x safer than human drivers. Elon is going for “only” 10x safer. So the difference in sensor suite can perhaps be explained by Mobileye’s extremely cautious approach.

I can't say enough about how absurd this redundancy talk is. All sensors have to work in sync; the whole system either works or it doesn't. If the new set of sensors gives you conflicting information, how are you going to decide whether to trust the primary one or the secondary one? If it only gives you confirming info, then why do you need it? It may sound good on paper until you actually try it. Or unless you just don't believe your current sensor set will work.

BTW, if autonomous cars could truly be 10x safer than humans, there would be no human drivers other than the few who could afford exorbitantly high insurance premiums. Anyone who drives his own car and causes injury or death could be sued for negligence, no less than in DUI cases today. It's kind of like asking an insurance company to insure you driving drunk, which increases the danger 10x.
 
If the new set of sensors gives you conflicting information, how are you going to decide whether to trust the primary one or the secondary one? If it only gives you confirming info, then why do you need it?

The first one is indeed a fair question, though I must add it is one that amateurs like us seem to worry about more often than those actually working on redundant autonomous driving systems. :) As for confirming information from two sources, that is obviously useful as it increases confidence. At the very least, with redundant systems the car knows better when its confidence is lower and can take more careful action, for example.

@Bladerskb what is your view on MobilEye's redundancy decision-making if Lidar/Radar and Camera are in conflict? I recall a discussion some weeks ago where the traffic scenario was a deciding factor (e.g. trust your Lidar/Radar more in a cross-traffic situation with fast-approaching vehicles coming from afar, or something like that).

I guess one more way to look at this is that MobilEye and Waymo etc. are actually working with three redundant sources: camera (possibly more than one), Lidar, and radar. If two agree and one disagrees, that might be an easier "vote" than 50/50...
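A crude sketch of what such 2-of-3 voting could look like (purely illustrative; the function and labels are my own invention, and no company has published its actual fusion logic):

```python
from collections import Counter

# Toy 2-of-3 vote across sensing modalities. Each modality reports a
# discrete judgment about the same region of space. Purely illustrative;
# real fusion stacks operate on probabilistic estimates, not labels.
def fuse(camera: str, lidar: str, radar: str) -> str:
    votes = Counter([camera, lidar, radar])
    judgment, count = votes.most_common(1)[0]
    if count >= 2:              # at least two modalities agree: take the majority
        return judgment
    return "low_confidence"     # three-way disagreement: fall back to caution

print(fuse("obstacle", "obstacle", "clear"))  # -> obstacle
print(fuse("clear", "obstacle", "unknown"))   # -> low_confidence
```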
 
The objects that Tesla cameras see and identify have confidence percentages tagged to them. The objects within the driving surface seem to have 100%. Pedestrians walking on sidewalks are tagged well below that. Watch Pranav Kodall's stuff on YouTube:

It's really quite depressing that they are still struggling with the basics. The really hard stuff that Google/Waymo struggled with for so long is well beyond anything seen here, e.g. complex junctions or areas with huge numbers of moving objects.
 
The first one is indeed a fair question, though I must add it is one that amateurs like us seem to worry about more often than those actually working on redundant autonomous driving systems. :) As for confirming information from two sources, that is obviously useful as it increases confidence. At the very least, with redundant systems the car knows better when its confidence is lower and can take more careful action, for example.

@Bladerskb what is your view on MobilEye's redundancy decision-making if Lidar/Radar and Camera are in conflict? I recall a discussion some weeks ago where the traffic scenario was a deciding factor (e.g. trust your Lidar/Radar more in a cross-traffic situation with fast-approaching vehicles coming from afar, or something like that).

I guess one more way to look at this is that MobilEye and Waymo etc. are actually working with three redundant sources: camera (possibly more than one), Lidar, and radar. If two agree and one disagrees, that might be an easier "vote" than 50/50...

"Reduncancy" is a misleading term and nothing but a spin of how they do not know how to get there with their current setup. If an extra sensor set is needed just add it when you design the system. It's not like if your system could work 99% but adding a new set of sensors will improve it to work 99.9% WITHOUT redo the entire system from the beginning. Let's say we give you an extra set of radar eyes you'd be screwed without relearn every experience and instinct you obtained through the years, that is even if your brain is able to handle that extra info.

Using your cross-traffic scenario, you can say the camera is not good enough (for your system) and you'll need LIDAR/RADAR to do the job, but it's still one or the other at a given moment, not both simultaneously. It has nothing to do with redundancy somehow magically improving safety just by adding another (conflicting) input. All this talk is just Mobileye's spin because it does not believe its bread-and-butter camera system can do the job in the near future (note: this is the key). I'm not saying cameras (and radar) are not enough, since Tesla is using those without needing the crutch, instead attacking the problem through the processor and the NN algorithm.
 
"Reduncancy" is a misleading term and nothing but a spin of how they do not know how to get there with their current setup. If an extra sensor set is needed just add it when you design the system. It's not like if your system could work 99% but adding a new set of sensors will improve it to work 99.9% WITHOUT redo the entire system from the beginning. Let's say we give you an extra set of radar eyes you'd be screwed without relearn every experience and instinct you obtained through the years, that is even if your brain is able to handle that extra info.

By redundancy I don't really mean anything more than sensors covering the same area that some other sensor is already covering. What it will eventually do or mean in each vehicle in practice is an open question, for sure. As for the rest of your comment, I think various companies have outlined their general approaches to redundancy (and there are very different approaches indeed), but too little is known to really debate them, in my view.

Using your cross-traffic scenario, you can say the camera is not good enough (for your system) and you'll need LIDAR/RADAR to do the job, but it's still one or the other at a given moment, not both simultaneously. It has nothing to do with redundancy somehow magically improving safety just by adding another (conflicting) input. All this talk is just Mobileye's spin because it does not believe its bread-and-butter camera system can do the job in the near future (note: this is the key). I'm not saying cameras (and radar) are not enough, since Tesla is using those without needing the crutch, instead attacking the problem through the processor and the NN algorithm.

Yes, that is another big question, right: the near future. Tesla's approach might be to use the driver as the crutch, i.e. stay at Level 2 as long as possible. Others may be aiming at Level 3 and above sooner, with a multitude of sensors to cover them.

As for using radar and Lidar in the cross-traffic scenario, I guess my main point is that those technologies might have inherent benefits in seeing better in those directions, unless the car also has side-facing narrow cameras that can see far... One could of course get similar benefits from a larger set of cameras, though that would not help see "through" cars the way side-facing radar could.
 
I don't disagree with anything you said. There is always more than one way to get there, but the devil is in the details. There's no problem with using different types of sensors, as long as they take you there, hopefully with the least effort. My comment was only directed at Mobileye's use of "redundancy" to justify their approach, which is utter nonsense. Just tell the truth, that your camera system will not work without adding LIDAR/RADAR, instead of treating everyone like a fool.

Let me elaborate a little so we can get a clearer (non-technical) background on this. I don't know what Mobileye told Intel to convince it to spend that kind of money on the acquisition, but I'm sure it wasn't "our camera-based system is not good enough to get there, and we need a new camera/LIDAR/RADAR-based system that could take many years to develop." It WILL take years to develop new hardware/software algorithms for those additional sensor types. It's not just adding redundancy or a band-aid to the current (working) system, as he wants people to believe. I just don't believe anything this guy says after the Tesla fiasco. We now know Tesla had worked for years to eventually get rid of Mobileye, but at the time of the separation Mobileye tried (lied), somewhat successfully, to convince people that it was they who dumped Tesla because Tesla wanted to move too fast. What reasonable company would dump its most important customer instead of trying to work with it? It was nothing more than "you didn't fire me, I quit," but a lot of people, probably including Intel, took it hook, line, and sinker. In comparison, Nvidia was much classier when Tesla went its own way on AI processors. Huang only said he would welcome Tesla back if it decided to return to Nvidia chips.
 
The Tesla system has redundancies. Each camera slightly overlaps the coverage area of at least one other camera, and the front ones overlap two. Analyzing pixel movement between two adjacent frames can yield relative motion data, and the radar gives additional accurate velocity and distance for the more important front sector, where the car's actions can avoid accidents. There's little a car can do to avoid accidents from the rear or sides. There is a high rate of pixel movement available from the side cameras for avoiding overtaking vehicles during merging and lane changes. There are two inputs to the electric steering, braking systems have been redundant for decades, and the car senses when any camera is obscured and gives warnings. There is probably some computational redundancy as well. This car appears to be designed from the ground up for at least Level 4 autonomy. The EAP/FSD computer is all in one package and accessible for any upgrade needed for HW2.5 cars. The requirements for autonomous vehicles have been known for many years. The level of engineering, electronics, and software under the simple skin of the Model 3 argues that the EAP/FSD development team knows what it's doing.
 
The objects that Tesla cameras see and identify have confidence percentages tagged to them. The objects within the driving surface seem to have 100%.

Where are the percentages shown? I don't see any percentages in Pranav Kodall's videos or in verygreen's. Is it only for certain classes of object?

It's really quite depressing that they are still struggling with the basics. The really hard stuff that Google/Waymo struggled with for so long is well beyond anything seen here, e.g. complex junctions or areas with huge numbers of moving objects.

I suspect that every company, including Waymo, is still struggling with the basics. Until very recently, Waymo was a highly secretive project. By and large, the only public information released about it has been commercials produced by Waymo and other information Waymo has chosen to disclose.

In the media and the general public, I think there has been a huge gap between what people assume about Waymo's capabilities, and what we actually know about Waymo's capabilities. Timothy B. Lee, a journalist who covers autonomy for Ars Technica, tweeted this:

“Until recently my mental model of Waymo was that their technology was basically ready to go in late 2017 and they were doing a last few months of testing out of an abundance of caution, and to give time to build out non-technical stuff like customer service and maintenance.

This week has forced me to totally re-evaluate that. It now looks to me like Waymo is nowhere close to ready for fully driverless operation in its initial <100 square mile service area, to say nothing of the rest of the Phoenix metro or other cities.

...This means I have no idea how long it will take for Waymo (or anyone else) to reach full autonomy. It could take six months or it could take six years. Maybe Waymo will be forced to throw out big chunks of what they've built so far and start over.”
I think this shows how much of a gap there is between assumption and knowledge.

There are a few things the public has been shown that are impossible for us to assess:
  • demo videos
  • miles between disengagements
  • future timelines / statements by executives
#3 is almost as fallible as any random person's guess. Just because you work closely with a technology doesn't mean you can predict the future. Experts are often as bad at making predictions as laypeople.

Demo videos are easy to make, and have been made since at least 2012, but they don't convey much about a system's real capabilities. For that you need a large sample of driving, not a few cherry-picked minutes.

So, disengagements then, right? Apparently not. Amir Efrati at The Information reported that companies are not obliged to report the vast majority of disengagements that occur. Disengagements from a safety driver or engineer taking a vehicle out for daily testing don't have to be counted. This makes disengagement numbers pretty much useless for telling us how many disengagements there actually are.

It's possible that full autonomy is just impractical to solve with current technology. It's also possible that it will be solved quickly by applying existing technologies like imitation learning and reinforcement learning at the scale of hundreds of thousands or millions of vehicles. I don't know which one it is, or if it's neither.

We are in a double bind:

1) Companies are highly secretive about what they're actually doing, and what the current capabilities of their prototype systems are. Anything released to the public is essentially just an ad.

2) Even engineers and executives at these companies can't predict the future. Experts make wrong predictions all the time. Even with inside information and subject matter expertise, they may be unable to assess how far along progress is relative to the end goal of full autonomy.

That doesn't mean autonomy is overrated or all hype. It just means it's highly uncertain. It could be underhyped for all we know.
 
Regardless of Waymo's position, Tesla promised a cross-country demo two years ago and has been selling Full Self-Driving since 2016. Many of the people who bought it will have sold or scrapped the car before Tesla delivers it, if they ever do.

At least Waymo is actually operating an FSD taxi service in the real world.
 
@banned-66611

Indeed. Let's be honest: back in October 2016, anyone claiming we would have no full self-driving features, and no coast-to-coast demos, two and a half years after the Tesla FSD launch would have been labelled a troll in any pro-Tesla community. If they had suggested Tesla would remove FSD from the Design Studio after two years, the response would have been even worse.

Yet here we are, two and a half years later: zero full self-driving features, zero coast-to-coast demos, and no FSD in the Design Studio.

Reality is @Bladerskb has been much closer to the mark than anyone else I can see.
 
Reality is, it's easy to say that things which have never been done before cannot be done, and you will be right more often than wrong. The few who say things can be done are the ones who made the world what it is today. We should be very careful about declaring who is "right" and who is not that way. The right way to do analysis is with physics, not anecdotal evidence, which is what most people, especially naysayers, rely on. It's like what Elon said: the right way to do engineering is to design from first principles, not by analogy.
 