Bosch enters lidar business

The Audi A8 apparently has the Valeo lidar, which has 0.25 degrees of horizontal resolution. That works out to about 13 cm at 30 meters. I guess it must not operate at a high rate, since it was only used for the Traffic Jam Pilot feature. But technology marches on.
And it was released 2 years ago. Hopefully all the billions of dollars invested since then have resulted in some progress!
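For anyone who wants to sanity-check that 13 cm figure, converting angular resolution to lateral spacing at range is just trigonometry. A minimal sketch (the helper name below is my own, purely for illustration):

```python
import math

def lateral_spacing_m(angular_res_deg: float, range_m: float) -> float:
    """Lateral distance between adjacent returns for a given angular resolution and range."""
    return range_m * math.tan(math.radians(angular_res_deg))

# 0.25 deg of horizontal resolution at 30 m:
print(lateral_spacing_m(0.25, 30.0))  # ~0.131 m, i.e. roughly 13 cm
```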
 
Uh oh, here come the floodgates.

RoboSense's new lidar retails for $1,800 and goes on sale in Q1 2020. At mass-production scale I can see this being around $500.

 
I don't know much about the technology but it does seem like lidar is the technology to use. Every robot from Boston Dynamics uses lidar to navigate the environment. I know Elon dismisses it, but not everything he says is the ultimate truth. Solid state lidar systems are looking promising.

Thing is, Elon thought he could do FSD in 2017. He was wrong, and by the time he gets it working, lidar will be cheap and compact enough.

Basically technology has overtaken Tesla on this one.
 
Too early to tell what approach is superior

Interesting article here:
Researchers back Tesla's non-LiDAR approach to self-driving cars

That is a cherry-picked article, and the paper isn't saying what the article claims.

Pseudo-lidar neural networks are an active area of research in the self-driving industry. Many companies are making use of them, including Mobileye, Toyota, etc.

But what you are saying is that it's not clear which of the approaches below is superior, right? Because you are actually informed and don't actually think this is camera vs. lidar?

Approach #1:

High-res 360° cameras with no blind spots and self-cleaning.
Next-gen (5th-gen) 360° continuous heated radars with higher resolution and no blind spots.
High-resolution 360° lidars with self-cleaning and no blind spots.

Approach #2:

Low-resolution 360° cameras with multiple blind spots and no self-cleaning.

Last-gen (3rd-gen) low-res single forward radar with a small FOV, no heater, and almost complete blind spots due to lack of coverage.

In summary, you typically have:

~12 cameras, ~6 lidars, and ~10 radars

vs

8 cameras, 1 radar

Now repeat your statement, "Too early to tell what approach is superior"
 
Approach #1:

High-res 360° cameras with no blind spots and self-cleaning.
Next-gen (5th-gen) 360° continuous heated radars with higher resolution and no blind spots.
High-resolution 360° lidars with self-cleaning and no blind spots.

Approach #2:

Low-resolution 360° cameras with multiple blind spots and no self-cleaning.

Last-gen (3rd-gen) low-res single forward radar with a small FOV, no heater, and almost complete blind spots due to lack of coverage.

I think it is obvious, just using common sense, that approach #1 is superior. Approach #1 will give you a system with accurate 360-degree perception around the car, with redundancy if a sensor fails and self-cleaning to keep the sensors working even in less-than-ideal conditions. With approach #2, if the camera vision is good enough, the car might be able to navigate the world around it, but only in ideal conditions, and if a sensor fails it will have to shut down. So approach #2 can give you rudimentary autonomous driving in ideal conditions, while approach #1 can give you safe and reliable autonomous driving in virtually all conditions. So yeah, approach #1 definitely wins.

It's why I support including lidar in the sensor suite and why I am impressed with Waymo or the Lucid Air. And it's why I no longer expect my current Tesla to give me anything more than rudimentary autonomous driving in ideal conditions. But I do expect either my next Tesla (if Tesla ever changes the sensors) or another EV to give me real autonomous driving.

Perhaps adding 12 cameras, 6 lidars, and 10 radars to every production car would be too expensive for Tesla. But I think just upgrading the cameras to high-res and adding a front lidar and a rear blind-spot radar would probably go a long way toward making the cars a lot safer.
 
 
By the way, the paper "Safety First for Automated Driving" (2019) on page 47 says the following about sensor redundancy:

"As of today, a single sensor is not capable of simultaneously providing reliable and precise detection, classifications, measurements, and robustness to adverse conditions. Therefore, a multimodal approach is required to cover the detectability of relevant entities. In more detail, a combination of the following technologies shall provide suitable coverage for the given specific product:

CAMERA: Sensor with the highest extractable information content, as the sensor captures visible cues similar to human perception. Main sensor for object/feature type classification. Limited precision in range determination, high sensitivity to weather conditions.
LIDAR: High-precision measurement of structured and unstructured elements. Medium sensitivity to environment conditions.
RADAR: High-precision detection and measurement of moving objects with appropriate reflectivity in radar operation range, high robustness against weather conditions.
ULTRASONIC: Well-established near-field sensor capable of detecting closest distances to reflecting entities.
MICROPHONES: Public traffic uses acoustic signals to prevent crashes and regulate traffic, e.g. on railway intersections. Thus, devices capturing acoustic signals are required for automation levels where the systems need to react to these."

So a collaboration between engineers from Aptiv, Audi, Baidu, BMW, FCA, Continental, HERE, Infineon, Volkswagen, Daimler, and Intel wrote that a camera-only approach does not work and that an autonomous vehicle needs multiple sensors, including cameras, lidar, radar, ultrasonics, and microphones.

I believe this multimodal approach is basically an industry standard for autonomous driving.
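Just to make the multimodal idea concrete, here is a toy late-fusion sketch. The weights, function name, and example numbers are entirely made up for illustration and are not from the paper; the point is only that per-modality confidences get combined so one degraded sensor doesn't sink the whole perception stack.

```python
from typing import Dict

# Made-up illustrative weights reflecting the paper's qualitative strengths
# (cameras classify well, lidar ranges precisely, radar shrugs off weather).
WEIGHTS: Dict[str, float] = {
    "camera": 0.35,
    "lidar": 0.30,
    "radar": 0.25,
    "ultrasonic": 0.10,
}

def fused_confidence(detections: Dict[str, float]) -> float:
    """Weighted average of per-modality confidences (0..1) for one object track,
    normalized over whichever modalities actually reported a detection."""
    total_weight = sum(WEIGHTS[m] for m in detections)
    return sum(WEIGHTS[m] * conf for m, conf in detections.items()) / total_weight

# Example: camera half-blinded by glare, but lidar and radar still see the object.
print(fused_confidence({"camera": 0.2, "lidar": 0.9, "radar": 0.85}))  # ~0.61
```

A real stack fuses at the track or feature level with learned models, but the principle is the same: each modality covers for the others' weak spots.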
 
It's really unprecedented for engineers to be able to make something smaller, cheaper, and higher performing over time. No one could have predicted it. :p

Yeah, the prices are in free fall.

Livox

Horizon (260 meters at 80% reflectivity, 130 meters at 20% reflectivity)
More than 64 lines in 100 ms
81.7° horizontally and 25.1° vertically.
Five units for a full 360° FOV.
Horizon lidar sensor - Livox

Tele-15 (500 meters at 80% reflectivity, 280 meters at 20% reflectivity)
More than 128 lines (99.8% of the FOV covered in just 100 ms)
FOV of 15 degrees
Tele-15 lidar sensor - Livox

AutoX is gonna use five Horizon units and one Tele-15 on their new fleets.
Livox
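A quick back-of-the-envelope check on why it takes five Horizon units for a full surround view, using the 81.7° horizontal FOV listed above (the helper below is just my own sketch):

```python
import math

def units_for_360(horizontal_fov_deg: float) -> int:
    """Minimum number of identical sensors needed to tile a full 360° ring."""
    return math.ceil(360.0 / horizontal_fov_deg)

n = units_for_360(81.7)       # 5 units
overlap = n * 81.7 - 360.0    # ~48.5° of total overlap to spread between adjacent sensors
print(n, overlap)
```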
 
I just can't see Tesla delivering what they promised with only basic cameras that don't self-clean. You summon the car from the other side of the country, and all it takes is some soot or a splash of mud and it's stuck a thousand miles away.
 
The truth is, probably no one in this forum really knows what the current Tesla sensor suite is capable of. We can speculate, but Tesla has already achieved a very capable driver-assistance system that partially perceives the world around it with a few low-res cameras. We judge based on the system's current capabilities, but who says this is even remotely close to what it will be capable of in a few years?
We don't know, and we still know nothing about the Dojo project.

Elon has always said "the best part is no part". I understand that redundancy sounds great, but lidar would add another fusion layer that could possibly go wrong, and it's difficult to see what mm-accuracy depth estimation will really bring to the table when it comes to driving. When it snows, how much error/noise will you have to filter out of such a system? Do you ignore it entirely?

Look, I live in Montreal (capital of road salt) and my side and rear cameras get buttered every day, and every day I get messages that this or that camera is blocked. Nonetheless, Autopilot can handle driving on the road with the front cameras/radar. I also know that people drive with snow-covered windows or even just foggy windows. How? Probably because they can recognize moving objects and lights through the foggy windows, at least well enough to determine if it's safe to change lanes or not. I don't see why the perception system couldn't do the same if some light is still getting to the cameras. On top of that, it has a 360° close-proximity view with ultrasonics for low-speed maneuvers. Don't forget that our cars were summoning in and out of garages without cameras; the lizard brain is just using the ultrasonics.

LIDAR requires line of sight; I really don't understand how you can rely on this technology for redundancy in inclement weather. If anything, it will be a source of greater error in these situations. HW2.5 cars don't drive much differently than HW3 cars; is it possible we haven't even scratched the surface of what's possible with a vision/ultrasonic/radar sensor suite and CNNs? I would bet my money on the guy who is launching reusable rockets two days in a row before the rest of the skeptics, who still have not achieved real-time modeling of the world around them on affordable passenger vehicles. Let's wait and see what the system is truly capable of instead of assuming we know what we're talking about after reading a few articles and watching YouTube videos on lidar.
 
LIDAR requires line of sight; I really don't understand how you can rely on this technology for redundancy in inclement weather. If anything, it will be a source of greater error in these situations. HW2.5 cars don't drive much differently than HW3 cars; is it possible we haven't even scratched the surface of what's possible with a vision/ultrasonic/radar sensor suite and CNNs? I would bet my money on the guy who is launching reusable rockets two days in a row before the rest of the skeptics, who still have not achieved real-time modeling of the world around them on affordable passenger vehicles. Let's wait and see what the system is truly capable of instead of assuming we know what we're talking about after reading a few articles and watching YouTube videos on lidar.

It's interesting to consider what the absolute limits of these systems may be, the real engineering limits. Like, what is the minimal possible sensor suite you can have and still complete trips while maintaining the same safety factor? I think you could do most city driving with just a single fisheye cam on the front of the car. Lane changes are out; maybe you can re-route to stay in the right lane. Your inference confidence is much lower, so to achieve a similar safety factor you would need to drive way, way slower than with your full cameras, maybe just creep along at 5 mph. But you might still get there.

What if you went even further: reduce the camera to one frame per second, or blind it for a few seconds at a time. Say you capture a single camera frame every 2 seconds and drive based on that, a veritable slide show. I think you could probably still drive if you have a good internal model of vehicle dynamics and can infer the future lane path and intersection geometry between frames; the error rate would continuously increase the more time passes since the previous frame, and you might miss, say, sudden cut-ins, but you could still complete trips. I think a really robust FSD system could do all of these things and run the full gamut: drive at full speed when all the sensors are healthy and perception is confident, then modulate the driving speed and even the planned route based on a computed confidence/error factor as things start to degrade, and handle everything in between, from a half-blind camera in the snow to a fully healthy sensor suite in clear weather, maybe without the safety factor changing at all. In reality you would never do this; you would pull over and stop long before the sensors became so unusable. But it's interesting to consider what might be possible if you forced a system to operate in such an unhealthy sensor regime and what the real engineering limits are. Basically an FSD torture test.
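As a toy sketch of that "modulate speed by confidence" idea (the function name, numbers, and linear ramp are entirely hypothetical, not anything Tesla has published):

```python
def target_speed_mph(perception_confidence: float,
                     max_speed_mph: float = 45.0,
                     creep_speed_mph: float = 5.0,
                     min_confidence: float = 0.2) -> float:
    """Scale the planner's target speed with perception confidence (0..1).

    Below min_confidence the car should not drive at all (pull over instead).
    """
    if perception_confidence < min_confidence:
        return 0.0
    # Linear ramp from creep speed at min_confidence up to max speed at full confidence.
    frac = (perception_confidence - min_confidence) / (1.0 - min_confidence)
    return creep_speed_mph + frac * (max_speed_mph - creep_speed_mph)

print(target_speed_mph(1.0))   # 45.0: healthy sensors, full speed
print(target_speed_mph(0.25))  # 7.5: half-blind camera, creep along
```

A real planner would also factor in route, weather, and which specific sensors are degraded, but the basic shape is the same: full speed when confident, creep or stop when not.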
 
The truth is, probably no one in this forum really knows what the current Tesla sensor suite is capable of. We can speculate, but Tesla has already achieved a very capable driver-assistance system that partially perceives the world around it with a few low-res cameras. We judge based on the system's current capabilities, but who says this is even remotely close to what it will be capable of in a few years?
We don't know, and we still know nothing about the Dojo project.

Elon has always said "the best part is no part". I understand that redundancy sounds great, but lidar would add another fusion layer that could possibly go wrong, and it's difficult to see what mm-accuracy depth estimation will really bring to the table when it comes to driving. When it snows, how much error/noise will you have to filter out of such a system? Do you ignore it entirely?

Look, I live in Montreal (capital of road salt) and my side and rear cameras get buttered every day, and every day I get messages that this or that camera is blocked. Nonetheless, Autopilot can handle driving on the road with the front cameras/radar. I also know that people drive with snow-covered windows or even just foggy windows. How? Probably because they can recognize moving objects and lights through the foggy windows, at least well enough to determine if it's safe to change lanes or not. I don't see why the perception system couldn't do the same if some light is still getting to the cameras. On top of that, it has a 360° close-proximity view with ultrasonics for low-speed maneuvers. Don't forget that our cars were summoning in and out of garages without cameras; the lizard brain is just using the ultrasonics.

LIDAR requires line of sight; I really don't understand how you can rely on this technology for redundancy in inclement weather. If anything, it will be a source of greater error in these situations. HW2.5 cars don't drive much differently than HW3 cars; is it possible we haven't even scratched the surface of what's possible with a vision/ultrasonic/radar sensor suite and CNNs? I would bet my money on the guy who is launching reusable rockets two days in a row before the rest of the skeptics, who still have not achieved real-time modeling of the world around them on affordable passenger vehicles. Let's wait and see what the system is truly capable of instead of assuming we know what we're talking about after reading a few articles and watching YouTube videos on lidar.

It's currently impossible for a camera-only system to achieve the ~99.99999% accuracy that's needed, not in autonomous driving and not in other areas of deep learning. It's about levels of accuracy and rates of failure. How many miles can you go without a serious perception failure? You need to be able to go millions of miles.

Our latest efforts in deep learning are not close. Take, for example, the fact that there are billions of Android phones and hundreds of millions of Google Homes, and every time you use their voice app, they get every bit of the raw audio data (unlike Tesla, which gets well below ~0.1%). Yet Google still hasn't solved voice recognition, and we are nowhere close to solving computer vision. Same with Amazon's Alexa (hundreds of millions of devices), and still they fall short.

So the noise that "all you need is a lot of data" is simply misleading.

This is why having different sensor modalities that fail/excel in different ways, so that they complement one another, is the key. It addresses the accuracy problem because radar doesn't fail in the same scenarios as cameras and lidars, lidar doesn't fail in the same scenarios as cameras and radars, and vice versa.

So instead of needing ~99.99999% from one sensor modality, you now only need ~99.99% from each modality. It's not lidar vs. camera: if someone had a lidar-only system, they would also need to reach ~99.99999% accuracy with just lidars, which is also not currently possible.
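The rough arithmetic behind that claim, under the strong assumption that modality failures are independent in a given scenario (in reality they are partly correlated, e.g. heavy precipitation hurts cameras and lidar together, so the real gain is smaller):

```python
# Back-of-the-envelope redundancy math, assuming independent failures per modality.
per_modality_accuracy = 0.9999           # ~99.99% per modality
failure = 1.0 - per_modality_accuracy    # 1e-4 chance a single modality misses

# Probability that camera, lidar, and radar all miss the same object at once:
joint_failure = failure ** 3             # 1e-12
combined_accuracy = 1.0 - joint_failure  # ~99.9999999999%

print(joint_failure, combined_accuracy)
```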
 
You don't rely on just lidar as your only sensor. You use lidar in conjunction with other sensors.
To quote Elon:
The best part is no part, the best process is no process!

Adding more complexity to a system does not inherently make it "more safe".
Sorry, but you sound like an insecure middle schooler who's afraid others will point out you don't have the latest gadget.
 