Autonomous Car Progress

This probably isn't about data not being picked up but about data not being interpreted correctly or quickly enough. This has nothing to do with vision vs. lidar...

I wasn't saying anything specific about either system. I believe the Uber system uses lidar, and it did not react. The vision system would use a camera and the footage showed the person was visible in time to react. That was all I was stating.

Either system as a whole needs to meet the safety requirements whether the weakness is in the initial dataset or interpretation thereof.
 
Yep, the hypothesis I proposed (a person stepping off the curb) does not apply to this case. Does that invalidate the logic/physics around it? It seems reasonable to discuss not just specific events, but also the general systemic issues of driving with uncontrollable external actors.

In this case, the car was overdriving its headlights (although the camera's dynamic range likely does not match human vision). At 40 MPH with about 2 seconds from first glimmer of the shoe until impact, that is about 117 feet, a little under double the stopping distance at full braking. So reaction time would need to be less than 1 second (slightly more time if the car swerves). A vision system could have detected the person and stopped the car in time; it's strange the LIDAR didn't.
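
A quick back-of-the-envelope check of those numbers (a Python sketch; the ~0.9 g full-braking deceleration is my assumption, not from any report):

```python
# Rough check of the arithmetic above: 40 mph for ~2 s, full braking assumed at ~0.9 g.
MPH_TO_FTPS = 5280 / 3600            # 1 mph = ~1.467 ft/s

speed_ftps = 40 * MPH_TO_FTPS        # ~58.7 ft/s
sight_time_s = 2.0                   # first glimmer of the shoe -> impact
sight_distance_ft = speed_ftps * sight_time_s             # ~117 ft

g_ftps2 = 32.2
decel_ftps2 = 0.9 * g_ftps2          # assumed full-braking deceleration
braking_distance_ft = speed_ftps**2 / (2 * decel_ftps2)   # ~59 ft

# Time budget left for detection + decision, at the initial speed:
reaction_budget_s = (sight_distance_ft - braking_distance_ft) / speed_ftps

print(f"sight distance:   {sight_distance_ft:.0f} ft")    # 117 ft
print(f"braking distance: {braking_distance_ft:.0f} ft")  # 59 ft
print(f"reaction budget:  {reaction_budget_s:.2f} s")     # ~1.0 s
```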


What's wrong with you Tesla fans? Why can't you get it through your thick skulls that others use vision systems as well as lidar?
Uber's cars have something like 20 HDR cameras (they don't use them for decoration), and even the released footage was from a dash cam instead of the footage their system actually sees.

I'm tired of repeating myself. You people never listen!

It certainly looks bad for Uber | Brad Ideas
 
I benefit from using full beam when driving at night, particularly on rural roads. Do car vision systems need that, or can they drive [just as well] all the time on dipped lights?

Would they benefit from supplementing with, e.g., infrared? (headlights and camera, presumably)
 
I believe the Uber system uses lidar, and it did not react. The vision system would use a camera and the footage showed the person was visible in time to react. That was all I was stating.
Since Uber uses a high-end LIDAR as well as a whole array of cameras and radar, your statement simply makes no sense. That's also why I said that this is probably more about data handling, prioritization, and interpretation than about the source of the data.

The dilemma is that the car didn't even try to brake, even though some sensors, be it lidar, radar, or camera, must have picked up something.
Nothing to be seen about "superhuman" reaction times here, sadly.
 
tl;dr: I'm not purposely disparaging any sensor type or company. If I did so, it was unintentional.

What's wrong with you Tesla fans? Why can't you get it through your thick skulls that others use vision systems as well as lidar?
Uber's cars have something like 20 HDR cameras (they don't use them for decoration), and even the released footage was from a dash cam instead of the footage their system actually sees.

I'm tired of repeating myself. You people never listen!

It certainly looks bad for Uber | Brad Ideas

I'm not trying to argue vision vs. lidar or Tesla vs. anyone. I had envisioned a different situation and was curious whether there was any way to be 100% safe. With more evidence, that scenario turned out not to be representative of this situation, but I think it is still a valid question: what should autonomous cars do about people they detect who 'could' enter the vehicle's path?

Since Uber uses a high-end LIDAR as well as a whole array of cameras, your statement simply makes no sense, which is why I said that this is probably more about data handling, prioritization, and interpretation than about the source of the data.

Makes sense. My only reason for mentioning lidar was that it is self-illuminating, and therefore would likely have had data on the pedestrian when the camera (at least the one we saw, which is not an actual sensor) did not.

Basically, in this situation (night, clear, long range, wide road) I would expect lidar to pick up the person earlier/easier than vision. I agree the issue likely lies in the software (possibly due to a low return from the jacket and a noisy return from the bicycle). If the return had been classified as a non-object, is it plausible that the vision system was not allowed to react either?
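
As a purely hypothetical illustration of that last question (this is not Uber's actual pipeline; every name, label, and threshold here is invented), a fusion layer that files a return under "non-object" can silently gate out every downstream sensor:

```python
from dataclasses import dataclass

@dataclass
class Track:
    source: str        # "lidar", "radar", or "camera"
    label: str         # e.g. "pedestrian", "bicycle", "non-object"
    confidence: float  # classifier confidence that this is a real obstacle

# Labels the (hypothetical) classifier treats as ignorable.
NON_OBJECT_LABELS = {"noise", "vegetation", "non-object"}

def should_brake(tracks: list[Track], min_confidence: float = 0.7) -> bool:
    """Single gate shared by all sensors: a 'non-object' verdict from the
    classifier suppresses the track no matter which sensor produced it."""
    for t in tracks:
        if t.label in NON_OBJECT_LABELS:
            continue                      # gated out for every sensor
        if t.confidence >= min_confidence:
            return True
    return False

# A strong lidar return misclassified as "non-object" never reaches the
# braking decision, and the low-confidence camera track dies at the threshold.
tracks = [Track("lidar", "non-object", 0.95),
          Track("camera", "unknown", 0.40)]
print(should_brake(tracks))  # False
```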
 
I haven't watched the keynote that the Waymo shows how its self-driving cars navigate snow article references, but Google I/O Recap: Turning self-driving cars from science fiction into reality with the help of AI has some more info and stats.
With machine learning, we can navigate nuanced and difficult situations; maneuvering construction zones, yielding to emergency vehicles, and giving room to cars that are parallel parking. We can do this because we’ve trained our ML models using lots of different examples. To date, we’ve driven 6 million miles on public roads and observed hundreds of millions of interactions between vehicles, pedestrians and cyclists.
...
We also rigorously test our ML models in simulation, where we drive the equivalent of 25,000 cars all day, every day.
 
Waymo and Jaguar announce a partnership where Jaguar will build up to 20K self-driving variants of the I-PACE for Waymo's autonomous fleet by 2020.


What about this (?) Apple Lexus self-driving car with over 20 lidar sensors?

[image: the Apple Lexus self-driving test car]



Note: unless it's just one of the 24 Hours of LeMons race cars, like the following one! :)

[image: a 24 Hours of LeMons race car]
 
Since Uber uses a high-end LIDAR as well as a whole array of cameras and radar, your statement simply makes no sense. That's also why I said that this is probably more about data handling, prioritization, and interpretation than about the source of the data.

The dilemma is that the car didn't even try to brake, even though some sensors, be it lidar, radar, or camera, must have picked up something.
Nothing to be seen about "superhuman" reaction times here, sadly.

There is no way the lidar did not pick up the woman as long as she was in its field of view. I don't know how many degrees the lidar can scan.
 
Yeah, I remember reading somewhere that the 'setting' for stationary objects was set too low.
It sounds like their sensor fusion was generating too many false braking events,
so perhaps the operator or the company turned down the sensitivity so it would not be annoying and testing would be more efficient...
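
That tuning trade-off is easy to sketch (the detections and confidence scores below are made up purely for illustration): raising the threshold that suppresses false braking also discards genuine-but-weak returns, like a dark jacket at night.

```python
# Hypothetical detections with invented confidence scores.
detections = [
    ("plastic bag",       0.35),   # false positive
    ("overpass shadow",   0.45),   # false positive
    ("pedestrian, night", 0.55),   # weak return, real hazard
    ("car, daylight",     0.90),
]

def braking_events(threshold: float) -> list[str]:
    """Everything at or above the threshold triggers a brake event."""
    return [name for name, score in detections if score >= threshold]

print(braking_events(0.30))  # brakes for everything, including false positives
print(braking_events(0.60))  # no false braking, but misses the pedestrian
```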
 
Nice to see another camera-only solution working in the wild. That means Tesla should be able to get it done eventually...

It says that the next version will have lidar, radar, and much better cameras than Tesla fitted.

This confirms that Tesla's sensors are inadequate and will need to be upgraded. Consequently, the processor will need to be upgraded to handle the extra data.
 
It says that the next version will have lidar, radar, and much better cameras than Tesla fitted.

This confirms that Tesla's sensors are inadequate and will need to be upgraded. Consequently, the processor will need to be upgraded to handle the extra data.
No, not at all: it says the cameras are perfectly adequate, and that the other systems will be there for redundancy. The “next version” will always be better; if you chase that horse you'll never catch it. We'll get faster computer upgrades, as stated many times.
 
No, not at all: it says the cameras are perfectly adequate, and that the other systems will be there for redundancy. The “next version” will always be better; if you chase that horse you'll never catch it. We'll get faster computer upgrades, as stated many times.

"Adequate" except that they don't have the redundancy to operate safely...

They wouldn't be adding extra cost if the low quality cameras were enough by themselves.
 
Redundancy means supporting the main perception channel with corroborating input, i.e., backing up the main sensor (vision) with better information in dire conditions. For example, when vision is on the brink of failure, lidar and/or radar can become the main input, letting the system see better in rain and fog, or position the car more accurately in its lane.
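
A minimal sketch of that fallback idea (the health scores and degradation weights are invented for illustration, not any vendor's model):

```python
def primary_sensor(conditions: dict[str, float]) -> str:
    """Pick which sensor should lead perception (all weights invented)."""
    # Cameras degrade with rain, fog, and darkness; take the worst offender.
    vision_health = 1.0 - max(conditions["rain"], conditions["fog"],
                              conditions["darkness"])
    # Vision stays primary while it is healthy enough...
    if vision_health >= 0.7:
        return "vision"
    # ...otherwise fall back to whichever backup copes best right now.
    # Lidar is self-illuminating but suffers in fog/rain; radar barely cares.
    lidar_health = 1.0 - 0.5 * conditions["fog"] - 0.3 * conditions["rain"]
    radar_health = 1.0 - 0.1 * conditions["rain"]
    return "lidar" if lidar_health >= radar_health else "radar"

print(primary_sensor({"rain": 0.0, "fog": 0.0, "darkness": 0.1}))  # vision
print(primary_sensor({"rain": 0.8, "fog": 0.6, "darkness": 0.2}))  # radar
```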