
Autonomous Car Progress

It's difficult to merge sensors.

Search - this has been discussed hundreds of times.
Wrong, it couldn't be easier. The guy who used to oversee AI at Tesla claimed this, but, you know, he is not there anymore. Radar overrides the neural nets if it detects a collision; USS override the neural nets if they detect an object. Everybody else is doing exactly this. It doesn't prevent phantom braking, but it makes the car safer (for you and for other motorists and pedestrians) and easier to park.
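
For what it's worth, the override policy being described is just a priority rule: the physical sensors can force braking, but they can never veto it. A minimal sketch in Python - every name and the 0.3 m threshold here are invented for illustration, not Tesla's actual design:

def arbitrate(nn_brake_request: bool,
              radar_collision_imminent: bool,
              uss_min_distance_m: float) -> bool:
    # The vision/NN planner proposes; the physical sensors can only
    # escalate (add braking), never suppress it.
    if radar_collision_imminent:        # radar overrides the NN
        return True
    if uss_min_distance_m < 0.3:        # USS overrides at close range
        return True
    return nn_brake_request             # otherwise trust the NN

The asymmetry is the point: a false positive costs you a phantom-braking event, while a false negative costs you a collision.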
 
  • Disagree
  • Like
Reactions: spacecoin and EVNow
Elon has missed many timelines, but Elon rarely, if ever, misses on his grand visions, and his vision for camera-only full autonomy is his *most* confident prediction. When Elon isn't sure about the success of something, he will say so. ... Considering Elon is intimately involved with many of his company's projects, it's difficult to believe that Elon doesn't know what he's saying with regard to the camera-only approach and its limitations. And we are all armchair visionaries who know better than he does about something he's so confident about. That isn't to say we can't question Elon's timelines, but can anyone question that LIDAR is becoming more and more obsolete every day?
Wow! Weren't you just questioning someone else's logical disconnects? Sure, you can say Elon "rarely, if ever, misses his grand visions" if you give him a pass on all the "grand visions" that haven't come to fruition yet as simply "missed timelines." He will never miss a grand vision if the timeline ain't a part of it. And you haven't shared your bona fides with us, but they must be pretty dang impressive if you can confidently say that no one can question that LIDAR is not useful in autonomous driving when the experts at both Mobileye and NVIDIA are including LIDAR in their most recent and most capable autonomous driving platforms. As far as Elon not knowing what he's saying? I think it would be clear by now to everyone who has followed Tesla FSD progress and Elon's prognostications since 2016 that either:

1) Elon has NO IDEA what he is saying about timelines and the final capabilities of the Tesla FSD product; or
2) Tesla cracked fully autonomous driving back in 2018, but the Pentagon bought up all the software and forced Tesla to slow-roll it out to their customers. I mean, how do you think Tesla got all that money to build new plants in China and Germany? Most of the Model 3s driving around in Beijing and Eastern Europe are driverless CIA drones with dummies in the driver seat. And Karpathy didn't retire, he was liquidated when it looked like he was going to blow the whole deal. All his social media content is now generated by a sophisticated CIA chatbot developed by the Israelis - why do you think all his tweets are about bunny rabbits and eating cheese in Belgium instead of about hardcore vision deep learning?

Which do you think is more likely?
 
Problem is not that Musk may be wrong on any or all of his insights. It's that he acts upon them as if they were definitely going to materialize in the near future - even though he doesn't have anything even remotely close to a working prototype. Fake it till you make it.
 
This is a great and meaningless discussion, but can anyone answer the question?

What deficiencies in vision-only do you see vs LIDAR? Please give specific categories/examples of failures or deficiencies.
In addition to vision/neural nets being far from perfect at the moment? As a concept, you mean, right? So assuming you put enough cameras to be able to see perfectly around the car (Tesla still doesn't offer a 360° camera view for some reason)? It doesn't work if it's dark, if the sensor is not clean, or if there's heavy rain.
 
One more problem with vision. Fast forward to 09:00

Tesla Vision refuses to see the red garbage container. Maybe because it's in the UK and you don't have them in the US, so the neural nets were not trained to see them. But it highlights another problem with vision generally. If you are about to hit an odd-shaped or oddly coloured object, even you as a human may fail to recognize it. The neural net, at least currently, won't either, unless it has seen enough of it before. The radar, on the other hand, will. And when you are close enough, the USS will as well. They will have no idea what it is you are about to run into, but they will know for sure that you are about to do it. And that's a level of certainty that the neural net will never give you. To me, it's a death wish to put your family in a car that doesn't have these safety features.
 
One more problem with vision. Fast forward to 09:00
Yes, it's important to acknowledge that lidar and radar don't rely on NNs to tell the distance to an object. A physical measurement beats guessing from a 2D image or video. Semantic cues also perform poorly at night, when there are very few reference objects, and Tesla doesn't use stereo vision (parallax) - see Technology | Compound Eye
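
To make the parallax point concrete: a rectified stereo pair recovers depth geometrically as Z = f·B/d, instead of inferring it from what objects look like. A toy Python sketch with invented numbers:

def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    # Depth Z = f * B / d for a rectified stereo camera pair.
    return focal_px * baseline_m / disparity_px

# e.g. 1000 px focal length, 30 cm baseline, 15 px measured disparity:
print(stereo_depth_m(1000, 0.30, 15))  # 20.0 metres

Lidar and radar return the range directly; stereo at least derives it from geometry rather than from whether the net recognizes the object.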
 
  • Like
Reactions: Doggydogworld
spacecoin said:
Let's see. Either lidar or radar sees through sun glare, heavy fog, smoke, and snow/slush on lenses (you need self-cleaning)... Cameras do not handle these scenarios well... Tesla has only two cameras (HW4) with cleaning.

And humans don't do very well in these situations either, by the way. The whole point of technology is that it allows you to do things that you yourself are not capable of, like holding your hand when there's sun glare or fog. What exactly is the genius in handicapping yourself by limiting your available senses to vision only? And it's not that you need to invent anything: the technology is there, everybody else (literally) is using it, and there are no complaints AFAIK.
 
Tesla Vision refuses to see the red garbage container. Maybe because it's in the UK and you don't have them in the US, so the neural nets were not trained to see them.

Well obviously Tesla is constantly improving its occupancy network (particularly for park assist), but that doesn't say much about examples of deficiencies of cameras vs LIDAR for FSDb.

Since we are talking about, say, Cruise (L4) vs Tesla (L2), what deficiencies in cameras vs LIDAR do you see that would make it impossible for Tesla to achieve an L4 service in ANY locale vs Cruise, right now?

Please consider as well that Cruise is on record saying that they need a remote operator intervention every 10-15 miles in SF.
 
You asked what the deficiencies are, and we gave them to you. Vision sensors have physical limitations (see above), and no matter how much you improve the NNs they will never be perfect. It's the nature of the beast, they only know what they've seen before.

Radars have deficiencies also. A radar has no idea what you are about to smash into, but that's still better than not knowing at all that you are about to, no? A radar won't stop you from smashing into a stealth bomber, either. But then again, in this particular case, neither will the neural net, because, just like with that red garbage container in the YouTube video, it was never trained for this situation.

So the natural question is: why not combine all of these different types of sensors, since they so obviously complement each other and make the car safer?
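
The quantitative case for fusion fits in one line: if the sensors fail independently, their miss rates multiply. A toy Python example with invented numbers (real failures are correlated - heavy rain degrades several sensors at once - so treat this as an upper bound on the benefit):

p_vision_miss = 0.01   # vision fails to detect the obstacle (made up)
p_radar_miss = 0.05    # radar fails to detect it (made up)

# Both must miss for the fused system to miss the obstacle:
print(p_vision_miss * p_radar_miss)  # 0.0005, 20x better than vision alone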
 
  • Like
Reactions: spacecoin
no matter how much you improve the NNs they will never be perfect. It's the nature of the beast, they only know what they've seen before.

This is the problem with any discussion about AVs.

People know perfection is impossible with any sensor, so that's why I posed the question about Cruise being L4 and still requiring remote operator intervention every 10-15 miles.

Cameras don't need to be perfect to be L4. The reason FSDb has performance problems is more related to the fact that it's trained on road geometry and semantics worldwide (intercontinental?). Tesla would rather achieve a generalized solution worldwide than waste time on feature engineering for a particular locale.

It's almost as if people forgot Uber killed a woman with LIDAR and Cruise ran into a bus and Waymo ran into sandbags (?).
 
  • Like
Reactions: JB47394
This is a great and meaningless discussion, but can anyone answer the question?

What deficiencies in vision-only do you see vs LIDAR? Please give specific categories/examples of failures or deficiencies.
Dirty cameras, bright sun, poor lighting, heavy rain, thick fog. But lidar-only would be a weak solution also.

Vision only is one sense only.
Humans would die in a day without the combination of all senses, or sensors (kind of a derivative), and humans also have a life history they can recall. The Tesla data is only used to tune the model. It should have prevision and know what to expect. My FSD seems to react way too late to navigation, like it is surprised all the time, with no recall on a road that is very familiar to me, a simple human.

As a control engineer, I believe that to ever get FSD to function like the fantasy would take:

  • massive computing power (much more than HW4)
  • more and better cameras in better locations
  • radar and/or LIDAR
  • ultrasonics for a wider beam
  • historical road navigation and condition data (pot-hole data)
  • beacons at construction sites that transmit updated information
  • HD microphones for audio (from people yelling to police sirens)
  • peer-to-peer communication and convoy mode with other smart vehicles
  • integrated weather data

plus nice things like augmented reality and lifestyle AI trip planning and routing, and, for tight or high-risk areas, special tape or sensors that you could put in your garage or on parking bollards.
 
Cruise being L4 and still requiring remote operator intervention every 10-15 miles.
First, can we get a source on this number, or did you just make it up? Secondly, there are no "remote operator interventions". The car sometimes asks for a human to weigh in if it's unsure. This is for safety reasons. It doesn't need a human to stop it from running into things or breaking the law. All this is top-class safety engineering. That's what autonomy and reliability are about. The system needs to understand its limitations.

Meanwhile, Tesla FSD can't even go driverless in Elon's silly Las Vegas tunnel system.
It's almost as if people forgot Uber killed a woman with LIDAR and Cruise ran into a bus and Waymo ran into sandbags (?).
Lidar didn't kill that poor cyclist. A premature deployment and a test driver who didn't do their job did. What's your point? All systems are imperfect, but at least some are safer by design than others. And some believe enough in their system to take on liability while testing it.

I'm sorry, but you're delusional if you think Tesla's existing system (or any other camera-only system) will get to geofenced L4 in the coming years. Perhaps it will happen late this decade if there are some major computer vision breakthroughs.
 
  • Disagree
  • Like
Reactions: EVNow and aataskin
Here is what one real expert thinks about Google and Waymo.

[attached screenshot: 1685641828480.png]
 
  • Funny
Reactions: Doggydogworld
I haven't seen any convincing argument that vision-only can't get to L4 within the next year. So far, we've got:

1) an example of an old video of the initial release of vision-only park assist bumping into a soft trash bin;

2) claims that a major breakthrough is needed in CV;

3) no videos of actual driving where vision-only can't drive because of camera limitations.

I drive 11.4.2 every day, and the only problems I've encountered can be solved by HD maps, nothing related to the vision system itself.
 
  • Like
Reactions: EVNow
I haven't seen any convincing argument that vision-only can't get to L4 within the next year
You understand that L4 autonomy is a driverless robotaxi, right? Send your kids to practice alone in the car, have the car return, and then jump into the backseat and sip champagne while it drives you and the missus to a restaurant? That's what you envision Tesla getting to in 12 months?

Just explain to me how you get to 5-15 miles per disengagement in 36 months, and then magically from 15 miles to 40,000 miles per disengagement in 12 months. Try a graphing tool?

Each "nine" (order of magnitude improvement) is harder, not easier, you know... I'll be amazed if the fix the auto wipers in 12 months and get to 30 miles per DE... 🙃
 
  • Like
Reactions: daktari