> It's difficult to merge sensors.

1) If anything in the AV space is a solved problem, it's sensor fusion.
2) I wasn't talking about sensor fusion - read the post again.
> It's difficult to merge sensors.

Wrong, it couldn't be easier. The guy who used to oversee AI at Tesla claimed this, but, you know, he is not there anymore. Radar overrides the neural nets if it detects a collision; USS override the neural nets if they detect an object. Everybody else is doing exactly this. It doesn't prevent phantom braking, but it makes the car safer (for you, and for other motorists and pedestrians) and easier to park.
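To make that claimed policy concrete, here is a minimal sketch of override-style fusion. It is a toy illustration under my own assumptions (the names `SensorFrame` and `plan_action` and the decision order are hypothetical), not Tesla's or anyone else's actual code:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    vision_clear: bool     # neural-net verdict: path ahead is clear
    radar_collision: bool  # radar flags an imminent collision
    uss_object: bool       # ultrasonic sensors detect a close object

def plan_action(frame: SensorFrame) -> str:
    # Priority override: the physical sensors veto the neural nets,
    # in the order described in the post above.
    if frame.radar_collision:
        return "brake"   # radar overrides vision on collision risk
    if frame.uss_object:
        return "stop"    # USS override vision near obstacles (parking)
    return "proceed" if frame.vision_clear else "slow"

# Vision says clear, but USS sees a curb while parking -> the car stops.
print(plan_action(SensorFrame(vision_clear=True, radar_collision=False, uss_object=True)))
```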
Search - this has been discussed hundreds of times.
> Elon has missed many timelines, but Elon rarely, if ever, misses on his grand visions, and his vision for camera-only full autonomy is his *most* confident prediction. When Elon isn't sure about the success of something, he will say it. ... Considering Elon is intimate with many of his company's projects, it's difficult to believe that Elon doesn't know what he's saying wrt the camera-only approach and its limitations. And we are all armchair visionaries who know better than he does about something he's so confident about. That isn't to say we can't question Elon's timelines, but can anyone question that LIDAR is becoming more and more obsolete every day?

Wow! Weren't you just questioning someone else's logical disconnects? Sure, you can say Elon "rarely, if ever, misses his grand visions" if you give him a pass on all the "grand visions" that haven't come to fruition yet as simply "missed timelines." He will never miss a grand vision if the timeline ain't a part of it. And you haven't shared your bona fides with us, but they must be pretty dang impressive if you can confidently say that no one can question that LIDAR is not useful in autonomous driving when the experts at both Mobileye and NVIDIA are including LIDAR in their most recent and most capable autonomous driving platforms. As for Elon not knowing what he's saying? I think it would be clear by now to everyone who has followed Tesla FSD progress and Elon's prognostications since 2016 that either:
> Mobileye and NVIDIA are including LIDAR in their most recent and most capable autonomous driving platforms.

Isn't it funny that outside China these experts have not a single consumer car with FSD-like capability?
> Isn't it funny that outside China these experts have not a single consumer car with FSD-like capability?

Isn't it funny that the same is true for Tesla in the US/Canada?
> Wow! Weren't you just questioning someone else's logical disconnects? Sure, you can say Elon "rarely, if ever, misses his grand visions" if you give him a pass on all the "grand visions" that haven't come to fruition yet as simply "missed timelines." He will never miss a grand vision if the timeline ain't a part of it. And you haven't shared your bona fides with us, but they must be pretty dang impressive if you can confidently say that no one can question that LIDAR is not useful in autonomous driving when the experts at both Mobileye and NVIDIA are including LIDAR in their most recent and most capable autonomous driving platforms. As for Elon not knowing what he's saying? I think it would be clear by now to everyone who has followed Tesla FSD progress and Elon's prognostications since 2016 that either:

The problem is not that Musk may be wrong on any or all of his insights. It's that he acts upon them as if they were definitely going to materialize in the near future - even though he doesn't have anything even remotely close to a working prototype. Fake it till you make it.
1) Elon has NO IDEA what he is saying about timelines and the final capabilities of the Tesla FSD product; or
2) Tesla cracked fully autonomous driving back in 2018, but the Pentagon bought up all the software and forced Tesla to slow-roll it out to their customers. I mean, how do you think Tesla got all that money to build new plants in China and Germany? Most of the Model 3s driving around Beijing and Eastern Europe are driverless CIA drones with dummies in the driver's seat. And Karpathy didn't retire, he was liquidated when it looked like he was going to blow the whole deal. All his social media content is now generated by a sophisticated CIA chatbot developed by the Israelis - why do you think all his tweets are about bunny rabbits and eating cheese in Belgium instead of hardcore vision deep learning?
Which do you think is more likely?
> This is a great and meaningless discussion, but can anyone answer the question? What deficiencies in vision-only do you see vs LIDAR? Please give specific categories / examples of failures or deficiencies.

In addition to vision/neural nets being far from perfect at the moment? As a concept, you mean, right? So assuming you put enough cameras to see perfectly all around the car (Tesla still doesn't offer a 360° camera view, for some reason)? It doesn't work if it's dark, if the sensor is not clean, or if there's heavy rain.
Let's see: either lidar or radar can see through sun glare, heavy fog, smoke, and snow/slush on lenses (you need self-cleaning)... Cameras do not handle these scenarios well... Tesla has only two cameras (HW4) with cleaning.
> This is a great and meaningless discussion, but can anyone answer the question? What deficiencies in vision-only do you see vs LIDAR? Please give specific categories / examples of failures or deficiencies.

One more problem with vision - fast-forward to 09:00.
> One more problem with vision - fast-forward to 09:00.

Yes, and it's important to acknowledge that lidar and radar don't rely on NNs to tell the distance to an object. A physical measurement beats guessing from a 2D image or video. Semantic cues also perform poorly at night, when there are very few reference objects, and Tesla doesn't use stereo vision (parallax) - see Technology | Compound Eye.
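For the distance point, a back-of-the-envelope sketch may help: lidar/radar derive range directly from the pulse's time of flight, while a stereo rig (which, per the post above, Tesla doesn't use) infers depth from parallax, Z = f·B/d. All the numbers below are made up for illustration:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s: float) -> float:
    # Lidar/radar: range is a direct physical measurement of echo time.
    return C * round_trip_s / 2.0

def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    # Stereo vision: depth inferred from parallax, Z = f * B / d.
    return focal_px * baseline_m / disparity_px

print(tof_range_m(0.5e-6))               # ~75 m from a 0.5 us echo
print(stereo_depth_m(1000.0, 0.3, 4.0))  # 75 m
print(stereo_depth_m(1000.0, 0.3, 2.0))  # 150 m: a 2 px disparity error doubles the estimate
```

The last two lines show why inferred depth degrades at range: a small disparity error swings the estimate enormously, whereas a time-of-flight reading has no such amplification.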
spacecoin said:
> Let's see: either lidar or radar can see through sun glare, heavy fog, smoke, and snow/slush on lenses (you need self-cleaning)... Cameras do not handle these scenarios well... Tesla has only two cameras (HW4) with cleaning.
Tesla Vision refuses to see the red garbage container. Maybe that's because it's in the UK and you don't have those in the US, so the neural nets were not trained to see them.
> Well, obviously Tesla is constantly improving its occupancy network (particularly for park assist), but that doesn't say much about examples of deficiencies of cameras vs LIDAR for FSDb. Since we are talking about, say, Cruise (L4) vs Tesla (L2), what deficiencies in cameras vs LIDAR do you see that would make it impossible for Tesla to achieve an L4 service in ANY locale, as Cruise has, right now? Please consider as well that Cruise is on record saying that they need a remote operator intervention every 10-15 miles in SF.

You asked what the deficiencies are, and we gave them to you. Vision sensors have physical limitations (see above), and no matter how much you improve the NNs they will never be perfect. It's the nature of the beast: they only know what they've seen before.
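To illustrate the "they only know what they've seen before" point, here is a toy closed-set classifier; the feature vectors, prototypes, and threshold are all invented for the example and have nothing to do with Tesla's actual networks:

```python
import math

# Class prototypes "learned" from training data (hypothetical 2-D features).
PROTOTYPES = {"car": (1.0, 0.0), "pedestrian": (0.0, 1.0), "cone": (1.0, 1.0)}

def classify(feature, reject_below=0.5):
    # Inverse-distance similarity to the nearest known class.
    def score(proto):
        return 1.0 / (1.0 + math.dist(feature, proto))
    label, best = max(((k, score(p)) for k, p in PROTOTYPES.items()),
                      key=lambda kv: kv[1])
    # Anything the net has never seen lands far from every prototype:
    # it is either forced onto the nearest class or rejected outright.
    return label if best >= reject_below else "unknown"

print(classify((0.9, 0.1)))  # -> "car": close to a trained class
print(classify((5.0, 5.0)))  # -> "unknown": out-of-distribution object
```

That is the failure mode behind the red garbage container above: an object class absent from training either gets mapped onto the nearest familiar class or dropped entirely.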
> no matter how much you improve the NNs they will never be perfect. It's the nature of the beast: they only know what they've seen before.

> This is a great and meaningless discussion, but can anyone answer the question? What deficiencies in vision-only do you see vs LIDAR? Please give specific categories / examples of failures or deficiencies.

Dirty cameras, bright sun, poor lighting, heavy rain, thick fog. But lidar-only would be a weak solution too.
> Cruise being L4 and still requiring remote operator intervention every 10-15 miles.

First, can we get a source for this number, or did you just make it up? Second, there are no "remote operator interventions". The car sometimes asks a human to weigh in if it's unsure, for safety reasons; it doesn't need a human to stop it from running into things or breaking the law. All of this is top-class safety engineering - that's what autonomy and reliability are about. The system needs to understand its limitations.
> It's almost as if people forgot Uber killed a woman with LIDAR, and Cruise ran into a bus, and Waymo ran into sandbags (?).

Lidar didn't kill that poor cyclist. A premature deployment and a test driver who didn't do their job did. What's your point? All systems are imperfect, but at least some are safer by design than others. And some believe enough in their system to take on liability for testing it.
> I haven't seen any convincing argument that vision-only can't get to L4 within the next year.

You understand that L4 autonomy means a driverless robotaxi, right? Send your kids to practice alone in the car, have the car come back, then hop into the back seat and sip champagne while it drives you and the missus to a restaurant? That's what you envision Tesla getting to in 12 months?