I hope you are 100% right. I'm a Tesla fan, but I confess I'm very skeptical that FSD can work safely on camera input alone. Why not use radar in conjunction with the cameras?
This is always a comment that just floors me. How can FSD function with just cameras?
How are you able to see with just two eyes?
Some people only have one eye.
Some people barely can see the road.
Some people are color blind.
But they all drive.
So let me ask you this: if I put a detailed RADAR readout in front of you, do you think you could drive better? And what do you do when what your eyes see and what the RADAR tells you disagree?
Tesla is already doing a great job of seeing the road and everything on it. If you watch any of the internal videos showing the detailed visualization, you'll find that it sees far more than you could ever comprehend. It knows where essentially every car around it is and the speed at which each one is traveling. It's tracking the lane lines in all of the lanes. It knows where the curbs and the buildings are at all times.
Next time you're out on the road, notice how many of those things you actually see. The answer is essentially none. If I stopped you at any moment while driving and asked you to draw a picture of everything around you, how much of it would you get right?
With computers, there's definitely such a thing as data overload. It's really easy to overwhelm a processor with tasks. If you double the resolution of a camera in each dimension, you at least quadruple the amount of data the processor has to churn through.
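That quadrupling is just arithmetic on the pixel count. A minimal sketch, using arbitrary example resolutions (not Tesla's actual camera specs):

```python
# Illustration only: doubling a camera's resolution in each dimension
# quadruples the number of pixels per frame the processor must handle.
def pixel_count(width: int, height: int) -> int:
    return width * height

base = pixel_count(1280, 960)       # 1,228,800 pixels per frame
doubled = pixel_count(2560, 1920)   # 4,915,200 pixels per frame

print(doubled // base)  # prints 4
```

And that's just the raw data; many vision algorithms scale worse than linearly in pixel count, which is why "at least" four times is the right way to put it.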
RADAR and LIDAR just don't do that much good when you get down to brass tacks. Sure, they may be more accurate, but does the difference between a car being 400m away and 410m away really matter?
And neither RADAR nor LIDAR can see that a stop sign is red; they can't see color at all, and the color of signage can be important.
I'm 99.99% positive that when Tesla looked at radar, they decided it just wasn't providing anything useful, and it was burning extra processor cycles in the process.