Is FSD based solely on camera input?

What's the latest news on the technology to be used as input for FSD? If it relies only on camera input, how is it going to handle situations where the cameras are unusable due to rain (a very common problem with the rear camera) or when it's icy outside (all the cameras tend to go totally fuzzy)?
 
What's the latest news on the technology to be used as input for FSD? If it relies only on camera input, how is it going to handle situations where the cameras are unusable due to rain (a very common problem with the rear camera) or when it's icy outside (all the cameras tend to go totally fuzzy)?

Yes, it's cameras only. Software can do wonderful things, even with fuzzy, rain-covered cameras. The main camera (above the rear-view mirror) is kept clear by the windshield wipers, though.

But today, you do sometimes get a warning that FSD may be degraded when it's raining.

Ask yourself this, though: how do you do it with your own vision system (your eyes)?
 
Yes, it's cameras only. Software can do wonderful things, even with fuzzy, rain-covered cameras. The main camera (above the rear-view mirror) is kept clear by the windshield wipers, though.

But today, you do sometimes get a warning that FSD may be degraded when it's raining.

Ask yourself this, though: how do you do it with your own vision system (your eyes)?
Well, I can most definitely see much better than what the cameras display during those events! What do you say to that?
 
Well, I can most definitely see much better than what the cameras display during those events! What do you say to that?
That's too easy. You are seeing raw data, probably not even at the resolution of the cameras. Tesla shows you the cameras more as a courtesy than anything else. What you don't see is the background processing.

40 years ago in college, I ran a routine that would take an out-of-focus picture and make it nearly perfect. Spherical aberration in a lens is pretty easy to solve. That's how NASA fixed Hubble!
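
For anyone curious, here is a minimal sketch of that kind of deblurring (Wiener deconvolution) in Python with NumPy. The Gaussian blur model and the noise constant are my illustrative assumptions, not the poster's actual routine or anything NASA or Tesla runs:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Toy defocus model: a normalized 2-D Gaussian point-spread function."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    g = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def wiener_deconvolve(blurred, psf, k=1e-2):
    """Classic Wiener filter: invert the blur where its frequency response
    is strong, and damp it where inversion would just amplify noise."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # blur kernel in frequency domain
    G = np.fft.fft2(blurred)
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F_hat))

# Demo: defocus a synthetic image, add sensor noise, then restore it.
rng = np.random.default_rng(0)
sharp = np.zeros((128, 128))
sharp[48:80, 48:80] = 1.0                  # a bright square on a dark field
psf = gaussian_psf(sharp.shape, sigma=3.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
blurred += rng.normal(0.0, 0.01, blurred.shape)
restored = wiener_deconvolve(blurred, psf)
print("RMS error, blurred: ", np.sqrt(np.mean((blurred - sharp) ** 2)))
print("RMS error, restored:", np.sqrt(np.mean((restored - sharp) ** 2)))
```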
 
That's too easy. You are seeing raw data, probably not even at the resolution of the cameras. Tesla shows you the cameras more as a courtesy than anything else. What you don't see is the background processing.

40 years ago in college, I ran a routine that would take an out-of-focus picture and make it nearly perfect. Spherical aberration in a lens is pretty easy to solve. That's how NASA fixed Hubble!
I hope you are 100% right. I am a Tesla fan, but I confess I am very skeptical that FSD can work safely with camera input alone. Why not use radar in conjunction with the cameras?
 
What's the latest news on the technology to be used as input for FSD? If it relies only on camera input, how is it going to handle situations where the cameras are unusable due to rain (a very common problem with the rear camera) or when it's icy outside (all the cameras tend to go totally fuzzy)?
Basically, yes, it relies only on camera input, though it also uses GPS data (via the nav system), etc. It doesn't use radar (though that may come back for the cars that have the newer HD radar) nor the ultrasonic sensors (again, for the cars that still have them).

However, even for cars WITH all the sensors, the car still won't be able to drive if blinded. No one is attempting to make a car that can drive using (e.g.) radar/lidar without operational cameras. Issues such as ice are handled with camera heaters (though this varies from model to model). Common sense, really... what would you do if your windshield were covered in snow?
 
Well, I can most definitely see much better than what the cameras display during those events! What do you say to that?
How do you know that? You cannot "see" the camera input... what you see on the screen is a VERY heavily processed version of the raw camera data, viewed on a moderate-resolution screen. Furthermore, your eye can only "see" full detail in a very small segment of your visual field of view (the fovea), whereas the camera has full resolution across its entire FOV.

I can tell you from actual experience that in heavy rain and poorly lit conditions, the car does MUCH better than my eyes at finding lane lines.
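
To make the "heavily processed" point concrete, here is a toy Python example (purely illustrative, not Tesla's pipeline): a brightness difference of a dozen gray levels is nearly invisible to a human on a screen, yet trivial for software to recover. Real pipelines use far more sophisticated local methods.

```python
import numpy as np

# A synthetic stand-in for a dark, rainy frame: lane paint only 12 gray
# levels brighter than the road, nearly invisible on a screen.
frame = np.full((240, 320), 30, dtype=np.uint8)
frame[:, 100:104] += 12
frame[:, 216:220] += 12

# Simple global contrast stretch: map [min, max] onto [0, 255].
lo, hi = int(frame.min()), int(frame.max())
stretched = ((frame.astype(np.float32) - lo) * (255.0 / (hi - lo))).astype(np.uint8)

print("lane-line contrast before:", hi - lo)               # 12 gray levels
print("lane-line contrast after: ", int(stretched.max()))  # 255 gray levels
```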
 
Tesla only shows a camera view (which is not optimized for our eyes) to give us a warm fuzzy. In the days with radar and USS, the neural net fought against itself with conflicting data, which degraded FSD performance (see the sketch after this post). By dropping the other inputs and re-training the net on visual information alone, FSD performs better since it doesn't have to waste processing time deciding which input to trust.

From my own experience, the first few iterations after they dropped radar were a step backwards. However, these last few releases this year are significantly better (in my opinion) with vision only than it ever was with vision/radar/USS. I've been using FSD since the first public release and it's much, much better!
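
As a toy illustration of that conflict (a hypothetical sketch: the inverse-variance rule and the numbers are my assumptions, not Tesla's actual fusion logic): when two sensors agree, fusing them helps, but when one is wrong, the fused answer can be worse than the good sensor alone.

```python
# Hypothetical sketch of a sensor-fusion conflict, NOT Tesla's code.
def fuse(camera_m: float, radar_m: float,
         cam_var: float = 4.0, radar_var: float = 1.0) -> float:
    """Inverse-variance weighted average of two range estimates (meters).

    The radar is trusted more (lower variance), which is exactly what
    hurts when the radar return is a ghost (e.g. a multipath reflection).
    """
    w_cam, w_radar = 1.0 / cam_var, 1.0 / radar_var
    return (w_cam * camera_m + w_radar * radar_m) / (w_cam + w_radar)

print(fuse(50.0, 49.0))   # sensors agree: 49.2 m, a better estimate
print(fuse(50.0, 120.0))  # radar ghost return: 106.0 m, badly wrong
```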
 
I hope you are 100% right. I am a Tesla fan, but I confess I am very skeptical that FSD can work safely with camera input alone. Why not use radar in conjunction with the cameras?
Because, until very recently, cost-effective radar didn't help. Sure, radar on fighter jets can do amazing things... at a cost. But the lower-cost radars that are viable for consumer use lack the acuity and precision to say much more than "there is something biggish someplace vaguely over there with an approach velocity of roughly XXX." So you rely on cameras to figure out what that something actually IS and how the car should react to it. And guess what? It turns out that when the visual system is doing that, it can figure out distance and velocity for itself pretty well (including direction, which radar cannot give you).

If someone blindfolds you, you can, with difficulty, feel your way around a room. But once the blindfold is off, you don't carry on feeling the furniture as you walk around, because you no longer need to. Your eyes tell you all you need to know. You COULD continue to feel everything, but would it help? (Apart from looking silly!)
 
I hope you are 100% right. I am a Tesla fan, but I confess I am very skeptical that FSD can work safely with camera input alone. Why not use radar in conjunction with the cameras?

This is always a comment that just floors me. How can FSD function with just cameras?
How are you able to see with just two eyes?
Some people only have one eye.
Some people can barely see the road.
Some people are color blind.

But they all drive.

So let me ask you this: if I put a detailed RADAR display in front of you, do you think you could drive better? What would you do if what your eyes see and what the RADAR tells you differ?

Tesla is already doing a great job of seeing the road and everything on it. If you look at any of the internal videos showing the detailed visualization, you will find that it sees far more than you can ever comprehend. It knows where essentially every car around it is and the speed at which each is going. It's watching the lines on the road in all of the lanes. It knows where the curbs and the buildings are at all times.

Next time you're out on the road, ask yourself how many of these things you actually saw. The answer is essentially none. If at any moment I stopped your driving and had you draw a picture of the things around you, how many would you get right?

When using computers, there's definitely such a thing as data overload. It's easy to overwhelm a processor with tasks. If you double the resolution of a camera, you increase the compute load by at least 4 times (a quick sketch follows below).
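
A quick back-of-the-envelope check of that scaling (the resolutions are arbitrary examples):

```python
# Doubling the linear resolution quadruples the pixel count, and hence
# (roughly) the per-frame processing work.
for w, h in [(640, 480), (1280, 960), (2560, 1920)]:
    print(f"{w}x{h}: {w * h:>9,} pixels per frame")
# 640x480:   307,200 pixels per frame
# 1280x960: 1,228,800 pixels per frame  (4x)
# 2560x1920: 4,915,200 pixels per frame  (16x)
```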

RADAR and LIDAR just don't do that much good when you get down to brass tacks. Sure, they may be more accurate, but does the difference between a car being 400 m away vs 410 m away really matter that much?
And neither RADAR nor LIDAR can see a red stop sign; they can't see colors! And the color of signage can be important.

I'm 99.99% positive that when Tesla looked at radar, they decided it just wasn't providing anything useful, and that it was taking extra processor cycles to do so.
 
Because, until very recently, cost-effective radar didn't help. Sure, radar on fighter jets can do amazing things... at a cost. But the lower-cost radars that are viable for consumer use lack the acuity and precision to say much more than "there is something biggish someplace vaguely over there with an approach velocity of roughly XXX." So you rely on cameras to figure out what that something actually IS and how the car should react to it. And guess what? It turns out that when the visual system is doing that, it can figure out distance and velocity for itself pretty well (including direction, which radar cannot give you).

If someone blindfolds you, you can, with difficulty, feel your way around a room. But once the blindfold is off, you don't carry on feeling the furniture as you walk around, because you no longer need to. Your eyes tell you all you need to know. You COULD continue to feel everything, but would it help? (Apart from looking silly!)
Yeah, cars and jets are two very different situations. Jets are looking at things that cannot be seen with the eye, things that may be 100 miles away.
If something is only 100 ft away from a jet, something went wrong a LONG time ago.

Oh, and when radar reports velocity, if I'm not mistaken, it reports only the velocity component toward or away from you. If the target is moving side to side, working that out takes post-processing.
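
A toy illustration of that last point (made-up numbers; we sit at the origin with x pointing down the road): Doppler radar measures only the projection of the target's velocity onto the line of sight, so a crossing target can read as nearly stationary.

```python
import math

def radial_speed(target_pos, target_vel):
    """Component of the target's velocity along the line of sight from us
    (at the origin) to the target; negative means it is approaching."""
    px, py = target_pos
    vx, vy = target_vel
    return (px * vx + py * vy) / math.hypot(px, py)

# Car 50 m straight ahead, closing at 10 m/s: the full speed shows up.
print(radial_speed((50.0, 0.0), (-10.0, 0.0)))   # -10.0

# Car 50 m ahead crossing the road at 10 m/s: Doppler sees ~nothing.
print(radial_speed((50.0, 0.0), (0.0, 10.0)))    #  0.0
```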
 
Yeah, cars and jets are two very different situations. Jets are looking at things that cannot be seen with the eye, things that may be 100 miles away.
If something is only 100 ft away from a jet, something went wrong a LONG time ago.

Oh, and when radar reports velocity, if I'm not mistaken, it reports only the velocity component toward or away from you. If the target is moving side to side, working that out takes post-processing.
Yes, that's why I said "approach velocity." Radar can only give you the component of motion toward the car. Useful data, but insufficient.
 
I thank everyone who provided me with much more insight into the technology Tesla is using, and even more, I'm happy that everyone who replied is very positive about that technology. As I mentioned at the beginning, I am a Tesla fan, but I had my concerns, especially when thinking about Tesla's upcoming robotaxis, which supposedly will not have anyone behind the wheel.
 
Well, I can most definitely see much better than what the cameras display during those events!
Nope, that is not right. You would be surprised at how much better the camera-plus-software combination performs compared to your eyes... assuming, of course, that both your eyes and the cameras are not degraded in some way. :)
 