Is current sensor suite enough to reach FSD? (out of main)

What part of the hardware suite do you believe is insufficient for safer-than-a-human FSD?
1. More rear-facing cameras, because my rear camera is regularly distorted by raindrops. It would be helpful if the repeater camera views overlapped each other. I suggest 4 additional corner cameras (2 front, 2 rear).

2. Rear radar.

These are the two things that bother me the most.

For the record, I don't believe lidar is necessary.
 

For the record, your beliefs seem to be supported by nothing at all. Nobody cares what you think if your thoughts are based on random emotions.

Do you really believe that it is difficult for Tesla's AI to normalize vision for dirt and rain? If so, why?

You say you think that more cameras and radar are needed. Why? Isn't a Tesla's visual capability already superhuman? Why is more needed?

There's no doubt that the problem is hard. Tesla seems to be putting a massive effort into improving its software and compute capability. But no effort into the things you mention. I wonder why that is.
 

Why do you think these rear cameras and radar are necessary for safer-than-a-human FSD?

The rear camera being distorted by raindrops sounds like a technical problem rather than insufficient hardware, and even if it stops you from seeing clearly, it might not impact the neural nets as much.


It'd indeed be helpful if @Mo City could further elaborate on his views, but no need to be so rude/offensive. He is simply explaining his point of view, for which I am thankful, even if I see things differently.
 
One example is if the car is backing up in a parking lot (for whatever reason), rear camera distortion prevents a clear view of the surroundings. Doesn't matter if you define that as a technical problem or insufficient hardware.

If multiple 9s are going to be added after the decimal, a better view of the rear seems needed IMO. Radar and corner cameras would offer that.
 
Not that you asked me, but the most likely limiting factor is the silicon memory and compute power.

For a given neural network, inference latency and memory usage are constant, i.e. its computational characteristics are invariant regardless of the inputs.

Assuming we have an agreement on neural network inference computation characteristics, I interpret what you said to mean that a neural network capable of handling FSD is larger than what can fit inside AP3 memory. Why do you think so? And do you have a rough estimate of the minimum neural network size needed for FSD?
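
A minimal sketch of that invariance claim, using a PyTorch-style toy model (the architecture and sizes below are made up for illustration and have nothing to do with Tesla's actual networks):

```python
import torch
import torch.nn as nn

# A toy vision backbone; layers and sizes are hypothetical.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),
)

# Parameter memory is fixed by the architecture alone.
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params} ({n_params * 4 / 1e6:.2f} MB at fp32)")

# For a fixed input *shape*, the work done per frame is identical
# whether the frame shows an empty road or a crowded intersection:
clear_frame = torch.rand(1, 3, 224, 224)  # arbitrary content
rainy_frame = torch.rand(1, 3, 224, 224)  # different content, same shape
with torch.no_grad():
    out_a = model(clear_frame)
    out_b = model(rainy_frame)
# Same layers, same tensor shapes, same FLOPs; only the values differ.
print(out_a.shape == out_b.shape)  # True
```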
 

Isn't the obvious solution here to make sure the camera functions properly and doesn't get distorted, rather than adding more cameras?

Also, it sounds like this is only a problem when reversing in parking lots? Or does this also affect NoA performance on highways in your experience?
 

IMHO, there is a problem in bad weather, such as slushy melting winter snow being splashed up onto the cameras and obscuring them. Autopilot becomes non-functional in those conditions now. However, more cameras would not really solve this problem. The human driver can still see for one simple reason: the car has wipers to keep the windscreen clean so that the driver can see. So the solution should be something similar: technology to keep the camera lenses clean -- either tiny wipers on them or a washer spray like we have for the windscreen.
 
Is this a malfunction of the rear camera or a limitation?

I don't own a Tesla, and I don't even have a driver's license, so maybe I don't fully understand the problem you're describing, but it doesn't sound like adding more cameras would fix this.

Besides, we don't even know if it's a problem for the neural nets, or whether they can learn to recognize objects during rain distortion.

Clearly it is an issue for certain customers like yourself, but I haven't heard anyone suggest this prevents NoA from making lane changes, for example, so I'm not so sure a lack of additional rear cameras and/or a rear radar prohibits Tesla from developing safer-than-a-human FSD.
 
A quick Google search returned at least one paper describing how water distortion on a lens can be significantly reduced with software: "Image restoration via de-raining", link https://arxiv.org/pdf/1901.00893.pdf

Caveat 1: I don't know how well this can be done in real time without taking too many compute resources.
Caveat 2: I can only find the paper on arXiv, an open repository. It apparently hasn't been published in a peer-reviewed journal, so I don't know whether to trust it.

Additional note: If AI can really negate the effect of water droplets that well, the corrected image would probably only be useful for the human doing the labeling for the computer. The AI would interpret the distorted image directly, without first de-distorting it and then interpreting it. I would assume that the only thing needed would be enough labeled data with droplets on the lens, which would obviously save compute resources compared to processing the image twice.

Caveat 3: I'm no AI expert.
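
For what it's worth, the linked paper uses a learned restoration network. A much cruder classical baseline for *moving* rain streaks (not droplets stuck on the lens) is a temporal median over a few consecutive frames; a minimal NumPy sketch follows, with the caveat that on a moving car it would also need motion compensation first:

```python
import numpy as np

def temporal_median_derain(frames):
    """Pixel-wise median over a short window of consecutive frames.

    Moving rain streaks are outliers at any given pixel, so the median
    suppresses them. Droplets stuck on the lens are static and are NOT
    removed. frames: sequence of HxWx3 uint8 arrays of a mostly static
    scene (a moving camera would need motion compensation first).
    """
    stack = np.stack(list(frames), axis=0)  # (T, H, W, 3)
    return np.median(stack, axis=0).astype(np.uint8)

# Usage sketch: slide a 5-frame window over the incoming video stream
# and feed the cleaned frame to whatever does the interpretation.
# cleaned = temporal_median_derain(last_five_frames)
```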
 

Backing in a parking lot is done at about 1 mph. Tesla's v2.0 hardware suite (Oct 2016+) has 12 ultrasonic sensors, with a range of dozens of feet, distributed around the car. An obscured rear camera view is not a safety issue when there is already another independent data stream usable for parking/backing.
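
As a rough back-of-envelope (the ~8 m ultrasonic range here is the commonly cited figure for the Oct 2016 suite, so treat it as an assumption):

```python
# Back-of-envelope: warning time the ultrasonics give while backing up.
# Assumed figures: ~8 m sensor range (commonly cited for the Oct 2016
# suite; treat as an assumption) and parking-lot reversing speeds.
MPH_TO_MS = 0.44704

for speed_mph in (1.0, 3.0):
    speed_ms = speed_mph * MPH_TO_MS
    print(f"{speed_mph:.0f} mph -> {8.0 / speed_ms:.0f} s to cover 8 m")
# 1 mph -> ~18 s, 3 mph -> ~6 s: plenty of time to stop at parking
# speeds, even with the rear camera obscured.
```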

The following is the sensor suite proposed in a 2013 IEEE.org article. Notice the updates Tesla made in Oct 2016? Can you think of why a rear-facing radar was deleted from this spec?

[Image: proposed sensor suite diagram from the 2013 IEEE article]


Elon has already stated that there is a v2 of the FSD computer in the product development pipeline. Its expanded capabilities will help to tighten the 'training loop' between data collection/analysis and adapting the neural net to learn by experience.

None of this requires new sensors. But it is the way Tesla plans to improve FSD.
 
I don't think more hardware would be required in 99% of scenarios. I also believe that a robotaxi network where, in the other 1% of scenarios, certain cars had to be taken off the network until someone showed up to wipe the cameras would still be vastly profitable.

We sometimes forget just how bad humans are at driving. We get spray coming up on the windscreen and we can't see until the wipers activate. We get light bouncing off wet roads that blinds us. We can't see well through fog, we take our eyes off the road to look at a passenger as we talk to them, or get distracted by a pretty girl/guy on the street. We are busy shouting at talk radio. We are tired. We don't have perfect vision, or perfect reactions.

You should watch some of the laughable attempts some of the elderly drivers in my village make at reversing down the single-track lane past my house to let someone pass. It's comical.

FSD isn't going to drive perfectly, but it's likely going to drive better than most of us.

And to get back to investment... EVEN if this first pass of FSD/robotaxi only works in sunny climates on city streets, it's still worth tens of billions of dollars. Maybe hundreds. I'd happily buy an FSD car I could fall asleep in, if it would stop and beep at me once a year to ask me to go wipe a sensor/camera so it could carry on.
 
The only thing I see as lacking is a way to detect a fast-moving car in the fast lane when passing. Example: divided highway; you're going 65, passing someone going 60, and a car in the fast lane is going 90. I don't believe the 90 mph car can currently be detected in time (this might not be true for the rewritten software, but it appears to be true for the current software).
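
To put rough numbers on this scenario (the detection ranges below are assumptions for illustration, not published specs):

```python
# How long before the 90 mph car arrives, from the moment it is seen?
# Scenario from the post: ego at 65 mph, overtaking car at 90 mph.
MPH_TO_MS = 0.44704

closing_ms = (90 - 65) * MPH_TO_MS  # 25 mph closing speed ~= 11.2 m/s

# Assumed rearward detection ranges, purely for illustration:
for detect_m in (80, 100, 150):
    print(f"seen at {detect_m} m -> {detect_m / closing_ms:.1f} s to react")
# 80 m -> ~7.2 s, 100 m -> ~8.9 s, 150 m -> ~13.4 s. The open question
# is whether the rear/repeater cameras reliably detect and speed-rank a
# car that far back, especially at night or in rain.
```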
 
For human vision, water droplets alone don't interfere much (at least not in the S and X over the past seven-plus years). You're trying to detect objects, not read the license plate number. For slush and mud, it's a different story because they totally block the cameras. A robust cleaning system is going to be required for all-weather use. Currently, heavy rain triggers the "one or more cameras are blocked" message. I was actually impressed by how often the cameras come back online once past the heavy rain, and by how well it functions even with the blocked message.
 
Well, it would probably be more than once a year, but I basically agree.
 
I've wondered this myself, as I usually turn off NoA automatic lane changes because it:
  • takes too long to actually initiate the change and move into the fast lane
  • doesn't seem to factor in the speed of the approaching car, per your example
  • signals for a lane change when there's a car just behind me, causing them to wonder if I see them
  • behaves inconsistently, parking in the fast lane and not getting back over, or signaling to move but just staying put
  • will move into the fast lane for no apparent reason, e.g. when the car in front is 1,000 yards ahead
 
People's experiences will differ, so maybe that accounts for it, but what you are describing sounds remarkably like an older version of Autopilot.

  • takes too long: I used to have this frequently. I don't remember now which update it was, but in a single update there was a very significant change. So much so that I warned my wife to expect it.
  • doesn't factor speed: either I haven't seen this or we have different interpretations of the symptoms. Either way, no real comment.
  • signals for lane change when occupied: it should be doing this. I know that it is common to only signal a lane change when the driver is definitely going to make it (or to delay until the lane change has started, or to never even bother), but that is not how lane changes are supposed to work. The blinker is used to communicate intention to other drivers on the road. Will they back off to allow someone out? Most of the time, no. But there are a few polite drivers. If someone is in the lane, it signals to others that, yes, you aren't just idling behind the vehicle you follow, you do intend to pull out. This feature is done correctly.
  • not returning to lane: I have observed this in two forms. The first is when it doesn't see the slow vehicle exit the highway, which appears to be an issue of state. I noticed that the very first time I used Autopilot (so I guess in 2018), but it seems to have improved. In the more general case, I suspect it is showing training bias and simply isn't conforming to the "stay right" rule. In my experience, when it loiters it will stop doing so once a vehicle approaches from behind.
  • change for no reason: I don't believe I've ever seen that. The closest I can think of is a spot on I-44 west of St. Louis where there are no interchanges, but for some reason it thinks it needs to move left to stay on the route. It will try to change lanes, and the screen message lists that as the reason. The case I describe appears to be a problem with how the navigation is stitched together.
Not that NoA is perfect for me, but it is very good. The lingering issues I've had are substantially less frequent. For example:

I have it set for "mad max" and -- with the fast lane completely empty -- it will slow from 70 mph to 60 mph without ever offering to pass. It makes no sense; this isn't even a matter of determining the slow vehicle's speed, as it will decelerate without any apparent lower bound.

The current version cannot properly estimate distance for curves. This isn't an issue on the interstate, but some in-town curves are problematic. A perfect example was on a state highway where it is completely open, with a perfect view of the road as it goes straight away, and you can then see the gentle curve to the right. My Tesla dropped speed from 55 to 25 for at least 100 yards on this approach, which was pointless and would have been problematic if there had been anyone behind me. It was very much as if it thought it was 100 feet (instead of >100 yards) to a 90-degree bend (rather than a gradual curve).

Nor does it always follow the pavement. Same drive, later curve: it had been fine on the way out, but on the return it had every intention of launching off the pavement of a banked curve to "stay straight" onto a gravel secondary road. If I hadn't been paying attention that would've been... rough, to say the least. I try to trust the car, but there comes a point where you have to intervene. (To be clear, this was not NoA, so it had no route, but taking the secondary road would have required slowing down quite a bit.)

Pedestrian detection is also very iffy. Driving on highways through town where it's 35 mph, I have had mailmen run out in front of me multiple times, requiring intervention, with the Tesla never displaying the pedestrian. Most recently, on 28.6, it never registered an old lady jaywalking with her dog. She was slow, it was a long run-up where she was completely visible, and it never once registered a thing.

So, no, I have no illusions about current versions being perfect. But on interstates I have found it to be very good at driver assistance. Do I think it is good enough to run unsupervised in that use case? Not at all, but it sure has eased my drives. And, even in its current state, the way it handles stop signs makes it helpful for me in town as well.
 
I use it, but Mad Max acts more like Nervous Nelly. I've only found it moving into the fast lane with no slower traffic in front when it sees an exit: it wants to follow the route and thinks the right lane is actually an exit-only lane. The algorithm for getting out of the fast lane seems to be: if there are no cars behind, stay in the fast lane; if there is a car behind, move over. I'm not sure why that was chosen over moving over once the slower car is passed, unless there are more slow cars ahead and nothing behind. I have found that if there is a car behind, it does move over quickly.
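
Written out as pseudocode, the observed behavior and the suggested alternative might look like the sketch below -- both are speculative reconstructions for discussion, not Tesla's actual logic:

```python
# Speculative reconstruction of the observed "leave the fast lane"
# behavior vs. the alternative suggested above. Not Tesla's actual code.

def observed_policy(car_behind: bool) -> str:
    # What the post describes: only yield when pressured from behind.
    return "move right" if car_behind else "stay in fast lane"

def suggested_policy(car_behind: bool, passed_slower_car: bool,
                     more_slow_cars_ahead: bool) -> str:
    # Return right once the pass is complete, unless there are more
    # slow cars to pass and nobody is waiting behind.
    if car_behind or (passed_slower_car and not more_slow_cars_ahead):
        return "move right"
    return "stay in fast lane"
```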
 