
Is current sensor suite enough to reach FSD? (out of main)

change for no reason: I don't believe I've ever seen that. The closest I can think of is a spot on I-44 west of St. Louis where there are no interchanges, but for some reason it thinks it needs to move left to stay on the route. It will try to change lanes, and the screen message lists that as the reason. The case I describe appears to be a problem with how the navigation route is stitched together.
This happens on about 50% of the interchanges in Kansas, 25% in Oklahoma, one or two in Texas, and some in Nebraska. (It appears that the wider the median the more likely it is to occur--GPS issue?) In Nebraska almost every speed limit (on highways) shows ten to twenty mph too low (signs 65, NoA 50 is common) and many of those highways are the kind where NoA sticks to the speed limit so you can't use it. 2020 X with the latest software.
 
I don't own a Tesla, and I don't even have a driver's license, so maybe I don't fully understand the problem you're describing, but it doesn't sound like adding more cameras would fix this.

Besides, we don't even know if it's a problem for the neural nets, or whether they can learn to recognize objects during rain distortion.

Clearly it is an issue for certain customers like yourself, but I haven't heard anyone suggest it prevents NoA from making lane changes, for example, so I'm not so sure the lack of additional rear cameras and/or a rear radar prevents Tesla from developing FSD that is safer than a human.
My car has no problem backing up in the rain when I summon it. There’s tons of videos on YouTube showing people getting picked up by their Tesla in the rain. So I don’t think it’s an issue.
 
Backing in a parking lot is done at 1 mph. Tesla v2.0 hdw suite (Oct 2016+) has 12 ultrasonic sensors with a range of dozens of feet distributed about the car. An obscured rear camera view is not a safety issue when there is already another independent data stream usable for parking / backing.

The following sensor suite was proposed in a 2013 IEEE.org article. Notice the updates Tesla made in Oct 2016? Can you think of why a rear-facing radar was deleted from this spec?

[Attached image: proposed sensor suite diagram]


Elon has already stated that there is a v2 of the FSD computer in the product development pipeline. Its expanded capabilities will help tighten the 'training loop' between data collection/analysis and adapting the neural net to learn from experience.

None of this requires new sensors. But it is the way Tesla plans to improve FSD.
The problem that concerns me is passing on 2-lane roads, which I drive a lot. It's hard enough to see around a truck from the driver's seat, much less from the center of the car. I'll be interested in how they solve this.
 
Some of you mention: as long as FSD is safer than a human driver.... But is that enough?

Will the public be understanding towards the robotaxi network when a robotaxi carrying passengers hits an object? How about an animal? How about a human?

And despite the fact that it might be 10x or even 100x safer than a human (based on proof) what are the headlines going to say? And how will regulators take that incident? Will they instantly stop the entire network until further investigation? How will that affect the stock price at least in the short-term?

It’s one of the hardest feats of engineering and even once it’s functional, it can still cause Tesla massive problems. People are just not logical enough...

And of course I cannot wait for FSD to be out. Just raising some key points.
 
I use it, but Mad Max acts more like Nervous Nelly. The only time I've found it moves into the fast lane with no slower traffic in front is when it sees an exit: it wants to follow the route and thinks the right lane is actually an exit-only lane. The algorithm for getting out of the fast lane seems to be: if there are no cars behind, stay in the fast lane; if there is a car behind, move over. I'm not sure why that was chosen over moving back over once the slower car is passed, unless there are more slow cars ahead and nothing behind. I have found that if there is a car behind, it does move over quickly.

I don't believe there is an algorithm for this type of lane-change decision that is programmed by humans. This is where Tesla differs from most others in their approach. (Search YouTube for Karpathy's Software 2.0 talk, where he explains.) The lane-changing behavior will be based on learning from data captured over millions of miles, so it will represent something more like the fleet's collective approach to lane changing.
 
Do you really believe that it is difficult for Tesla's AI to normalize vision for dirt and rain? If so, why?

Almost 2 years after release, NoA disables itself in even moderate rain... and in HEAVY HEAVY rain even basic AP, which has been around years longer and had more time to be "normalized" for rain, turns off.



Isn't a Tesla's visual capability already superhuman?

In that it can look in multiple directions at once - yes.

But it can't see as far (based on listed specs of the cameras) as a human.

It can't "turn its head" to let a working, unobscured camera see what an obscured camera can't.

And the cameras are vastly lower resolution than the human eye (it's getting super far afield to dive into where/when/if that makes a difference, but it's certainly the opposite of the visual capability being superhuman).

The cameras also can't see low/close to the car at all (this is part of why there's no overhead 360 view- that'd require additional cameras)...

And currently there's no notion of object permanence - if something moves out of one camera's field of view, it's GONE; if it appears in another camera, that's a new thing. That's part of why surrounding vehicles pop in and out of the display.

That last one is allegedly going to be "fixed" with the AP re-write, but you can't do much about the others since they're physical HW limitations.
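To make "object permanence" a bit more concrete, here's a minimal sketch (purely illustrative, with made-up names and thresholds - not Tesla's code or the AP re-write) of the difference: detections get associated with persistent tracks that are aged out gradually, instead of every camera hand-off creating a brand-new object.

```python
# Minimal, illustrative sketch of object permanence in a multi-camera tracker:
# detections from any camera are matched to existing tracks instead of spawning
# a new object every time something appears in a different view.
from dataclasses import dataclass


@dataclass
class Track:
    track_id: int
    position: tuple          # (x, y) in a shared, car-centric frame
    frames_missing: int = 0  # how long since any camera last saw it


class MultiCameraTracker:
    def __init__(self, max_missing=30, match_radius=2.0):
        self.tracks = {}
        self.next_id = 0
        self.max_missing = max_missing    # keep "unseen" tracks alive this long
        self.match_radius = match_radius  # metres, for naive nearest matching

    def update(self, detections):
        """detections: list of (x, y) positions already fused into one frame."""
        matched = set()
        for det in detections:
            # Greedy nearest-track association; a real system would use velocity
            # models and appearance features, not just distance.
            best = min(self.tracks.values(), default=None,
                       key=lambda t: (t.position[0] - det[0]) ** 2 + (t.position[1] - det[1]) ** 2)
            if best and ((best.position[0] - det[0]) ** 2 + (best.position[1] - det[1]) ** 2) ** 0.5 < self.match_radius:
                best.position, best.frames_missing = det, 0
                matched.add(best.track_id)
            else:
                self.tracks[self.next_id] = Track(self.next_id, det)
                matched.add(self.next_id)
                self.next_id += 1
        # Tracks nobody saw this frame are aged out gradually rather than deleted
        # immediately: this is the "permanence" that stops cars popping in and out.
        for t in list(self.tracks.values()):
            if t.track_id not in matched:
                t.frames_missing += 1
                if t.frames_missing > self.max_missing:
                    del self.tracks[t.track_id]
```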



Isn't the obvious solution here to make sure the camera functions properly and doesn't get distorted rather than add more cameras?

How do you do that when there's no physical way to keep the lens clear?


Also, it sounds like this is only a problem when reversing in parking lots? Or does this also affect NoA performance on highways in your experience?


See above- NoA turns off in even moderate rain (dropping to basic AP), and even basic AP can turn itself off in very heavy rains.


Rain (and forget snow!) is a BIG problem for camera based self-driving cars.

There's a reason Waymo's main pilot program was in a little suburb in Arizona where it almost never rains.

Speaking of snow BTW- apparently it's a big enough issue Tesla added a front radar heater to the Model Y.

The S/3/X don't have that hardware though.


Backing in a parking lot is done at 1 mph. Tesla v2.0 hdw suite (Oct 2016+) has 12 ultrasonic sensors with a range of dozens of feet distributed about the car. An obscured rear camera view is not a safety issue when there is already another independent data stream usable for parking / backing.

Your Tesla might be going 1 mph... but the cross-traffic in the parking lot is probably going 30.

The ultrasonics are worthless in that situation compared to the rear-radar virtually everyone else in the industry uses for this job.


The reason Tesla didn't add the rear radar is cost.

Their 'solution' is very clear... the car won't "need" it because it's not meant to back out of spots.

Seriously.

Try self-park sometime.

It backs into spots

So that it pulls out forward. Where it has multiple cameras, and radar, available to see cross traffic.

That's their solution.

And objectively, backing in is safer so great.

But as many have noted, in busy parking lots you're gonna have issues with people getting pissed off when your car pulls past a spot, then slowly backs into it, holding up traffic... and in some cases those pissed-off people following inches behind won't let you in either.

Others in discussions of this very thing also point out that sometimes backing in is NOT a solution at all.... for one, some lots prohibit it for various reasons.... for another, you might want the rear facing out, not backed up against another car, for loading reasons.

Anyway- the above is why I'm very confident Tesla can offer L3 autonomy on the highway with existing HW in all cars made since late 2016.... (and given that'd cover 95% of my driving- I'm happy to let em keep my FSD money if I get that).

The car is responsible for the driving in highway situations - the driver is NOT required to be paying active attention or even touching the wheel... so you could be reading a book, watching a movie, playing a game, whatever....


But it MAY need the human to take back over, with some amount of warning, if, say, the weather is getting really bad - so the driver still needs to be awake, and in the driver's seat, at all times, even if not actively paying attention to the driving.

(and technically if they can program it to safely pull over to the shoulder in such situations if the human ignores it they can probably get away with labeling that L4 even)

But I remain HIGHLY dubious that L5 robotaxis that just work everywhere, all the time, are possible with current HW, given it can't keep highway NoA on/working in moderate rain - and it would be even worse in non-highway/urban situations, where there are FAR more objects it needs to visually ID and track.


EDIT- having now seen evermore's post.... based on his number for HW3 likely being able to "de-rain" a single camera at 3 fps... and the cameras in the car operating at 36 fps... that means you'd need a computer ~12x more powerful than HW3 to handle one camera.... or ~96x more powerful to handle all of them (which it'd need to, especially with the re-write relying on ALL the cameras to build a 360 view of the world).

(Even if we assumed this wasn't needed for the 3 front cameras because of the windshield wipers - something I'm not confident is sufficient in heavy rain or snow - that's still ~60x more power than HW3 offers, just for the other 5 cameras.)

That's a lot of additional compute.
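For clarity, here's that back-of-the-envelope math spelled out; every number is an assumption carried over from the EDIT above (3 fps de-raining on HW3, 36 fps cameras, 8 cameras total), not a measured figure.

```python
# Back-of-the-envelope math from the post above; all inputs are the post's
# assumptions, not measured figures.
derain_fps_on_hw3 = 3     # frames/sec HW3 might de-rain (estimate discussed below)
camera_fps        = 36    # frames/sec each camera produces
num_cameras       = 8
rear_side_cameras = 5     # cameras without wipers, if the 3 front ones are excluded

per_camera_factor = camera_fps / derain_fps_on_hw3        # ~12x
all_cameras       = per_camera_factor * num_cameras       # ~96x
excluding_front   = per_camera_factor * rear_side_cameras # ~60x

print(f"Extra compute needed: {per_camera_factor:.0f}x per camera, "
      f"{all_cameras:.0f}x for all {num_cameras} cameras, "
      f"{excluding_front:.0f}x if the front cameras are skipped")
```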
 
A quick Google search returned at least one paper describing how water distortion on a lens can be significantly reduced with software: "Image restoration via de-raining", link https://arxiv.org/pdf/1901.00893.pdf

Caveat 1: I don't know how well this can be done in real time without taking too many computing resources.
Caveat 2: I can only find the paper on arXiv, an open repository. It apparently hasn't been published in a peer-reviewed journal, so I don't know whether to trust the paper.

Additional note: If AI can really negate the effect of water droplets that well, the corrected image would probably only be useful to the human doing the labeling for the computer. The AI would interpret the distorted image directly, without first de-distorting and then interpreting. I would assume the only thing needed would be enough labeled data with droplets on the lens, which would obviously save computing resources compared to processing the image twice.

Caveat 3: I'm no AI expert.
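To illustrate that "enough labeled data with droplets on the lens" idea, here is a toy sketch (my own invention for illustration, not anything from the paper or from Tesla) of synthesizing droplet distortion on clear, already-labeled frames, so a network could be trained to read the scene through the distortion directly:

```python
# Illustrative only: synthesize crude droplet-like distortion on clear,
# already-labeled frames. The droplet model below is a stand-in, not the
# method from the paper.
import numpy as np

def add_fake_droplets(image, num_drops=20, max_radius=15, rng=None):
    """Return a copy of `image` (H, W, 3 uint8) with crude droplet-like blobs."""
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = image.shape[:2]
    for _ in range(num_drops):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        r = int(rng.integers(3, max_radius))
        y0, y1 = max(cy - r, 0), min(cy + r, h)
        x0, x1 = max(cx - r, 0), min(cx + r, w)
        patch = out[y0:y1, x0:x1].astype(np.float32)
        # A droplet acts roughly like a tiny blurry lens: mix the patch with
        # its local average and brighten slightly to mimic refraction highlights.
        blurred = patch.mean(axis=(0, 1), keepdims=True)
        out[y0:y1, x0:x1] = np.clip(0.6 * patch + 0.4 * blurred + 10, 0, 255).astype(np.uint8)
    return out

# Training pairs would then be (add_fake_droplets(frame), original_labels),
# so the network learns to interpret the scene through the distortion directly.
```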

This is a great find -- the research in the paper is around exactly this problem. I became curious and performed a quick read of the paper; here's my summary:

There are two related problems: the first is when droplets or particles in the atmosphere occlude vision. The paper does not deal with this, but cites references to other techniques that do, so "seeing through rain" may be a solved problem. I did not follow the links to evaluate that on my own. This paper, however, deals with the second problem: distortion from water droplets adhering to the camera lens.

They found that while you can train a neural network with both rainy and clear images and have it function, the results are not as good as if you apply a specialized de-raining process to the raindrop-distorted images and feed the network only clear images. Their benchmark was accuracy in identifying road labels, so it seems like the results of this would be useful for real-time on-board driving.

Their technique uses a generative adversarial network (GAN); this is the technique you may have seen used to automatically generate pictures of cats but also other useful things (replace a background in an image, etc). They configure and train it specifically to recognize that most of the image doesn't need to be modified or replaced. Their source code is available, so if you have a spare GPU you aren't using for mining Dogecoin right now, you could try it out. Their cameras are the same resolution as Tesla's, and they say their process takes about one second on an Nvidia Titan X; from what I can gather, it seems like HW3 may run about 3x that. Using napkin math, perhaps HW3 could de-rain 3 frames per second using this technique. That's not especially fast, but of course this is a research paper not using the actual hardware and software in Teslas.

What I get from all of this is that it's possible to clean up images distorted by raindrops adhering to the camera using techniques similar to what Tesla is already using (neural networks) on HW3. Who knows whether or how they might use a technique like this, but I find it interesting to know that it could be done.

There's a great image on the first page of the paper showing the results.
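For anyone curious what "a GAN that knows most of the image doesn't need to be modified" might look like in code, here's a toy PyTorch sketch. The architecture, layer sizes, and names are invented for illustration and are not the paper's model (their actual source code is linked from the paper):

```python
# Toy sketch of a de-raining generator that predicts a soft droplet mask and a
# replacement image, so untouched regions of the frame pass through unchanged.
# Everything here (layer sizes, names) is invented for illustration.
import torch
import torch.nn as nn

class DerainGenerator(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.paint_head = nn.Conv2d(channels, 3, 3, padding=1)  # what to paint into droplet regions
        self.mask_head = nn.Conv2d(channels, 1, 3, padding=1)   # where the droplets are

    def forward(self, rainy):
        feats = self.encoder(rainy)
        painted = self.paint_head(feats)
        mask = torch.sigmoid(self.mask_head(feats))  # ~0 means "keep the original pixel"
        # Only droplet regions get rewritten; the rest of the frame is untouched,
        # which is the property the paper exploits.
        return (1 - mask) * rainy + mask * painted, mask

# The generator would be trained adversarially against a discriminator, plus a
# pixel loss against clear ground-truth frames. Quick shape check on a small frame:
gen = DerainGenerator()
dummy_frame = torch.rand(1, 3, 256, 256)  # downscaled stand-in for a camera frame
clean_estimate, droplet_mask = gen(dummy_frame)
print(clean_estimate.shape, droplet_mask.shape)
```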
 
For cars with air suspension, the car already has a compressor. It would be cheap and easy to have a small nozzle focused on the rear camera to clear rain drops that cling. Just needs an accumulator and valve near the rear camera. The quantity of air needed is very small so even if it needed to be applied frequently, it should not use vast amounts of power. Could be made automatic if the computer perceives that resolution is being lost.
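The "automatic" part is straightforward in principle; here's a hypothetical sketch (the valve interface and threshold are made up, and the blur measure is a standard but crude one) of deciding when the rear camera looks degraded and firing a short air purge:

```python
# Hypothetical sketch: fire a brief air purge when the rear camera image looks
# degraded. The valve object and the threshold are invented for illustration;
# the variance-of-Laplacian measure is a common, if crude, sharpness check.
import cv2

BLUR_THRESHOLD = 60.0  # would need tuning per camera; arbitrary here

def frame_looks_degraded(frame_bgr) -> bool:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness < BLUR_THRESHOLD

def maybe_purge(frame_bgr, valve):
    """`valve` is any object with an open_for(seconds) method (hypothetical)."""
    if frame_looks_degraded(frame_bgr):
        valve.open_for(0.2)  # short burst from the suspension compressor's accumulator
```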
 
For cars with air suspension, the car already has a compressor. It would be cheap and easy to have a small nozzle focused on the rear camera to clear rain drops that cling. Just needs an accumulator and valve near the rear camera. The quantity of air needed is very small so even if it needed to be applied frequently, it should not use vast amounts of power. Could be made automatic if the computer perceives that resolution is being lost.
It needs to be strong enough to wipe off mud and slush, not just water drops. I don't see it working without a washer system.
 
For cars with air suspension, the car already has a compressor. It would be cheap and easy to have a small nozzle focused on the rear camera to clear rain drops that cling. Just needs an accumulator and valve near the rear camera. The quantity of air needed is very small so even if it needed to be applied frequently, it should not use vast amounts of power. Could be made automatic if the computer perceives that resolution is being lost.
I thought the simplest solution was to mimic how the human eye gets cleaned by blinking. Instead of an eyelid swiping the surface, though, the camera would be housed in a clear ball with a rubber seal where it attaches to the body of the car. If the camera senses an obstruction, a servo wheel on the back side of the clear ball would rotate the ball 180 degrees, and the rubber seal would wipe the ball clean, ready for the next spin. The camera wire would be threaded along a stationary axle through a grommet, so the camera stays fixed in place and unexposed to the elements.
 
In regards to this weekend's "Doubt" discussion (completely naturally introduced, I'm sure) on Tesla's FSD approach: last year Tesla filed a patent for an automated laser cleaning system for glass on vehicles and solar panels. (The option to keep Summer safe won't be immediately available.)


So you seem to be agreeing Tesla will need additional hardware- since you cite to a patent that (potentially) resolves the problem of obscured cameras by using... additional hardware.


I suppose the best investor-related tie-in question then becomes how much it'll cost the company to either retrofit that on all cars whose owners already paid for FSD, or refund the FSD money to those to whom they can't deliver due to insufficient HW, if they determine that's cheaper/better than retrofits.
 
So you seem to be agreeing Tesla will need additional hardware
False. I see no evidence to suggest the system isn't perfectly capable as is; the network will simply not allow a vehicle to operate in the fleet until all sensors are functional (manually cleaned off on cars/trucks without an automated future feature, in the rare instances where they become obstructed).
 
False. I see no evidence to suggest the system isn't perfectly capable as is; the network will simply not allow a vehicle to operate in the fleet until all sensors are functional (manually cleaned off on cars/trucks without an automated future feature, in the rare instances where they become obstructed).


Then it's weird you cited Tesla patenting additional hardware specifically to clean off glass and camera lenses.

If that's not needed- why would they develop, patent, and potentially add it?


Further- how does a Robotaxi, with no owner in it, "manually" clean off the sensors?

Does it just pull over mid-trip and ask the passenger to get out and do it?

Or does it stop mid-trip and say "Sorry I've been removed from the fleet due to an obscured camera... get out and walk"?


And how does one "manually" keep all the cameras clear during heavy rain when they're going to be consistently obscured over and over again during said rain.... as happens in current cars today with current hardware all the time?


Or does the app just not allow cars on the network in any area expecting bad weather until the weather has passed?


All of the above are 100% non-issues for L3 driving... as that allows the car to do the real driving MUCH of the time- with a human driver available for those situations where there's a sensor or weather issue.

They're real, legit, happens-all-the-time problems for L5 driving though. (And L4 as well unless you define the ODD as "clear weather" which again is how Waymo kinda gamed it by picking a town in AZ with no snow and very little rain)





In any event- I'd strongly encourage folks to move this discussion to one of the threads over here:

Autopilot & Autonomous/FSD


As it's only relevant to the investor thread as far as how likely robotaxis are to happen on current HW (and potential HW retrofit or FSD refund costs).... and mostly THIS discussion has ignored that and broken down to "Here's the actual tech situation" and "NU UH! SHILL!" posts.

The tech situation discussion has its own forum, linked above.
 