Will Tesla ever do LIDAR?

Based on Mobileye's learning curve, Tesla will need higher-resolution cameras, and more of them (12 instead of 8), plus lidar (or high-res radar) for real autonomous driving.

No. Because Tesla has so much more data with which to train their neural net, I wouldn't draw conclusions from Mobileye's attempts. I think you are overestimating the resolution required to safely drive a car. A computer has so many advantages over inconsistent humans it's not even funny. The neural net's cameras could probably be legally blind and still drive with a higher safety level than humans with normal vision. :cool:
 
No. Because Tesla has so much more data with which to train their neural net...

Are you serious?

Honestly, I don't know how much data each OEM is obtaining, or even how many units are being sold.

The incumbent, NXP, is supposedly far behind, except they already have Baidu software working, so their data for training could range from nil per year to millions of cars per year and I (we) wouldn't know.

Waymo is equally opaque, even more so; obviously they use a very small number of cars, but who knows how much data Google may be vampiring.

Mobileye is well represented across Japan and Europe, and just who owns the data for Tesla AP 1.0? One way or another, Mobileye will be assimilating data from various global manufacturers.

It is one thing to have the best data set, the one your own customers generate. But it is another thing to have the data set from your customers' customers.
 
Wrong. Any photojournalist will tell you they get as close as possible to chain-link fences, window screens, or rain-covered glass when it is necessary to photograph through such obstructions. Being very close provides the most detailed image with the least distortion. It is actually LIDAR that has the most trouble in heavy rain (it simply doesn't work).

That's actually an artifact of the way cameras are configured to focus on the closest object. A camera in a self-driving car, if placed back from the window, would have a focusing system configured to ignore anything in the near field (closer than the front bumper of the car), so that problem would not exist. (Secondarily, you can sometimes throw foreground objects enough out of focus that they don't matter, but this really isn't practical with a small-lens camera like the ones used for self-driving, with the possible exception of the narrow-field camera.)

The original poster is correct that having a tiny camera close to a window is a really terrible system in terms of rain handling, because a large raindrop can obliterate the camera's entire field of view, whereas if the camera were three feet back from the window, it would only obliterate a tiny fraction of the field of view. That's why camera redundancy is so critical.

That's also why the lack of redundancy in the current sensor suite (which has zero camera redundancy except in the forward-facing direction) doesn't bode well for handling T intersections in bad weather. So I'm pretty sure the current hardware can't quite reach L5 unless they add at least one more camera at each end of the front bumper facing outwards. That said, it should be possible to be L5 minus that single exception.
 
That's actually an artifact of the way cameras are configured to focus on the closest object. A camera in a self-driving car, if placed back from the window, would have a focusing system configured to ignore anything in the near field (closer than the front bumper of the car), so that problem would not exist. (Secondarily, you can sometimes throw foreground objects enough out of focus that they don't matter, but this really isn't practical with a small-lens camera like the ones used for self-driving, with the possible exception of the narrow-field camera.)

If you study the camera placement chosen by the Tesla engineers, you will note that all lenses are placed as close to the glass as the sloping glass and lens enclosure will allow. You will also notice that the area of glass the lenses "look" through has fine electric wires (like the back window, only much finer) to defrost and melt snow/ice. This "as close as possible" lens placement was a deliberate decision, because small-aperture lenses with small image sensors have a very large depth of field (meaning almost everything is in focus). The close placement is necessary to keep raindrops and electrical heating wires as far out of focus as possible. A larger setback from the window would only be used if the raindrops themselves were of interest. In other words, you have it exactly backward.
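For a sense of scale on that depth-of-field argument, here is a rough back-of-the-envelope sketch. The focal length, f-number, and sensor size are assumptions in the ballpark of small automotive cameras, not published Tesla specs, and it uses the standard thin-lens blur-circle approximation for a lens focused near infinity:

```python
# Defocus blur of something sitting on the glass, for a lens focused near infinity.
# Thin-lens approximation: blur-circle diameter on the sensor
#   ~= aperture diameter * focal length / distance to the obstruction.
# Every number here is an assumption for illustration, not a Tesla spec.

FOCAL_MM = 6.0          # assumed small automotive-style lens
F_NUMBER = 2.0          # assumed aperture
SENSOR_WIDTH_MM = 4.8   # assumed ~1/3-inch sensor
APERTURE_MM = FOCAL_MM / F_NUMBER

def blur_fraction(distance_mm):
    """Blur-circle diameter of a point at distance_mm, as a fraction of sensor width."""
    blur_mm = APERTURE_MM * FOCAL_MM / distance_mm
    return blur_mm / SENSOR_WIDTH_MM

# Glass a few centimeters in front of the lens vs. a hypothetical ~1 m setback:
for d_mm in (30.0, 1000.0):
    print(f"obstruction {d_mm/1000:.2f} m away: "
          f"smeared over ~{100 * blur_fraction(d_mm):.1f}% of the frame width")
```

With the lens right behind the glass, a drop gets smeared into a wide, low-contrast haze rather than imaged as a sharp blob; set the camera back a meter and the same drop stays small and nearly sharp. Whether the wide haze or the small sharp blob is the lesser evil is exactly what is being argued in this thread.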

The original poster is correct that having a tiny camera close to a window is a really terrible system in terms of rain handling, because a large raindrop can obliterate the camera's entire field of view, whereas if the camera were three feet back from the window, it would only obliterate a tiny fraction of the field of view.

This is incorrect. Raindrops cannot "obliterate the camera's entire field of view", but they can reduce the contrast of the image. The contrast can be boosted by software before the images are passed on to the FSD computer. I have a lot of video taken by the forward-facing camera in all types of rain, and the camera's entire field of view is never "obliterated", although the contrast of the images is reduced considerably (not to the point that it can't be boosted by software, though).
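To make "boosted by software" concrete, here is a minimal sketch of the kind of local contrast stretch that can be applied to a low-contrast rain frame before it reaches any detector. This is generic OpenCV CLAHE, not a claim about what Tesla's pipeline actually does, and the file name is made up:

```python
import cv2

# Load a frame captured through a rain-covered windshield (hypothetical file name).
frame = cv2.imread("dashcam_rain_frame.png")
assert frame is not None, "couldn't read the example frame"

# Work in LAB so only the lightness channel is stretched, not the colors.
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)

# CLAHE = clip-limited, local histogram equalization. It recovers local contrast
# lost to the low-contrast "haze" a wet windshield produces, without blowing out
# regions that are already bright the way a single global stretch would.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_boosted = clahe.apply(l)

boosted = cv2.cvtColor(cv2.merge((l_boosted, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("dashcam_rain_frame_boosted.png", boosted)
```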

So I'm pretty sure the current hardware can't quite reach L5 unless they add at least one more camera at each end of the front bumper facing outwards.

Unfortunately, your opinion that the current hardware is insufficient for Full Self Driving is misinformed by some basic misconceptions about the depth of field of miniature cameras and visual acuity through rain-covered glass. The only thing preventing the existing sensors from reaching L5 is the fact that the neural net is not mature enough.
 
If you study the camera placement chosen by the Tesla engineers, you will note that all lenses are placed as close to the glass as the sloping glass and lens enclosure will allow. You will also notice that the area of glass the lenses "look" through has fine electric wires (like the back window, only much finer) to defrost and melt snow/ice. This "as close as possible" lens placement was a deliberate decision, because small-aperture lenses with small image sensors have a very large depth of field (meaning almost everything is in focus). The close placement is necessary to keep raindrops and electrical heating wires as far out of focus as possible. A larger setback from the window would only be used if the raindrops themselves were of interest. In other words, you have it exactly backward.



This is incorrect. Raindrops cannot "obliterate the camera's entire field of view", but they can reduce the contrast of the image. The contrast can be boosted by software before the images are passed on to the FSD computer. I have a lot of video taken by the forward-facing camera in all types of rain, and the camera's entire field of view is never "obliterated", although the contrast of the images is reduced considerably (not to the point that it can't be boosted by software, though).



Unfortunately, your opinion that the current hardware is insufficient for Full Self Driving is misinformed by some basic misconceptions about the depth of field of miniature cameras and visual acuity through rain-covered glass. The only thing preventing the existing sensors from reaching L5 is the fact that the neural net is not mature enough.


If the lenses were close enough, you would be right, but that's not the case.
And as mentioned, vulnerability is another issue: hitting one bug can render the camera useless, while that event wouldn't cause much of an issue for a human because of the eye-to-windshield distance.

The heating wire is visible on the wide-angle camera. Droplets are visible on all cameras and cause false detections. There were a couple of nonexistent pedestrians and cyclists on the road (according to the NN) in the footage.


Some examples below. If you think there is no vision issue, that's fine. I have a different opinion.

Front wide angle:

truck invisible, NN thinks there are multiple cars:

[screenshots attached]



Rear camera is almost useless due to the droplets:

[screenshots attached]


Side camera thinks the drivable space reaches the top of the barrier:

[screenshot attached]



Front main camera:

no detection at all:

[screenshot attached]


Two trucks traveling in parallel became merged and turning


[screenshot attached]



nonexistent pedestrian

[screenshot attached]



Both front cameras and the radar think there is a car inside the truck on the right. (Nope, there is only one truck there.)

[screenshot attached]
 
If you study the camera placement chosen by the Tesla engineers, you will note that all lenses are placed as close to the glass as the sloping glass and lens enclosure will allow. You will also notice that the area of glass the lenses "look" through has fine electric wires (like the back window, only much finer) to defrost and melt snow/ice. This "as close as possible" lens placement was a deliberate decision, because small-aperture lenses with small image sensors have a very large depth of field (meaning almost everything is in focus). The close placement is necessary to keep raindrops and electrical heating wires as far out of focus as possible. A larger setback from the window would only be used if the raindrops themselves were of interest. In other words, you have it exactly backward.

The problem is, you're talking about the normal case. I'm talking about the worst case, which is an entirely different animal.

Those heating wires are negligible in size, and of a fixed size, so putting the camera close to them gets them sufficiently out of focus that they don't cause problems. That's not necessarily true for raindrops. A single raindrop can be up to 4 mm in size under normal circumstances, and in rare cases, several times that big. Even a 4 mm drop can easily create a circular splash of water that's half an inch wide, which covers essentially a Tesla camera's entire field of view when the lens sits right behind the glass. When that happens, it doesn't just reduce contrast. It distorts the image severely, such that you can't trust the position of objects seen through it, making the images basically useless.
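A quick bit of geometry puts numbers on the "entire field of view" claim. This is only a sketch with assumed values (a roughly half-inch splash and a ~50 degree horizontal field of view, which is not a confirmed camera spec), treating the splash as a simple occluder and ignoring defocus:

```python
import math

def fov_fraction(obstruction_m, distance_m, horizontal_fov_deg):
    """Fraction of the horizontal field of view subtended by an obstruction
    of the given diameter at the given distance from the lens."""
    angle_deg = 2 * math.degrees(math.atan(obstruction_m / (2 * distance_m)))
    return min(angle_deg / horizontal_fov_deg, 1.0)

SPLASH_M = 0.013   # ~half-inch splash from a large drop, per the post above
FOV_DEG = 50       # assumed horizontal FOV for a forward "main" camera

# Lens a few centimeters behind the glass vs. hypothetically ~3 feet back:
for d in (0.03, 0.9):
    pct = 100 * fov_fraction(SPLASH_M, d, FOV_DEG)
    print(f"glass {d:.2f} m from lens: splash spans ~{pct:.0f}% of the frame width")
```

Roughly half the frame width when the glass is a few centimeters from the lens, versus a couple of percent from three feet back, which is the asymmetry being described here.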

And then, you have the situation where water runs across a windshield in a stream. If your camera is behind that stream, you won't see anything of consequence from it. I'm not saying this happens often, in practice, but if there weren't multiple cameras in the front-facing direction, you'd be seriously screwed when it does. That's why there are multiple cameras.


The side-facing cameras don't have a backup, so there is a nonzero risk of a big droplet or a water run lodging itself right smack in the middle of the field of view once in a while. And over millions of miles driven, even a rare event still occurs often enough to matter, which is why having no camera redundancy in any direction other than forward is IMO not really good enough for an L5 system. (For that matter, it would still arguably be unacceptable even if we ignore rain and just consider the risk of camera electrical failure, sun glare at just the right angle, or any number of other issues.)

By contrast, if the camera is three feet from the window, this problem goes away. Yes, the raindrops are close enough to being in focus to be seen, but they're in focus for your eyes, too (which have a lens size even smaller than the Tesla cameras, I think). But it is a lot easier to ignore raindrops when they appear as a bunch of tiny distortion dots in the camera image than when they appear as a single, large-scale distortion that affects the entire image.

Note that I'm not saying Tesla should have put a camera in the middle of the car (though arguably it would have been a good idea as a backup). The main reason they put the cameras where they did is that if they put them in the middle of the car, the camera could be blinded by lights from inside the vehicle, blocked by passengers, etc. Their decision is probably the correct one, but it is a tradeoff.


This is incorrect. Raindrops cannot "obliterate the camera's entire field of view", but they can reduce the contrast of the image. The contrast can be boosted by software before the images are passed on to the FSD computer. I have a lot of video taken by the forward-facing camera in all types of rain, and the camera's entire field of view is never "obliterated", although the contrast of the images is reduced considerably (not to the point that it can't be boosted by software, though).

That depends highly on the droplet size. West coast rain is really easy to deal with. Try it with Tennessee rain, where the droplet size is much, much larger, and the volume much, much higher. In all the time since I moved to California in 1999, I can count the number of times I've seen something that I consider to be actual rain on one hand, and the number of times I saw what I would call a hard rain on one finger.

Of course, there does come a point at which you simply have to pull over to the side of the road, but I'm pretty sure that point will come much earlier for any camera system that relies on a lens that's right next to the glass than for a human whose lens is several feet away, for the reasons previously stated.
 
If the lenses were close enough, you would be right, but that's not the case.

There's no such thing as "close enough". Even without an extra layer of glass between the lens and the outside world, a single large water droplet can completely hose the view of the rear camera in a Tesla to the point of being useless, because a droplet is a substantial percentage of the total size of the camera lens.

The only way to guarantee that such a tiny lens can see meaningfully is for the water to hit a layer of glass several feet from the camera so that each droplet distorts only a small portion of the camera's field of view. Short of that, the only real alternative is redundancy.

Or you could put a DSLR-sized lens in the thing so that your depth of field is one seventh as deep, of course.
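For anyone wondering where a figure like "one seventh" might come from: depth of field is usually summarized by the hyperfocal distance, which scales as f^2 / (N * c), so a big-sensor lens can throw near-field junk like raindrops out of focus even from a distance. A sketch with assumed figures (none of these are measured Tesla or DSLR specs):

```python
# Hyperfocal distance H ~= f^2 / (N * c) + f. Focused at H, everything from
# roughly H/2 to infinity is acceptably sharp; a larger H means a shallower
# depth of field at any given focus distance. All figures below are assumed.

def hyperfocal_m(focal_mm, f_number, coc_mm):
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000.0

tiny = hyperfocal_m(focal_mm=6.0, f_number=2.0, coc_mm=0.004)   # small sensor, ~4 um circle of confusion
dslr = hyperfocal_m(focal_mm=43.0, f_number=2.0, coc_mm=0.030)  # full-frame "normal" lens, ~30 um CoC

print(f"tiny camera hyperfocal: ~{tiny:.1f} m")   # ~4.5 m: nearly everything past ~2.3 m is sharp
print(f"DSLR-style hyperfocal:  ~{dslr:.1f} m")   # ~30.9 m: far shallower depth of field
print(f"ratio: ~{dslr / tiny:.1f}x")              # ~6.8x, in the ballpark of the "one seventh" above
```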
 
There's no such thing as "close enough". Even without a layer of glass, a single large water droplet can completely hose the view of the rear camera in a Tesla to the point of being useless. The only way to guarantee that you can see meaningfully is for the water to hit a layer of glass several feet from the camera so that each droplet affects only a small portion of the camera's field of view. Short of that, the only real alternative is redundancy.

This flies in the face of my experience as a motorcyclist. I've ridden through torrential mountain downpours and my eyes are only a couple of inches from my visor. Hint: Motorcycle helmets don't have windshield wipers, lol! In 1978 I bought a motorcycle that came with a pair of aviator goggles with the glass lenses less than one inch from my pupils. These worked in the rain also. In fact, the rain would pummel your eyeballs if you had no protection. What do you think the Red Baron did when it rained? Rip off his goggles? LOL!

During a hard rain, as the drops hit the glass, the water spreads out in a sheeting action. This reduces contrast and resolution by making the view more blurry; it doesn't "obliterate" forward vision. The wipers speed up in an attempt to keep the windshield clear (and the cameras are right behind the wipers). If it gets bad enough, even human drivers need to pull over. Cameras right behind the glass will have an easier time of it, but an FSD car will still need to slow down in heavy rain, just as humans do. They will do this for the same two reasons humans do:

1) Stopping distances increase
2) Forward vision is reduced.

It would be ridiculous if an FSD car continued to barrel along at 70 mph while cars with human drivers were slowing down to 35 mph!
 
The NN just needs more training! There's no way to ever prove that the cameras aren't enough; that's the real Tesla advantage and the beauty of deep learning.

I know that was said partially tongue-in-cheek, but there's a lot of truth to it. Camera data from storm driving will be valuable for training the NN to be more proficient in poor visibility (the kind of rain in which LIDAR ceases to function).
 
This flies in the face of my experience as a motorcyclist. I've ridden through torrential mountain downpours and my eyes are only a couple of inches from my visor. Hint: Motorcycle helmets don't have windshield wipers, lol! In 1978 I bought a motorcycle that came with a pair of aviator goggles with the glass lenses less than one inch from my pupils.


The difference is that your eyes and the goggles/helmet are not in a fixed orientation. No matter what, you can turn your head a few degrees and keep your eyes pointed at the same spot, and you're instantly looking through a different part of the glass.
 
The difference is that your eyes and the goggles/helmet are not in a fixed orientation. No matter what, you can turn your head a few degrees and keep your eyes pointed at the same spot, and you're instantly looking through a different part of the glass.

Well, no matter what, you can actuate the wipers and you're instantly looking through cleared glass. There's more than one way to skin a cat. But I can see that wipers and cameras will never satisfy you in terms of being adequate for FSD.
 
If the lenses were close enough, you would be right, but that's not the case.
And as mentioned, vulnerability is another issue: hitting one bug can render the camera useless, while that event wouldn't cause much of an issue for a human because of the eye-to-windshield distance.

The heating wire is visible on the wide-angle camera. Droplets are visible on all cameras and cause false detections. There were a couple of nonexistent pedestrians and cyclists on the road (according to the NN) in the footage.


Some examples below. If you think there is no vision issue, that's fine. I have a different opinion.

Front wide angle:

truck invisible, NN thinks there are multiple cars:

[screenshots attached]


Rear camera is almost useless due to the droplets:

[screenshots attached]

Side camera thinks the drivable space reaches the top of the barrier:

[screenshot attached]


Front main camera:

no detection at all:

[screenshot attached]

Two trucks traveling in parallel became merged and turning


[screenshot attached]


nonexistent pedestrian

[screenshot attached]


Both front cameras and the radar think there is a car inside the truck on the right. (Nope, there is only one truck there.)

[screenshot attached]

These are great, interesting examples. But it should be noted that they are the output of a single-camera, single-image NN running on quarter-resolution input. It is not clear how a multiple-camera, multiple-frame, full-resolution NN with radar and sonar would perform on the same tasks. Maybe it would be as bad, maybe it would be much better. If I were to bet, it would be that it performs much better in these types of situations.
 
Well, no matter what, you can actuate the wipers and you're instantly looking through cleared glass. There's more than one way to skin a cat. But I can see that wipers and cameras will never satisfy you in terms of being adequate for FSD.

How do you do that at a 90-degree turn where the B-pillar camera is the only sensor seeing the approaching traffic?

Everyone agrees Tesla has at least some redundancy up front. Whether or not it is sufficient, it is at least easier to believe it might be. The surroundings of the car in other directions are another matter entirely, though.
 
How do you do that at a 90-degree turn where the B-pillar camera is the only sensor seeing the approaching traffic?

Everyone agrees Tesla has at least some redundancy up front. Whether or not it is sufficient, it is at least easier to believe it might be. The surroundings of the car in other directions are another matter entirely, though.

It should also be in the arc of the front fisheye, though I'm not positive there's enough resolution to be useful given the very wide angle.
 
It should also be in the arc of the front fisheye, though I'm not positive there's enough resolution to be useful given the very wide angle.

Nope, it probably won't be in that arc until it's quite close. Put your nose to the windshield and try to see far left... the fisheye won't help in T intersections.

At a 90-degree intersection, turning right, your car and the lane are also probably arcing away from the left, further diminishing what the front fisheye can see towards the left.

A single B-pillar camera with a heater but no wiper is all there is.

It is a similar story in reverse: backing out between cars means the very exposed single rear camera is all that can see rearwards, as the side cameras are blocked by the cars to the left and right. Ultrasonics won't help with fast-approaching cross-traffic in that situation either, though luckily speeds when reversing are usually (but not always) lower.

Again, corner radars could help in both scenarios but there are none.
 
Nope, it probably won't be in that arc until it's quite close. Put your nose to the windshield and try to see far left... the fisheye won't help in T intersections.

At a 90-degree intersection, turning right, your car and the lane are also probably arcing away from the left, further diminishing what the front fisheye can see towards the left.

A single B-pillar camera with a heater but no wiper is all there is.

It is a similar story in reverse: backing out between cars means the very exposed single rear camera is all that can see rearwards, as the side cameras are blocked by the cars to the left and right. Ultrasonics won't help with fast-approaching cross-traffic in that situation either, though luckily speeds when reversing are usually (but not always) lower.

Again, corner radars could help in both scenarios but there are none.

If you can get your nose to the windshield anywhere near the front camera block, you're much more flexible than I am.

The coverage should be the opposite of what you said: as you approach the intersection, the whole thing is in the fisheye, and the closer you get, the closer to the edge the critical areas become.

Rear cross-traffic is certainly a valid point; it would be great to have at least the camera washer GM has put on the Bolt and some other recent cars.
 
Without side camera wipers or side radar, I think a requirement of FSD will be a regular Rain-X coating on all lens coverings. That stuff is amazing when first applied, better for visibility than (but not a replacement for) wipers. It's also unrealistic to expect owners to do this with enough regularity... although they should. Even (especially?) human drivers. Whatever the solution, I do wonder how the Tesla Network anticipates maintaining ideal vehicle operating standards.
 