
What Tesla autopilot sees at night

Just got some footage in fog. Not super dense (the thick stuff starts at about 1:00), but still noticeable.



Here's a request; maybe in the future you can do something like this. I remember something like it being done when the AP2 demo first came out. Basically, record from the center facing the windshield, with the steering wheel in full view like in the vid, then superimpose the NN output from the main camera onto the entire windshield.

How does this help?
We can watch the steering wheel and compare it to the outputs.
That way, we can see which false positives, false negatives, and inaccuracies from the NN actually affect the steering actuation during Autopilot operation.

Like below but obviously without the gray background.
Wouldn't mind some vids dedicated to this, with only the main camera on top of the windshield and the steering wheel visible.
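
Roughly, I imagine something like the OpenCV sketch below; the file names, windshield region, and output size are made up for illustration, and it assumes the two recordings are already frame-synced:

# Hypothetical sketch: blend a rendered NN-output video onto the windshield
# area of a cabin-facing recording. All paths and coordinates are placeholders.
import cv2

cabin = cv2.VideoCapture("cabin_view.mp4")            # camera facing windshield + wheel
overlay = cv2.VideoCapture("main_cam_nn_output.mp4")  # NN output rendered per frame

x, y, w, h = 200, 80, 880, 360                         # windshield region, hand-picked once
out = cv2.VideoWriter("composite.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (1280, 720))

while True:
    ok1, cab = cabin.read()
    ok2, nn = overlay.read()
    if not (ok1 and ok2):
        break
    cab = cv2.resize(cab, (1280, 720))
    nn = cv2.resize(nn, (w, h))
    # Blend the NN output into the windshield area so both stay visible.
    cab[y:y + h, x:x + w] = cv2.addWeighted(cab[y:y + h, x:x + w], 0.5, nn, 0.5, 0)
    out.write(cab)

out.release()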
 
Is the display of steering angle not enough? (the steering wheel icon in the bottom right corner of the main camera view) When the background is blue, the car is doing the steering; when it's gray, the human driver is. I guess we could add a numerical angle readout too if needed.

Also, don't forget Autopilot wiggles the steering wheel all the time, trying to determine whether you are holding on to it or not.
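
If a bigger or clearer indicator were ever needed, it could be drawn with something like the toy sketch below; the position, colors, and wherever the angle and AP-engaged flag actually come from are all placeholders:

# Hypothetical sketch: draw a steering indicator rotated by the reported angle,
# blue background when AP is steering, gray when the driver is.
import cv2
import numpy as np

def draw_steering_icon(frame, angle_deg, ap_steering, center=(1200, 680), radius=40):
    bg = (200, 120, 0) if ap_steering else (128, 128, 128)    # BGR: blue-ish vs gray
    cv2.circle(frame, center, radius + 6, bg, -1)              # background disc
    cv2.circle(frame, center, radius, (255, 255, 255), 3)      # wheel rim
    # Spoke showing the current angle (0 deg = wheel straight, spoke pointing up).
    a = np.deg2rad(angle_deg - 90)
    tip = (int(center[0] + radius * np.cos(a)), int(center[1] + radius * np.sin(a)))
    cv2.line(frame, center, tip, (255, 255, 255), 3)
    # Optional numeric readout under the icon.
    cv2.putText(frame, f"{angle_deg:+.0f} deg", (center[0] - 38, center[1] + radius + 24),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
    return frame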
 

But that's kind of too small an icon even if you make it dynamic. I also sort of prefer a visual representation of the steering angle over a numeric one; it's easier to grasp without overthinking. But yeah, I know it's hard to show 8 views plus additional info.
 
It is already dynamic; take a look at the Tail of the Dragon footage (where the movement is kind of big). It is small, but I guess it could be made bigger; the problem is that if it's too big, it's going to obstruct other things. It would still be much better than overlaying the cabin video, because that would require super precise sync to an external camera too, and that's kind of hard.
 
BTW, there was a question like a year ago about whether Tesla's camera coverage is enough for backing out of a parking spot with tall cars parked next to you. I forget who was asking because it was so long ago.

Well, I don't have this replicated 100%, but I think it's close enough on one side to give us an idea. Perhaps it might be enough if the car backs out super slowly, giving the other drivers a chance to slow down and let it out, but if they don't cooperate, the Tesla would of course still be at fault for any accident.

 
OK, got some night rain footage (the rain was not very strong). Interesting that the wide dynamic range of the cams (which can't really be appreciated in the video itself due to all the dynamic-range-cutting compression) actually works: look at the oncoming traffic where the headlights all merge together, yet there are still separate bounding boxes. Also, at 4:27 there's a truck hauling a trailer in the right lane with no illumination at all. The driver noticed it much, much later (it's easier to spot in the video thanks to the green path thingie, otherwise it really did blend in), yet AP highlights it right away once the car behind it moves away.


Also, the missing backup cam is not by choice. It just did not work at all that day, and here we were, fully under the impression that 18.44 had worked around whatever sync issues they had.
 
I wonder how well it can see cars without their headlights on?

Recently there have been a bunch of cases where people have been driving without their lights at night, to the point where at least once I was really worried about changing lanes in front of one of them because I couldn't see the car clearly (a black car).

For some reason I didn't think of looking at the 360 degree visualization to see what it showed.

My guess is it saw the car better than I could.

Oh, and I have been keeping track of the rear camera. So far there have been zero failures in about a week.

A guy with his headlights off came close to T-boning me the other day. I was going to turn left onto a main street from my side street at night in my neighborhood. There are no street lights, and once I saw the visible cars pass, I had my foot on the accelerator and was about to press it to pull out and make the left turn when I caught a very faint silhouette and instantly switched to the brake instead. About a second later, a black car with its lights off zoomed by me. If I hadn't seen the silhouette at the last moment, there's a high likelihood he would have T-boned me. I thanked God I didn't opt for a darker tint for my front side windows and cursed the moron for driving in the dark with his lights off. I have no idea how he was actually driving like that.
 
The latest night time video is kind of awe-inspiring. The dynamic range of the cameras is amazing.

How does the dynamic range of the cameras compare to the dynamic range of your human eyes in that situation?

The green carpet, lane line interpolations, and 3D bounding boxes look awesome. I was just watching casually, but I didn't notice any errors. Seeing the software in action is so cool.
 

The dynamic range of the human eye is about 20 stops (2^20:1), although without adjusting pupil size it's only about 10 stops. A modern camera sensor has up to about 15 stops of native dynamic range and can be "pushed" many more stops electronically or digitally. Both single-scene dynamic range and general dynamic range (each can be more relevant than the other depending on the driving situation) are easily won by the camera now.
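
Just to put those stop figures in perspective (each stop is a doubling of light, so N stops means a 2^N:1 contrast ratio), a quick back-of-the-envelope:

# Back-of-the-envelope conversion of the stop figures mentioned above.
for label, stops in [("eye, fixed pupil", 10), ("camera sensor, native", 15), ("eye, with adaptation", 20)]:
    print(f"{label}: {stops} stops = {2 ** stops:,}:1")
# eye, fixed pupil: 10 stops = 1,024:1
# camera sensor, native: 15 stops = 32,768:1
# eye, with adaptation: 20 stops = 1,048,576:1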
 
Well, there are different measures; some don't depend on the mile but rather on the number of objects around the car. E.g., see that Tokyo footage - the objects at times disappear because I assumed querying them once a minute was OK, but apparently not when you have too many of them - so it really depends on the situation here. It could be 8M/minute and it could be way less.
GPS metadata is a relatively steady stream, as is CAN data, but I am too lazy to go and recalculate this bit into those metrics (they're time-based, I'm sure).

It does not really matter, because they just have a fixed rolling buffer for everything (separate buffers, of course) and just dump the contents to the mothership if a trigger asked for it; otherwise it just vanishes. So I think the most I saw was around 160 seconds of GPS data and around 100 seconds of CAN data. Object data could be as little as 15 seconds with the available buffers in v9 (more cameras detecting objects = more objects).
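
Conceptually it's just a set of fixed-size rolling buffers that only get dumped when a trigger fires; here's a minimal sketch of that idea, with the rates and durations being rough placeholders loosely based on the numbers above:

# Hypothetical sketch of trigger-dumped rolling buffers; not Tesla's actual code.
import time
from collections import deque

class RollingBuffer:
    def __init__(self, seconds, rate_hz):
        # Fixed capacity = retention window * sample rate; old samples fall off the end.
        self.buf = deque(maxlen=int(seconds * rate_hz))

    def push(self, sample):
        self.buf.append((time.time(), sample))

    def dump(self):
        # Only called when a trigger asks for a snapshot; otherwise data just vanishes.
        return list(self.buf)

gps_buf = RollingBuffer(seconds=160, rate_hz=1)    # ~160 s of GPS fixes
can_buf = RollingBuffer(seconds=100, rate_hz=50)   # ~100 s of CAN frames
obj_buf = RollingBuffer(seconds=15, rate_hz=30)    # object data fills its buffer fastest

def on_trigger(upload):
    # Ship the current snapshots to the mothership.
    upload({"gps": gps_buf.dump(), "can": can_buf.dump(), "objects": obj_buf.dump()})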