
Seeing the world in Autopilot V9 (part three?)

So cool! Thank you for the time and energy it took to put this together, @verygreen.

It’s just damn interesting stuff to those of us who aren’t knowledgeable about this tech. I won’t get into the debate on whether it will all work someday (there seem to be plenty of warriors on that front already) but it’s really cool to watch and learn!
 
Thanks for the vid! Very interesting to see. Just looking at the cameras, I can see why Tesla doesn't provide a surround-view camera for parking. The side rear-facing cameras do see the side of the car and the wheels, but I'm not sure how much overlap there would be with the rear-facing camera, if any, so stitching the images together into a top-down view of the rear of the car probably won't work.

But those side cameras would still be very useful in parallel parking situations for seeing the curb and its relation to the rear wheels, IMO. I'd love it if Tesla gave us the option to bring that view up alongside the backup camera when parallel parking, so I could just look at the screen instead of flicking my eyes between the backup camera and the side mirrors.
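Out of curiosity about what that stitching would actually involve: here's a minimal sketch of warping one camera's view onto a common top-down canvas with OpenCV. The homography points and file name below are placeholders, not real calibration; a working surround view would need a calibrated ground-plane homography per camera, plus enough overlap between the repeaters and the rear camera to blend the seams.

```python
import cv2
import numpy as np

def warp_to_topdown(frame, src_pts, dst_pts, out_size=(400, 600)):
    """Warp a single camera frame onto a common top-down (bird's-eye) canvas.

    src_pts: four pixel locations of known ground-plane points in the camera image.
    dst_pts: where those same ground points land in the top-down canvas.
    Both are placeholders here; real values would come from per-camera calibration.
    """
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(frame, H, out_size)

# Hypothetical calibration for the left repeater (values are illustrative only).
left_repeater = cv2.imread("left_repeater.png")
topdown_left = warp_to_topdown(
    left_repeater,
    src_pts=[(120, 700), (1140, 700), (420, 420), (860, 420)],
    dst_pts=[(40, 560), (360, 560), (40, 40), (360, 40)],
)

# A surround view would overlay each warped camera on one canvas; without
# overlap between the repeaters and the rear camera there is nothing to
# blend across the seam, which is the problem described above.
```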
 
There's a very noticeable calibration difference between the two side rear-facing cameras. The left repeater clearly sees more of the car than the right. How much tolerance do the cameras/software have for those kinds of differences, I wonder? Is it something that's fairly easy to account for in software, or will Tesla need to bring cars in to slightly adjust camera calibration to make them more consistent?
 
Excellent work! Seems that the pillars have issues with drivable space determination which we don't see in the other cams.
This small detail actually provides us curious outsider programmers with a great amount of info about the current state of the software.

Currently v9 clearly has a much greater contextual understanding of a camera's content than v8, but only within a single camera. This detail demonstrates that Tesla has yet to stitch all the data together into a contextual picture of the overall environment around the car, something like: "the main camera tells us we are here within this drivable area, with a barrier (curb) to the right, hence the area that looks drivable to the B-pillar camera is not actually drivable because it's separated from us by the curb visible to the main camera".

Well... actually, technically it could still be implemented later in the pipeline, after the stage where the data overlay is drawn onto the video.
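Just to make that stitching idea concrete, something like the sketch below is what I'd imagine for fusing per-camera drivable-space masks once they're projected into a shared top-down grid. The camera names, grid values, and curb layout are all invented for illustration; none of this comes from the actual firmware.

```python
from collections import deque
import numpy as np

# Shared top-down grid around the car: True = looks drivable to some camera.
# Per-camera masks are assumed to already be projected into this common frame;
# every shape and index below is a made-up illustration, not a firmware value.
GRID = (200, 200)                       # e.g. 20 m x 20 m at 10 cm cells
CAR_CELL = (100, 100)                   # car at the center of the grid

main_drivable      = np.zeros(GRID, dtype=bool)
b_pillar_drivable  = np.zeros(GRID, dtype=bool)
main_curb          = np.zeros(GRID, dtype=bool)

main_drivable[100:200, 90:110]      = True   # lane the car is in
b_pillar_drivable[100:200, 112:140] = True   # area right of the curb that "looks" drivable
main_curb[100:200, 110:112]         = True   # curb seen only by the main camera

def fuse_drivable(per_camera_masks, barrier_mask, seed):
    """Keep only drivable cells actually reachable from the car without
    crossing a barrier, so space on the far side of a curb is excluded even
    if some camera labels it drivable."""
    candidate = np.zeros(GRID, dtype=bool)
    for mask in per_camera_masks:
        candidate |= mask
    candidate &= ~barrier_mask

    reachable = np.zeros(GRID, dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < GRID[0] and 0 <= c < GRID[1]):
            continue
        if reachable[r, c] or not candidate[r, c]:
            continue
        reachable[r, c] = True
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return reachable

fused = fuse_drivable([main_drivable, b_pillar_drivable], main_curb, CAR_CELL)
# The B-pillar area beyond the curb drops out, because nothing connects it to the car.
```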

Great work guys, nice to get insight! :)
 
I’d be interested in seeing a vid showing the repeaters combined with the rear view camera, to see how much overlap there is, if any, but I get why you wouldn’t include it in these vids if the camera is finicky and you can’t pull much AP data from it.
Just go to the ap2.0 camera thread that has all 8 camera pictures from the same scene to compare? There are plenty of samples there that work for this purpose.

I might attempt a backup-camera recording again eventually once it appears the operations are more stable.
 
Any idea on the processing pipeline downstream of the NN?

Recall from jimmy_d's post that they are passing features from the vision processing to downstream neural networks.

Wondering if they are running some kind of SLAM-type algorithm on vision features for 3D environment reconstruction... there will need to be some kind of environment/search space constructed around the car for path planning (auto lane changes) to take place.
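Something like the toy sketch below is the shape of search space I have in mind: triangulated vision features dropped into a 2D occupancy grid around the car, which a planner could then query for an auto lane change. All the names, distances, and the corridor check are made up for illustration; it's a guess at the kind of data structure needed, not a claim about what Tesla actually does.

```python
import numpy as np

# Minimal occupancy-style search space built from hypothetical triangulated
# vision features (x forward, y left, in metres relative to the car).
CELL = 0.5                                   # 0.5 m grid cells
X_RANGE, Y_RANGE = (0.0, 60.0), (-8.0, 8.0)
shape = (int((X_RANGE[1] - X_RANGE[0]) / CELL),
         int((Y_RANGE[1] - Y_RANGE[0]) / CELL))
occupancy = np.zeros(shape, dtype=np.uint8)  # 0 = free, 1 = occupied

def mark_features(points_xy):
    """Drop each triangulated feature point into its grid cell."""
    for x, y in points_xy:
        i = int((x - X_RANGE[0]) / CELL)
        j = int((y - Y_RANGE[0]) / CELL)
        if 0 <= i < shape[0] and 0 <= j < shape[1]:
            occupancy[i, j] = 1

# Fake obstacles: a car ahead in our lane and another in the left lane.
mark_features([(25.0 + dx, 0.0) for dx in np.linspace(0, 4, 10)])
mark_features([(28.0 + dx, 3.5) for dx in np.linspace(0, 4, 10)])

def lane_change_clear(target_lane_y, ahead=35.0, behind=0.0, half_width=1.5):
    """Check whether the corridor into the target lane is free of occupied cells."""
    i0 = int((behind - X_RANGE[0]) / CELL)
    i1 = int((ahead - X_RANGE[0]) / CELL)
    j0 = int((target_lane_y - half_width - Y_RANGE[0]) / CELL)
    j1 = int((target_lane_y + half_width - Y_RANGE[0]) / CELL)
    return not occupancy[i0:i1, j0:j1].any()

print(lane_change_clear(3.5))    # False: the fake car at ~28-32 m blocks the left lane
print(lane_change_clear(-3.5))   # True: nothing occupies the right-lane corridor
```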
 
rear radar
 
I did not take any specific measurements, but the temps are like 10C higher on the unit I think, so it is utilized more.

Measured with infrared externally or using nvidia-smi? Should be able to get actual GPU utilization if using nvidia-smi, or is that mostly broken on the APE?

Edit: basically seeing if I want to try to fork over for the new hardware option and get it while it's cheap... :D
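For what it's worth, on a system where nvidia-smi is present and can talk to the driver, utilization and temperature can be polled along these lines; whether that actually works on the APE is exactly what I'm asking. The flags are standard nvidia-smi query options, nothing APE-specific:

```python
import subprocess

def gpu_stats():
    """Poll GPU utilization (%) and temperature (C) via nvidia-smi.

    Assumes a standard nvidia-smi binary is on PATH and can reach the driver;
    on the APE that may well not be the case.
    """
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    util, temp = out.strip().splitlines()[0].split(", ")
    return int(util), int(temp)

if __name__ == "__main__":
    util, temp = gpu_stats()
    print(f"GPU utilization: {util}%  temperature: {temp} C")
```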
 
Measured with infrared externally or using nvidia-smi? Should be able to get actual GPU utilization if using nvidia-smi, or is that mostly broken on the APE?

Edit: basically seeing if I want to try to fork over for the new hardware option and get it while it's cheap... :D

Tesla have already said new HW is necessary for FSD but not EAP. Probably tells you all you need to know in terms of HW resources. The end is likely close for what can be achieved with the current devices, and I guess they are hoping to fit it all in there for EAP at least, but that will be that.
 
Tesla have already said new HW is necessary for FSD but not EAP. Probably tells you all you need to know in terms of HW resources. The end is likely close for what can be achieved with the current devices, and I guess they are hoping to fit it all in there for EAP at least, but that will be that.

Utilization would be more indicative of the timeline for needing updated hardware, at which point EAP essentially becomes legacy (i.e. imminent vs. years from now).
 
Measured with infrared externally or using nvidia-smi? Should be able to get actual GPU utilization if using nvidia-smi, or is that mostly broken on the APE?

Edit: basically seeing if I want to try to fork over for the new hardware option and get it while it's cheap... :D
Temps are from the unit's self-reporting.

They do have some nvidia tools to measure utilization, and they worked before, but it did not occur to me to use them on v9.

Since you have a Model 3, I am not sure what you mean by getting the new hardware option while it's cheap; the Model 3 already has the latest hardware (hw2.5). hw3 is not there yet for who knows how long, and who knows what extra functionality it would allow, if any. Probably a safer bet to stay with what you have and just replace the whole car in due time? ;) As an additional benefit you'd get whatever other new stuff will be shipping.