
Seeing the world in Autopilot V9 (part three?)

Temps come from self-reporting.

They do have some NVIDIA tools to measure utilization, and those worked before, but it didn't occur to me to use them on V9.

Since you have a Model 3, I am not sure what you mean by getting the new hardware option while it's cheap; the Model 3 already has the latest hardware (HW2.5). HW3 is not there yet, who knows for how long, and who knows what extra functionality it would allow, if any. Probably a safer bet to stay with what you have and just replace the whole car in due time? ;) As an additional benefit you'd get whatever other new stuff will be shipping.

Yeah, I'm talking about AP3. It's being put in cars soon. Just trying to get an idea how long before it becomes required and EAP essentially is a legacy product with only minor updates possible. Model Y is my next Tesla. I want my hatchback back...
 
The flaw with side radar is that it does not have the resolution to determine if a car it sees is in your lane or one over, or behind you... especially around curves. Solving that problem is not much different from the problems with forward radar — you still need a vision algorithm to assign cars to lanes, and one to correlate radar signatures with a vision-detected car in order to determine if it’s relevant or not to your lane change.

That hassle seems worth it for forward facing radar because it allows for smooth and low latency distance regulation. For side / rear, you mainly just need a yes/no answer for whether or not it’s safe to perform a maneuver.

The real sucky thing about not having rear radar is that AP2 users have had to suffer for nearly 2 years with no appreciable blind spot detection when a $25 sensor would’ve been an acceptable crutch.
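
To make that point concrete, here's a minimal sketch in Python of the association step described above: vision assigns detected cars to lanes, and each radar return is gated to the nearest vision detection so its lane relevance can be judged. The RadarReturn/VisionCar structures and the numbers are hypothetical illustrations, not anything from Tesla's actual stack.

```python
# Minimal sketch of radar-to-vision association for lane relevance.
import math
from dataclasses import dataclass

@dataclass
class RadarReturn:
    x: float          # longitudinal position, meters (vehicle frame)
    y: float          # lateral position, meters
    range_rate: float # closing speed, m/s

@dataclass
class VisionCar:
    x: float
    y: float
    lane_offset: int  # 0 = ego lane, -1 = left lane, +1 = right lane (from vision)

def associate(radar: RadarReturn, cars: list, gate_m: float = 2.5):
    """Return the vision-detected car nearest to the radar return,
    or None if nothing falls inside the association gate."""
    best, best_d = None, gate_m
    for car in cars:
        d = math.hypot(radar.x - car.x, radar.y - car.y)
        if d < best_d:
            best, best_d = car, d
    return best

# Example: one radar return, two vision-detected cars around a curve.
cars = [VisionCar(x=30.0, y=-3.6, lane_offset=-1), VisionCar(x=32.0, y=0.4, lane_offset=0)]
hit = RadarReturn(x=31.0, y=0.2, range_rate=-4.0)
match = associate(hit, cars)
print("radar return belongs to lane offset:", None if match is None else match.lane_offset)
```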
 

The ultrasonic sensors can be used for blind spot detection and to augment the machine vision results.
 
There's a very noticeable calibration difference between the two side rear-facing cameras. The left repeater clearly sees more of the car than the right. How much tolerance do the cameras/software have for those kinds of differences, I wonder? Is it something that's fairly easy to account for in software, or will Tesla need to bring cars in to slightly adjust camera calibration to make them more consistent?

Yes, they are definitely not symmetric. But look at the angle and position of the lane lines when on the highway: they are noticeably different, and they could potentially use something like this to calibrate for small differences in camera direction.

[Attached screenshot: upload_2018-10-23_20-49-52.png]
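
For what it's worth, here's a rough sketch of how a small per-camera pointing difference could be estimated from lane-line geometry on a straight highway, as speculated above. This is not Tesla's calibration method; the focal length and pixel values are made-up illustration numbers.

```python
# Estimate a camera's yaw error from the lane-line vanishing point.
import math

def yaw_from_vanishing_point(vp_x_px: float, cx_px: float, focal_px: float) -> float:
    """On a straight highway, parallel lane lines meet at a vanishing point.
    Its horizontal offset from the image center approximates the camera's yaw
    error relative to the direction of travel."""
    return math.degrees(math.atan2(vp_x_px - cx_px, focal_px))

# Hypothetical numbers: 1280-px-wide image (center at 640), focal length 1000 px.
left_repeater_yaw  = yaw_from_vanishing_point(vp_x_px=655.0, cx_px=640.0, focal_px=1000.0)
right_repeater_yaw = yaw_from_vanishing_point(vp_x_px=628.0, cx_px=640.0, focal_px=1000.0)
print(f"estimated yaw offsets: left {left_repeater_yaw:+.2f} deg, right {right_repeater_yaw:+.2f} deg")
```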
 
That's handled by the side camera when cars are mostly aligned in their lane. You actually don't see the car behind you unless it's a big truck or something.
We already know the back camera is problematic, with the additional issue that verygreen pointed out making it worse.

That means someone behind you approaching fast and then quickly changing lanes would be completely missed by the rearward cameras until it was too late. If you were attempting to change lanes at that time, it would be an automatic crash.

Funnily enough, Tesla completely misrepresented the FOV of their rearward cameras. Interestingly, Mobileye put their rearview camera inside the car, I guess to protect it from the elements.

[Image: cameras-side_rear@2x.png]


Hardware capable of L5?
 

Personally I'm of the belief that it needs to have rear corner radars plus the cameras that are already there.

The rear corner radars can take into account low visibility conditions, and cases where something might not get registered by the neural network.

99% of the time it would simply confirm what one of the cameras was already seeing, and perhaps assist in speed estimation.
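
Something like that "confirm and refine" role could look like the toy sketch below. The fuse_closing_speed helper and its interface are hypothetical, just to illustrate how a radar range rate would either confirm a camera track or stand in for it in low visibility.

```python
# Toy sketch: rear-corner radar confirming a camera track and refining closing speed.
from typing import Optional, Tuple

def fuse_closing_speed(cam_speed: Optional[float],
                       radar_speed: Optional[float]) -> Tuple[Optional[float], str]:
    """Prefer radar range rate when available (it measures Doppler directly);
    fall back to the camera's noisier frame-to-frame estimate otherwise."""
    if radar_speed is not None and cam_speed is not None:
        return radar_speed, "camera detection confirmed by radar"
    if radar_speed is not None:
        return radar_speed, "radar-only (low visibility or missed by the network)"
    if cam_speed is not None:
        return cam_speed, "camera-only, unconfirmed"
    return None, "no target"

print(fuse_closing_speed(cam_speed=-6.0, radar_speed=-4.8))   # typical case: confirm
print(fuse_closing_speed(cam_speed=None, radar_speed=-4.8))   # fog / missed detection
```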
 
Speculation: the front radar is used to validate vision-based distance estimation. Later it will be used only for redundancy. Change my mind. :)

I think radar’s distance estimation is worse than binocular stereo vision. What about training Tesla’s front-facing trinocular stereo vision to do distance estimation using lidar as the ground truth / training signal? Teslas have been spotted with lidar before.
 

Radar's distance accuracy is quite good, and constant with distance. Binocular vision has a depth error that scales with distance. Vision is quantitatively competitive only for objects within maybe 20 meters. But it doesn't matter, because the distance accuracy requirement for driving scales sub-linearly with distance, so either one is good enough. This might be different if all the cars were robots, but typical vehicle operation is gauged to the limits of human drivers, whose limitations are binocular vision, 300 ms reflexes, and kinetic energy that grows with the square of the vehicle's velocity. We design vehicles, highways, and rules around this set of boundaries, so that's what a self-driving vehicle's sensor capabilities have to match.
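
A quick back-of-the-envelope check of that scaling: for a stereo pair, Z = f·B/d, so a disparity error Δd produces a depth error of roughly Z²·Δd/(f·B), i.e. quadratic in distance, while radar range error stays roughly constant. The baseline, focal length, and error figures below are illustrative guesses, not Tesla camera specs, but they land the crossover right around the ~20 m mark mentioned above.

```python
# Stereo depth error grows with the square of distance; radar error stays ~constant.

def stereo_depth_error(z_m: float, baseline_m: float, focal_px: float, disparity_err_px: float) -> float:
    """For a stereo pair, Z = f*B/d, so a disparity error of delta_d gives
    approximately delta_Z = Z^2 * delta_d / (f * B)."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

RADAR_RANGE_ERR_M = 0.5  # typical automotive radar, roughly constant with range

for z in (10, 20, 50, 100):
    stereo_err = stereo_depth_error(z, baseline_m=0.15, focal_px=1000.0, disparity_err_px=0.5)
    print(f"{z:>4} m: stereo ~{stereo_err:5.2f} m vs radar ~{RADAR_RANGE_ERR_M} m")
```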
 

Stereo vision is actually quite bad, which is why no one uses it; it hasn't improved much since the DARPA Grand Challenge days.
 
I'm ready.
I will refute every single hype post this update generates with indisputable evidence, even if it kills me!

New Autopilot can do the following:
  • Handle a lane split by itself
  • Handle exits without a dedicated lane
  • Handle exits with their own lane (by getting you into the exit lane around 0.5 to 1.9 miles before reaching it)
  • Handle split exits where one lane of the exit is fully dedicated and the other one is shared/splits
  • Suggest when to do lane changes on the highway for a "faster" lane
Any papers / sources you can recommend on these topics?
This is not exactly what jimmy_d was talking about but is along the lines of your previous comment.

https://papers.nips.cc/paper/5539-d...le-image-using-a-multi-scale-deep-network.pdf

I happened to have this tab open. You should be able to find more by looking up citations of this paper (Google Scholar should be able to help here) and finding newer papers that improve on it or comment on its performance.
They take a mixture of depth data and camera images to train a network that, given an image, can estimate depth.

Here is one where they train it with stereo data (more relevant to your trifocal camera question):
mrharicot/monodepth (code for the next paper)
[1609.03677] Unsupervised Monocular Depth Estimation with Left-Right Consistency
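
To tie it back to the lidar-as-ground-truth question: the supervised variant (the NIPS paper linked above) boils down to regressing depth at the pixels where a lidar return exists. Here's a minimal PyTorch sketch of one such training step; TinyDepthNet is a stand-in toy model, not any real architecture, and the tensors are random placeholders for a camera frame and a projected lidar depth map.

```python
# Minimal supervised depth training step with sparse lidar ground truth.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Softplus(),  # depth must be positive
        )

    def forward(self, img):
        return self.net(img)

def masked_l1_loss(pred, lidar_depth, valid_mask):
    """Lidar returns are sparse, so the loss is only taken where a return exists."""
    return (valid_mask * (pred - lidar_depth).abs()).sum() / valid_mask.sum().clamp(min=1)

model = TinyDepthNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One fake training step with random tensors standing in for a camera frame
# and a projected lidar depth map.
img = torch.rand(1, 3, 96, 320)
lidar = torch.rand(1, 1, 96, 320) * 80.0
mask = (torch.rand(1, 1, 96, 320) > 0.95).float()  # ~5% of pixels have a lidar return

loss = masked_l1_loss(model(img), lidar, mask)
loss.backward()
opt.step()
print("training loss:", float(loss))
```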
 
The side camera still shows the distance of side objects (non-stereo vision).
Moreover, the two right-side cameras show different distances for the same object in the picture below (66 m vs 55 m).

I would say the distance isn't accurate at all in this circumstance, just a rough number.
Say you can simply define a 30-pixel-wide object as 60 meters.

[Attached screenshot: 1029.png (side camera views showing 66 m vs 55 m distance estimates)]
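
That "pixel width → rough distance" idea falls out of the pinhole camera model: distance ≈ focal_length_px × real_width_m / width_px. With an assumed ~1000 px focal length and ~1.8 m car width (both guesses, not known camera parameters), a 30-pixel-wide car comes out at about 60 m, and a few pixels of error in the measured width is enough to explain a 66 m vs 55 m disagreement.

```python
# Pinhole-model distance from apparent width, with assumed (not measured) parameters.

def distance_from_width(width_px: float, real_width_m: float = 1.8, focal_px: float = 1000.0) -> float:
    return focal_px * real_width_m / width_px

for px in (27, 30, 33):
    print(f"{px} px wide -> {distance_from_width(px):.1f} m")
```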
 