> Comment from @verygreen suggesting the vision-only stuff is training on the main and fisheye cams (no mention of the narrow forward cam)

That's just speculation on his part. Why would they not use the higher-res camera data?
> Isn't Tesla anyway creating one single 360 degree view and using that for everything? At least, that's where they are headed.

Yes, but which cameras are they using to build that 360 degree view? Green's implication is they don't use the narrow view camera. From what I've found so far, the narrow view camera was implemented in HW 2.5.
> Yes, but which cameras are they using to build that 360 degree view? Green's implication is they don't use the narrow view camera. From what I've found so far, the narrow view camera was implemented in HW 2.5.

What the hackers can figure out is just the tip of the iceberg. I'd not take what they say as the absolute truth.
> That's just speculation on his part. Why would they not use the higher-res camera data?

My theory is it would be more costly (processing power + time), and not useful for near-range depth mapping. It's probably also harder to create binocular vision using the narrow camera, since it looks so far out. As humans, we don't really perceive much depth at a distance.
> My theory is it would be more costly (processing power + time), and not useful for near-range depth mapping. It's probably also harder to create binocular vision using the narrow camera, since it looks so far out. As humans, we don't really perceive much depth at a distance.

In their description of the Autopilot hardware, Tesla specifically says the narrow view camera is good for high speeds.
> In their description of the Autopilot hardware, Tesla specifically says the narrow view camera is good for high speeds.

It can see objects further away at high speeds, but it cannot easily determine depth. I'd guess this is where radar really helped... there were two sources of truth, and the car used these two sources to decide whether to brake or not. Stuck with long-range monovision, you don't have that advantage.
> It can see objects further away at high speeds, but it cannot easily determine depth. I'd guess this is where radar really helped... there were two sources of truth, and the car used these two sources to decide whether to brake or not. Stuck with long-range monovision, you don't have that advantage.

They are determining speed and distance by the change in the images between subsequent frames. The more detail there is the better, and the more accurate it is at distance. They say that vision is more accurate than the radar. Radar only added noise.
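To put rough numbers on the frame-to-frame idea, here's a toy sketch of how a change in apparent size maps to range and closing speed under a simple pinhole-camera model. The focal length, vehicle width, and frame rate are invented values for illustration, not anything from Tesla's actual stack:

```python
# Rough sketch: reading distance and closing speed from the change in an
# object's apparent size between video frames (monocular). All constants
# are assumptions for illustration.

FOCAL_PX = 1400.0      # assumed camera focal length, in pixels
CAR_WIDTH_M = 1.8      # assumed real-world width of the lead vehicle
FRAME_DT = 1 / 36      # assumed frame interval (36 fps)

def distance_m(bbox_width_px: float) -> float:
    """Pinhole model: apparent size shrinks linearly with distance."""
    return FOCAL_PX * CAR_WIDTH_M / bbox_width_px

# Detected bounding-box widths in two consecutive frames:
w1, w2 = 50.0, 51.0                    # the box is growing -> we are closing
z1, z2 = distance_m(w1), distance_m(w2)
closing_speed = (z1 - z2) / FRAME_DT   # m/s

# Time-to-contact needs no assumed size at all, only the expansion rate:
ttc = w1 * FRAME_DT / (w2 - w1)        # seconds

print(f"range {z2:.1f} m, closing at {closing_speed:.1f} m/s, TTC {ttc:.1f} s")
```

This also illustrates the "more detail is better" point: at 50 px across, a single pixel of measurement error shifts the range estimate by roughly a metre, so the more pixels you can put on a distant target, the better the estimate.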
> They are determining speed and distance by the change in the images between subsequent frames. The more detail there is the better, and the more accurate it is at distance. They say that vision is more accurate than the radar. Radar only added noise.

Radar adds noise, but is very useful as another data point. It's not an accident that radar + vision has faced less overall phantom braking.
The accuracy of the narrow view camera is certainly not less than that of the main camera at a given distance.
> Radar adds noise, but is very useful as another data point. It's not an accident that radar + vision has faced less overall phantom braking.

You are at odds with Tesla on the benefits of radar.
All I'm saying is that at a distance, with Tesla Vision, there is one source of truth: a single eye, which cannot judge depth "well" at a distance but can at closer ranges due to binocular vision and similar depths of field. You can do depth estimation with one eye, but it's a lot more work and more prone to error. This is a human thing, but it holds true for computing as well.
Related article: "Instead of bifocals, some try monovision. But depth perception can suffer, Penn scientist says." (www.inquirer.com)
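The binocular-at-close-range point can be put in numbers. A quick sketch of stereo triangulation, with an assumed focal length and camera baseline (not Tesla's real geometry), shows why two overlapping cameras give easy depth up close but degrade rapidly with distance:

```python
# Sketch of why two overlapping cameras give easy depth up close but not
# far away. Focal length and baseline are assumed values for illustration.

FOCAL_PX = 1400.0   # assumed focal length, in pixels
BASELINE_M = 0.3    # assumed spacing between the two cameras

def stereo_depth_m(disparity_px: float) -> float:
    """Triangulation: depth is inversely proportional to pixel disparity."""
    return FOCAL_PX * BASELINE_M / disparity_px

for z in (5, 20, 80, 200):
    d = FOCAL_PX * BASELINE_M / z   # disparity an object at depth z produces
    # How far does a half-pixel matching error move the depth estimate?
    err = abs(stereo_depth_m(d - 0.5) - z)
    print(f"{z:4d} m -> disparity {d:6.2f} px, +/-0.5 px error ~ {err:5.1f} m")
```

At 5 m a half-pixel error barely matters; at 200 m the disparity is about two pixels, and the same half-pixel error swings the estimate by tens of metres, which matches the intuition that binocular depth only works at close range.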
> Radar adds noise, but is very useful as another data point. It's not an accident that radar + vision has faced less overall phantom braking.

Wasn't radar basically just forward-looking? FSD needs to figure out the distance and speed of objects to the side - esp. for unprotected turns...
> You are at odds with Tesla on the benefits of radar.

Oddly convenient that the removal of radar and the push for pure vision came at a time when there was a radar shortage. Meanwhile, at that time, S/X cars continued to get radar installed.
> Oddly convenient that the removal of radar and the push for pure vision came at a time when there was a radar shortage. Meanwhile, at that time, S/X cars continued to get radar installed.

I don't know that we can say with certainty that it is the lack of radar that is the cause of phantom braking in VO (vision-only). FSD with radar has its issues with false targets causing phantom braking. There could be other changes implemented in VO vs. V+radar. I thought 10.4 was better than either 10.3 or 10.5 in this respect, so there is something they can tune to mitigate phantom braking.
Tesla is choosing a novel approach, but the benefits of radar are well known, and in service today in the form of Waymo and Cruise. Yes, geofenced, limited use cases, etc. but they have a real robotaxi - Tesla does not.
I do think there's merit to Tesla's vision-only approach. Others, like Light, have also started looking at this, using multiple cameras to create highly accurate depth maps. Tesla will get much better here very fast, but having had radar taken away from me for FSD, I can definitely say we have regressed.
> Tesla is choosing a novel approach, but the benefits of radar are well known, and in service today in the form of Waymo and Cruise. Yes, geofenced, limited use cases, etc. but they have a real robotaxi - Tesla does not.

But they are extremely geofenced, expensive non-consumer vehicles. Tesla is not. We have been talking about this for years, nothing new.
> Wasn't radar basically just forward-looking? FSD needs to figure out the distance and speed of objects to the side - esp. for unprotected turns...

The more points of truth that exist for the vehicle to make a decision, the more confident it will be.
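That intuition has a textbook form: fusing two independent noisy measurements by inverse-variance weighting always yields an estimate at least as confident as the best single sensor. A toy sketch with invented numbers:

```python
# Toy illustration of "more points of truth -> more confidence": fusing two
# independent range measurements by inverse-variance weighting (the standard
# static least-squares combination). All numbers are invented.

def fuse(z1: float, var1: float, z2: float, var2: float):
    """Optimal linear fusion of two independent noisy measurements."""
    w1, w2 = 1 / var1, 1 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    return z, 1 / (w1 + w2)   # fused estimate and its (smaller) variance

vision = (52.0, 9.0)   # vision says 52 m, variance 9 m^2 (sigma = 3 m)
radar = (50.0, 1.0)    # radar says 50 m, variance 1 m^2 (sigma = 1 m)

z, var = fuse(*vision, *radar)
print(f"fused range {z:.1f} m, sigma {var ** 0.5:.2f} m")
# Fused sigma (~0.95 m) beats either sensor alone. Adding an independent
# data point can only tighten the estimate, provided the sensors really
# are independent and their error models are honest.
```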
BTW, I do think radar + vision being better is an accident of history. If the situation were reversed, i.e. they had added radar later, vision would have been better than radar + vision at this point.
> But they are extremely geofenced, expensive non-consumer vehicles. Tesla is not. We have been talking about this for years, nothing new.

There is no singular correct approach. But we do know that what they're doing works quite well. I can hail a Waymo today with no driver and enjoy a true driverless ride, albeit with huge geographic caveats.
We just don't know whether the Waymo and rest-of-the-industry approach will turn out to be the correct one, or just dogma and herding.
> * Vision: Hey, I think I see something in the road
> * Radar: I don't see anything ahead
> * Vision: I dunno man, it looks like a giant black object, I want to brake
> * Radar: Trust me, nothing is there, I would have heard something by now
> * Vision: How confident are you? I'm at a 51%
> * Radar: I'm at 90%
> * Vision: Ok, I feel better knowing that. Let's go
> * Radar: Ok, let's go

Looks like what happened in those notorious Tesla-crashing-into-objects situations.
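The dialogue above is a joke, but the failure mode is real. Here's a toy naive-Bayes version of it, treating the two sensors as independent and multiplying their odds; the probabilities are the ones from the joke, and the independence assumption is illustrative, not how a real stack works:

```python
# Toy model of the Vision/Radar exchange above: naive independent-sensor
# fusion in odds (Bayes) form. Numbers are from the joke; the independence
# assumption is exactly the thing that goes wrong.

def to_odds(p: float) -> float:
    return p / (1 - p)

def to_prob(odds: float) -> float:
    return odds / (1 + odds)

p_vision = 0.51   # vision: 51% sure there is an object ahead
p_radar = 0.10    # radar: 90% sure the lane is clear (10% object)

# Treating the sensors as independent, their odds multiply:
fused = to_prob(to_odds(p_vision) * to_odds(p_radar))
print(f"fused probability of an object: {fused:.0%}")   # ~10% -> don't brake

# The catch: automotive radar commonly filters out stationary returns, so
# its confident "clear" can be confidently wrong about a stopped object,
# and under naive fusion it vetoes a marginal vision detection.
```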
> There is no singular correct approach. But we do know that what they're doing works quite well. I can hail a Waymo today with no driver and enjoy a true driverless ride, albeit with huge geographic caveats.

No, you can't hail a Waymo in NYC. In fact, you can't hail a Waymo in 99.99% of the US. But you can drive with FSD beta everywhere.
> Looks like what happened in those notorious Tesla-crashing-into-objects situations.

I never said infallible. We're still beholden to what was coded (or not coded).
Sensor fusion is not a joke.
> No, you can't hail a Waymo in NYC. In fact, you can't hail a Waymo in 99.99% of the US. But you can drive with FSD beta everywhere.

I did Waymo in AZ a few months back. It uses radar and lidar. It works. It doesn't solve Tesla's problem, but it does solve a real-world autonomy problem very well, without a driver.
These arguments are never-ending: we are comparing two separate dimensions, geography and features. They can't be compared to figure out who is ahead or better right now.