RobDickinson
To be honest, computer vision has come on a hella lot since 2016.
He went on at length about its importance in many interviews at the time, pointing out the advantages of seeing cars ahead, the ability to see through bad weather, etc. Between that and his "pushing" the team, as quoted, they decided radar could be used as a primary control sensor WITHOUT imaging/vision back then.
Nah, I'm very confident you're misinterpreting this.
Ever since V8, Tesla has done sensor fusion with radar and cameras:
After careful consideration, we now believe it can be used as a primary control sensor without requiring the camera to confirm visual image recognition
Karpathy talks about the underpass / bridge problem in his most recent talk. That means that until 2021, Tesla used radar as a primary sensor.
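To make the underpass / bridge problem concrete, here's a toy sketch in Python (my own illustration with made-up names, not Tesla's actual code). Automotive radar has poor vertical resolution, so a stationary bridge overhead and a stopped car ahead can produce nearly identical returns; a radar-primary rule brakes for both, while fusing in a camera confirmation suppresses the false positive:

```python
# Toy sketch (hypothetical, NOT Tesla's actual logic) of the underpass problem:
# radar can't resolve elevation well, so a bridge and a stopped car look alike.

from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float        # distance to the return
    rel_speed_mps: float  # relative speed; ~ -ego_speed means a stationary object

def radar_only_should_brake(r: RadarReturn, ego_speed_mps: float) -> bool:
    # Radar-primary logic: brake for any stationary object in our path.
    # This fires for overpasses and gantries too -> phantom braking.
    stationary = abs(r.rel_speed_mps + ego_speed_mps) < 0.5
    return stationary and r.range_m < 60.0

def fused_should_brake(r: RadarReturn, ego_speed_mps: float,
                       camera_sees_obstacle: bool) -> bool:
    # Fused logic: a stationary return only triggers braking if vision
    # confirms an obstacle; the camera classifies bridges away.
    return radar_only_should_brake(r, ego_speed_mps) and camera_sees_obstacle

bridge = RadarReturn(range_m=50.0, rel_speed_mps=-30.0)  # stationary overhead steel
print(radar_only_should_brake(bridge, ego_speed_mps=30.0))           # True  -> phantom brake
print(fused_should_brake(bridge, 30.0, camera_sees_obstacle=False))  # False -> drive on
```

The whole trade-off lives in that one gate: radar alone can't tell "bridge" from "stalled truck", so either vision vetoes the brake or you phantom brake.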
My original point was that Tesla has changed their strategy 3 times since 2016. And they have.
2016: Radar moves from secondary to primary sensor, and cameras, when used, are individual frames from individual cameras. HW2 is PLENTY.
My main contention was when you replied to the following with those articles:
I'd like to see proof tesla said radar would be the primary source of data for self driving.
If you don't want to take MY word that Tesla has had 3 entirely different strategies, will you take Elon's?
Dude, that was me, not knightshade.
My main contention was when you replied to the following with those articles:
I'd like to see proof tesla said radar would be the primary source of data for self driving.
Yeah, I remember this was the case, and the definition above is that it doesn't need the camera to confirm (but not that it's the "main" sensor you depend on, as obviously the cameras still serve that role). They have removed the blog entry, but this article references it: Tesla Autopilot Upgrade Will Make Radar A Primary Control Sensor
After careful consideration, we now believe it can be used as a primary control sensor without requiring the camera to confirm visual image recognition. This is a non-trivial and counter-intuitive problem, because of how strange the world looks in radar.
Everybody else is using high quality sensors and doing great with sensor fusion.
Humans are able to move their heads and they have access to other senses too.
Everybody else is limited to carefully pre-mapped and geofenced areas.
How do we know if they are doing great with sensor fusion or not anyway?
Because their perception is so good. And the reason their perception is so good is in large part because of their sensor fusion. They have been able to fuse data from the maps with the data from camera, lidar and radar to give the car accurate perception. That's a big reason they are able to do reliable autonomous driving with no human in the driver's seat.
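To put a number on why fusing helps, here's a minimal sketch (my own toy example under a conditional-independence assumption, not Waymo's actual pipeline; fuse and logit are names I made up). If camera, lidar and radar each give only a moderate probability that an object is there, combining them in log-odds space yields a much more confident fused estimate than any single sensor:

```python
# Minimal illustration (hypothetical, not any company's real code) of late
# fusion: independent detections combine additively in log-odds space.

import math

def logit(p: float) -> float:
    return math.log(p / (1.0 - p))

def fuse(probs: list[float], prior: float = 0.5) -> float:
    # Combine per-sensor P(object present), assuming conditionally
    # independent sensors given the true state.
    z = logit(prior) + sum(logit(p) - logit(prior) for p in probs)
    return 1.0 / (1.0 + math.exp(-z))

# Camera, lidar, radar each only moderately sure there's a pedestrian:
print(round(fuse([0.7, 0.8, 0.75]), 3))  # 0.966 -> fused belief is much stronger
```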
MUCH PERCEPTION! SUCH WOW!
Seriously though- it's a good point that you appear to be entirely ignoring.
"WORKS GREAT IN ONE TINY SPECIFIC HIGHLY HD MAPPED AREA-- AS LONG AS THERE'S NO MAP CHANGES AND A HUMAN REMOTE BACKUP IS AVAILABLE TO HELP IT" is not evidence of awesome fusion or perception.
Works -all over the place- would be.
They have not demonstrated that at all. Indeed, the fact that they (like Tesla) are many years past their original deadlines to do so suggests it's not as good as you seem to think.
It's kinda funny how Karpathy throws in the towel and says sensor fusion doesn't work. Yet, every AV company is still using sensor fusion and deploying robotaxis with sensor fusion.
But I am not saying that Waymo's perception is 100% perfect all the time. I am disputing Karpathy's absurd claim that sensor fusion is unworkable because of all the phantom braking and false positives, and that therefore vision-only is the only solution. If that were true, then Waymo, Cruise, and Zoox should be phantom braking every 5 seconds with all the sensors that they have.
Clearly that is not the case. Waymo, Cruise, Zoox and others are proof that sensor fusion is viable. Heck, if sensor fusion is as unworkable as Karpathy claims, then how come Waymo is still using sensor fusion after 20M autonomous miles?
At this point, Tesla does not really have a choice IMO. They committed early to putting cameras on every car and claiming the hardware was capable of FSD. And they've sold hundreds of thousands of cars now with the hardware. They can't afford to upgrade ...
That's not at all how I read his remarks.
He was saying that the engineering effort it would take to have great fusion is better spent on vision, since you ultimately need to solve vision to ever have L5.
If sensor fusion is so WORKABLE, why do they still not offer L4 service outside one tiny area of a perfect-weather, simple-road city?
LIDAR has different failure modes compared to cameras as well (reflective trucks, wet reflective roads, reflective walls / tunnels, glass on trucks, etc.). They'll eventually run into similar local maxima to the ones Tesla hit.
How do you expect LIDAR to solve the reflective false positive / negative problems? We've actually seen Waymo cars have trouble (non-consequential steering hesitations) when driving next to shiny metallic trucks.
Dumbing down your sensor array because making sense of multiple inputs is "hard" is just another way of saying "we give up".