It should not be surprising — it should be obvious — that L4 in a highly constrained environment with lots of crutches is a much easier problem than L4 in the wild, with basically no constraints.
For Tesla to achieve human-level L4 driving with their FSD software would be a vastly larger technical achievement than getting to human-level L4 driving within Waymo’s constraints.
It’s not clear to me which is more difficult: making a driverless robot work in Waymo’s playpen or making an L2 robot work in the wild. It’s possible they’re about equally difficult.
What we cannot accept as sound reasoning is that L4, irrespective of constraints, is better or more impressive or more advanced than L2 in the wild simply because 4 is a higher number than 2. That is folly.
Waymo’s technology could not support an L2 system in the wild because it depends on crutches that Waymo only has within its playpen. If you stripped away the crutches and forced Waymo employees to re-develop the software for L2, I reckon you’d (eventually) end up with something comparable to FSD Beta.
Conversely, if you took Tesla’s technology and built a playpen for it in Arizona with all the same crutches Waymo uses, I bet you’d eventually end up with something comparable to Waymo’s driverless proof of concept.
This simply isn't true, as has been demonstrated: Tesla fails in even the simplest driving situations. You act as if Phoenix were some cordoned-off box in the middle of nowhere, as if it weren't the "real world" or the "wild." Yet this is a metro area that receives roughly 19 million tourist visitors every year. So clearly the system is robust and general, or all of those people and their cars would be in danger. Waymo would be a menace to society, given the millions of people traveling to Phoenix from around the world that it has to interact with. Waymo's cars encounter people and vehicles they have never seen before on a daily basis. Phoenix and California see roughly 19 million and 42 million visitors, respectively, from across the country and the world each year. If Waymo's perception system were brittle, every one of those people would be at risk of being killed or having their car totaled.
Look at the GIF below (there are hundreds of other simple situations like it that FSD Beta fails at, and this happens multiple times in a SINGLE drive). If what you said were true, then if I took the vehicle that FSD Beta was about to ram into because it didn't detect it, drove it to Phoenix, and parked it there, a driverless Waymo should ram into it too.
We can infer that Waymo has driven 100k+ miles with no driver in Phoenix. We know that humans don't shape-shift when they travel to other cities. This is actually very important. Cars don't transform like Autobots. Again, very important. This is what allows Waymo's neural networks to generalize.
If anything is going to break through the challenges in perception, prediction, and planning that continue to confound AVs, it will be the application of new approaches or new advances in old approaches — such as 4D vision, multi-task learning, self-supervised learning, imitation learning, and reinforcement learning — at the million-vehicle scale, with thoughtful data curation (using things such as active learning and shadow mode).
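To make "shadow mode" and active-learning-style data curation concrete, here is a minimal, hypothetical sketch (the `Frame` fields, the threshold, and the trigger rule are illustrative assumptions, not any company's actual implementation): a candidate model runs silently alongside the human driver, and only frames where its proposed control disagrees with what the human actually did get flagged for upload and labeling.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One logged timestep: the human's steering and the shadow model's proposal."""
    frame_id: int
    human_steering: float   # degrees, as actually driven
    model_steering: float   # degrees, proposed silently by the shadow model

def select_for_upload(frames, disagreement_threshold=5.0):
    """Active-learning trigger: keep frames where the shadow model's proposed
    control disagrees with what the human did. Disagreements are the most
    informative frames to label and train on."""
    return [f.frame_id for f in frames
            if abs(f.model_steering - f.human_steering) > disagreement_threshold]

frames = [
    Frame(0, human_steering=0.0,  model_steering=0.5),   # agreement: ignored
    Frame(1, human_steering=12.0, model_steering=-3.0),  # disagreement: uploaded
    Frame(2, human_steering=-8.0, model_steering=-7.5),  # agreement: ignored
]
print(select_for_upload(frames))  # prints [1]
```

Disagreement-triggered collection like this is what would let a million-vehicle fleet upload only its most informative miles rather than raw footage.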
Once again, you only care about whatever Tesla is doing, or whatever you speculate Tesla is doing, while others are doing that and more, with approaches that are far more advanced, and have been doing it for years. We know what Tesla is doing because we can directly inspect their neural networks and their architecture.
Solving L4 in the wild with this data is a fundamentally different problem — a fundamentally easier problem — than solving L4 in the wild (not in a playpen) with the data you can get from a few hundred vehicles. It requires neural networks to generalize much less. It trains them with an amount of data commensurate with what we’ve seen in successful AI projects.
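A back-of-envelope sketch of that data gap (all figures here are assumed for illustration, not reported numbers): a million-vehicle consumer fleet versus a few hundred test vehicles, each driving a typical annual mileage.

```python
# All figures are hypothetical, for illustration only.
fleet_small = 600                # a few hundred test vehicles
fleet_large = 1_000_000          # a million-vehicle consumer fleet
miles_per_vehicle_year = 12_000  # rough annual mileage per vehicle

small_miles = fleet_small * miles_per_vehicle_year  # 7.2 million miles/year
large_miles = fleet_large * miles_per_vehicle_year  # 12 billion miles/year
print(f"{large_miles / small_miles:.0f}x more miles of data per year")  # 1667x
```

Even under these crude assumptions the large fleet collects over three orders of magnitude more driving per year, which is the sense in which its networks are asked to generalize much less.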
Do you transform into an alien when you go from city to city? Does your car turn into a UFO and levitate? Do you walk backwards like in Tenet? If your logic were correct, you should go to Phoenix: Waymo's perception and prediction systems would fail and run you over or rear-end you. If your statement were true, then the millions of tourists who fly or drive into Phoenix would be in danger of being run over or rear-ended, because Waymo's perception and prediction would be brittle, not general, and would instantly fail.
Additionally, Huawei would not be ready to release, within six months, a door-to-door advanced autopilot system that works anywhere in China after developing it with just 500 test cars, in an environment that is orders of magnitude harder to drive in than the US, and at a miles-per-intervention (MPI) rate reportedly up to 500x higher than FSD Beta's.
The same goes for Mobileye: they wouldn't be ready to release a door-to-door system that was developed in Israel, already works in Germany and Detroit, and is about to be deployed all over China in a few months.
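For reference, MPI (miles per intervention) is simply miles driven divided by the number of interventions; a quick sketch with purely hypothetical numbers shows what a "500x higher MPI" gap means:

```python
def miles_per_intervention(miles: float, interventions: int) -> float:
    """MPI = miles driven / number of interventions (inf if none observed)."""
    if interventions == 0:
        return float("inf")
    return miles / interventions

# Hypothetical figures for illustration only, not reported data:
fsd_mpi = miles_per_intervention(10_000, 2_000)  # 5 miles per intervention
other_mpi = miles_per_intervention(10_000, 4)    # 2,500 miles per intervention
print(other_mpi / fsd_mpi)  # prints 500.0
```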
Again, it's about LOGIC.
1+1 will always be 2.
Your logic doesn't check out.
This is why we have to look beyond shallow comparisons between Waymo and Tesla. It is too simplistic to say Waymo has more advanced AI because 4 is a bigger number than 2. We have to look at the size of the problem — its scope, its constraints, its crutches, and also the resources, i.e. the data, that a company can use to solve it.
It's not that 4 > 2; it's that Waymo has delivered ~100k driverless miles to the public while Tesla has delivered 0: zero, nil, zip, nothing.