Things we do know today: 1. Sensor incompatibility is a giant problem.
What do you mean by "sensor incompatibility"?
2. Update lags render radar ineffective.
Radar signals travel at the speed of light. There is no problem with "update lag," and I frankly don't even understand where you'd get such an idea.
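To put numbers on that (a back-of-the-envelope sketch; the 200 m target range and 20 Hz frame rate are illustrative assumptions, not the specs of any particular automotive radar):

```python
# Back-of-the-envelope radar timing check.
# Assumed values: 200 m target range, 20 Hz frame rate (illustrative only).
C = 299_792_458            # speed of light, m/s
target_range_m = 200       # assumed maximum target range

round_trip_s = 2 * target_range_m / C
print(f"Round-trip time of flight: {round_trip_s * 1e6:.2f} us")
# -> about 1.33 microseconds, negligible next to everything else in the stack

frame_rate_hz = 20         # assumed automotive radar frame rate
frame_period_ms = 1000 / frame_rate_hz
print(f"Time between radar frames: {frame_period_ms:.0f} ms")
# Even the frame period (tens of ms) is on par with a 30 fps camera's 33 ms,
# so there is no "update lag" unique to radar.
```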
3. Lidar is exceedingly sensitive to precise and permanent labeling, making it both expensive and complex to use without highly precise and highly accurate environmental data.
Lidar is just a sensor; it can be used with or without maps, in the same way that cameras can be used with or without maps.
Thus, I would bet on Andrej Karpathy and others like him who understand how to describe their process in terms that common people think they understand. They call these things "Deep Learning" and "Computer Vision."
Literally every AV company uses deep learning and computer vision.
The people who sell lidar and radar as preferred solutions haven't really understood the problem they are trying to solve.
Why would you make such a claim? It doesn't even make sense. Every company that uses lidar or radar also uses cameras, and they're all trying to solve the same problem.
What bugs remain are essentially exclusively recognition and path planning issues... Detection of obstacles à la lidar or radar is, as far as we can tell within the sample size here (25k cars are running beta, by my extrapolation), a solved problem.
Object detection and planning are almost the entire challenge of making an AV. It doesn't make sense to trivialize them. And those challenges are very much still present in the current version of FSD beta -- there are videos all over the place of it trying to drive directly into poles, bollards, construction signs, and many other objects.
After all, Waymo still has people manually labeling exactly where a car should drive on every road, and even alternate routes in parking lots.
That isn't true; mapping is essentially entirely automated. Changes in the environment vs. the map are even communicated between cars automatically.
Were you to have a more technical background, you might understand the importance of differences in response time (radar being slow) and of being able to function without a precisely and accurately known environment (lidar cannot do that, by design).
I'm a subject matter expert. Radars aren't slow. Lidar is just a sensor, and it can be used with or without maps.
My experience with 10.5 in routine suburban driving is really, really positive.
There are already real L4 driverless robotaxis operating in suburban environments.
because once they start relaxing the driver supervision requirements, it becomes time to talk regulatory process, and that's not going to be fast.
There are many places where AVs are already legal, e.g. Arizona.
Tesla is rapidly approaching the base of the exponential improvement curve.
Contemporary machine learning techniques exhibit power-law scaling, which is a form of diminishing returns. There is no exponential improvement curve at Tesla or anywhere else.
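To make "power law" concrete (a toy example; the exponent alpha = 0.1 is an assumed, representative value, since measured scaling exponents vary by task):

```python
# Toy power-law scaling: error ~ N ** (-alpha), with alpha assumed to be 0.1.
alpha = 0.1

for doublings in range(5):
    n = 2 ** doublings
    relative_error = n ** (-alpha)
    print(f"{n:>2}x data -> error at {relative_error:.3f} of baseline")
# Each doubling of the dataset removes only ~7% of the remaining error
# (2 ** -0.1 ~= 0.933). That is diminishing returns, the opposite of an
# exponential improvement curve.
```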
But then, we'll start to see very noticeable improvements with every bi-weekly release as Dojo starts throwing its muscle at the problem.
Other companies, e.g. Google / Waymo, already have much more powerful computers than Dojo. If all you needed were more computing power, AVs would already have been solved.
And if you still haven't watched the AI Day presentation, GO WATCH IT!!! Yeah, it's a bit over three hours long. Eat that elephant in small bites if you have to, but eat that elephant! So much of the stuff the naysayers are saying on this board is covered in depth in the AI Day presentation.
That presentation was aimed at college students, or maybe early grad students. It did not contain anything at all that could be considered novel by the machine learning community.
Waymo is a GPS-based virtual "drive by wire" system that only operates in a small "geofenced" section of Phoenix, which is moreover all clean, wide, perpendicular roadways.
Waymo operates cars in 25 cities and drives long-haul trucks on the freeways. Waymo seems poised to launch a robotaxi service in San Francisco, one of the densest and most difficult cities in the US to drive in.
The Waymo model can operate urban taxis.
Urban taxis are a huge fraction of the economic value of AVs.
Tesla cars use a standalone sensor set (mostly the cameras at this point) with a "Neural Network deep-learned" set of rules, and a (shrinking) residual layer of procedural code.
Every AV company uses neural networks and deep learning.
Companies like GM, which are new to the autonomy problem, seem to believe that they don't need the massive experience data set that (only) Tesla has acquired to train the AI via "deep learning." It seems feasible enough at first to refine procedural code to handle every situation. But it doesn't work; there are too many variables, and the code becomes unmanageable.
Self-driving is not a data problem. Deep nets that can run in real time on hardware that can feasibly be installed in a car have at most a few million parameters. They do not have the capacity to gain anything from extremely large datasets.
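A rough sense of the real-time constraint (a sketch in which every number is an assumption chosen only to be in the ballpark of current hardware: 8 cameras, 30 fps, 50 TOPS of usable in-car compute):

```python
# Rough per-frame compute budget for in-car inference.
# All figures are assumptions for illustration, not any vendor's specs.
cameras = 8                     # assumed camera count
fps = 30                        # assumed frames per second per camera
usable_ops_per_s = 50e12        # assumed usable ops/sec of in-car compute

frames_per_s = cameras * fps
ops_per_frame = usable_ops_per_s / frames_per_s
print(f"Compute budget per camera frame: {ops_per_frame:.1e} ops")
# ~2e11 ops per frame is a hard ceiling on how big the deployed network
# can be, regardless of how much training data was collected.
```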
Multiple companies have real, driverless robotaxis today, and they did it with less access to data than Tesla. They are existence proofs that data is not the bottleneck for making an AV.
There's an interesting YouTube clip where a Waymo "robotaxi" and a Tesla with FSD Beta 10.4 are tasked with driving to the same destination. The Tesla is, moreover, crippled with hard speed limits. The Tesla beats the Waymo by 2 minutes on a 12 (?) minute run. The interesting thing is why. The Tesla is free to pick and alter the route, and gets on and off the freeway for short sections. The Waymo is locked to its street grid.
The Waymo is 10,000x to 100,000x more reliable than the Tesla. The Tesla gets there two minutes faster. Which one do you put your children in?
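One way to see what that ratio means per trip (a hypothetical calculation: the rate of one intervention per 10 miles is assumed purely for illustration, with the second rate set 10,000x better per the claim above):

```python
import math

# Per-trip failure odds implied by a 10,000x reliability gap.
# The 10-miles-per-intervention figure is hypothetical, not a measurement.
trip_miles = 5

for label, miles_per_intervention in [("less reliable system", 10),
                                      ("10,000x more reliable system", 100_000)]:
    # Model interventions as a Poisson process along the route.
    p_clean = math.exp(-trip_miles / miles_per_intervention)
    print(f"{label}: P(no intervention over {trip_miles} miles) = {p_clean:.4%}")
# ~60.65% vs. ~99.995%: a two-minute faster trip does not close that gap.
```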