I don't have a high opinion of The Economist, as I find their articles rather shallow, but I thought I'd post it anyway.
Driverless cars show the limits of today’s AI
"In 2015 Elon Musk, the boss of Tesla, an electric-car maker, predicted the arrival of “complete autonomy” by 2018"
I don't remember Elon using the term "complete autonomy". I wonder if perhaps the article is thinking of "feature complete".
"The few firms that carry passengers, such as Waymo in America and WeRide in China, are geographically limited and rely on human safety drivers."
I would give this a "mostly true but misleading" rating if I were fact-checking this article. The "geographically limited" part is accurate, but Waymo uses safety drivers just in case the car gets stuck and to give passengers more peace of mind; the cars don't rely on them for the actual day-to-day driving tasks. In fact, Waymo has done some rides without a safety driver at all.
I think the article is right about the challenges of camera vision.
"One study, for instance, found that computer-vision systems were thrown when snow partly obscured lane markings. Another found that a handful of stickers could cause a car to misidentify a “stop” sign as one showing a speed limit of 45mph. Even unobscured objects can baffle computers when seen in unusual orientations: in one paper a motorbike was classified as a parachute or a bobsled. Fixing such issues has proved extremely difficult, says Mr Seltz-Axmacher. “A lot of people thought that filling in the last 10% would be harder than the first 90%”, he says. “But not that it would be ten thousand times harder.”"
I feel like this summarizes pretty well why Tesla's FSD is taking longer than Elon thought. Elon probably expected camera vision to be relatively straightforward, especially with Tesla's data, but in reality it has proven much, much harder to reach the reliability needed for driverless FSD. I think this also explains why companies like Waymo adopted their approach of multiple sensors, including lidar and HD maps: they realized that to build the most reliable FSD system, it's best to give the car as much help as possible. It's also why their cars are geofenced. It's better to start small, get FSD working in one area, and build from there, rather than try to go for general L5 autonomy right away.
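To make the "give the car as much help as possible" idea concrete, here's a toy sketch of sensor cross-checking. This is purely illustrative (the function names, confidence values, and fusion rule are all my own invention, not Waymo's actual pipeline): each sensor votes on what an object is, and an HD-map prior can outvote an implausible camera reading, like the sticker-covered stop sign from the article.

```python
# Toy sketch of multi-sensor cross-checking (NOT any real company's pipeline):
# each sensor contributes a weighted vote, and an HD-map prior acts as a
# strong extra vote for what the map says should be at this location.

from collections import defaultdict

def fuse_detections(detections, map_prior=None):
    """detections: list of (sensor, label, confidence) tuples."""
    scores = defaultdict(float)
    for sensor, label, conf in detections:
        scores[label] += conf
    # The HD map already knows there is a stop sign here, so boost that label.
    if map_prior is not None:
        scores[map_prior] += 1.0
    return max(scores, key=scores.get)

# Camera fooled by stickers, but lidar shape + HD map still say "stop".
detections = [
    ("camera", "speed_limit_45", 0.7),  # adversarial misread
    ("lidar",  "stop_sign",      0.6),  # octagon shape match
]
print(fuse_detections(detections, map_prior="stop_sign"))  # -> stop_sign
```

The point of the sketch is just that redundancy turns a single-sensor failure into a recoverable disagreement, which is exactly the help a camera-only system doesn't get.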
"Dr Marcus, for his part, thinks machine-learning techniques should be combined with older, “symbolic ai” approaches. These emphasise formal logic, hierarchical categories and top-down reasoning, and were most popular in the 1980s. Now, with machine-learning approaches in the ascendancy, they are a backwater."
This seems to be a reference to neuro-symbolic AI, a new type of AI that seeks to combine symbolic AI with machine learning. A while ago, I shared this article that talks about it:
Why Neuro-Symbolic Artificial Intelligence Is The A.I. Of The Future
Some researchers are pinning their hopes on neuro-symbolic AI overcoming the limits of current machine learning, and thereby producing AI capable of better FSD.
I appreciate the article's conclusion of trying to set more realistic expectations for FSD. Yes, if you are expecting FSD like in Knight Rider, then we would need true general AI. But certainly a more limited FSD, like geofenced L4 or highway L3 would not need general AI in my opinion.
And I do think there is something to be said for developing smart roads and smart infrastructure that can make the job easier for FSD cars. A big reason why L5 is so hard is that our road system sucks. Yeah, it kinda works for human drivers because we have the general intelligence to figure it out. But inconsistent traffic lights, poorly marked roads, strangely laid-out roads, etc. are hell for a computer to figure out on its own. If we could build a new road system designed for FSD cars from the get-go, it would be much easier for FSD and worth it in the long term because of increased safety.