Are you saying Elon lied on the 22nd?
What I got from Elon's comments is that they are not planning to use HD maps as a
primary input to the driving software (e.g., path planning or determining drivable area). Obviously, they will continue to use maps (HD or otherwise) for navigation/route planning, and, until road-sign reading gets more robust, they can use maps for speed limit info and to identify carpool lanes, one-way streets, dead ends, train crossings, etc. I imagine they will also use maps to "see" around blind curves/corners to know how much to slow down (of course, roads should be designed to avoid this, but in the real world we are not always so lucky). Maps can also include "real-time" traffic data, which can help with route planning but might also be useful for changing lanes to avoid a blockage you can't yet see.
Elon's point (which I agree with) is that you cannot rely on maps (no matter how precise) as a primary input, because they are always out of date. You have to solve vision to be able to handle all of the exceptions and dynamic situations that occur. We know vision is both sufficient and necessary, because that is what human drivers use (plus a bit of hearing, which is not essential*), and because roads are designed for sighted humans. Lidar and radar cannot read signs. (*There is also a microphone in the car, which could theoretically be used to detect sirens, etc.) If you "solve" vision, you don't need anything else to be at least as good as humans. It remains to be seen how hard this problem is to solve (or what exactly "solved" means). I think recent advancements in software (neural nets/machine learning) and hardware (faster, cheaper, and more domain-specific CPUs/GPUs/ASICs) are accelerating progress, and we are close to an inflection point where a lot of progress can be made in a short time. Clearly, Elon is betting on this.
Then there is the question of sensor redundancy. I think there are at least two facets to this: 1) handling hardware failures or blocked sensors and 2) providing a supplement or backup for limited capabilities.
As far as 1) goes, Tesla has chosen to have multiple cameras covering the forward view (the main direction of travel) and not much overlap on the sides and back. Arguments for more cameras: they are cheap, and any single camera could be blocked by dirt/debris/precipitation. Arguments against: cameras don't fail often, so redundancy isn't needed. I am much more worried about occluded cameras than about failures. Some partial remedies might come from hydrophobic/oleophobic coatings, shielding, heating elements, and/or wipers. Coatings and shielding could even be added using aftermarket products.
As for 2), obviously autonomous cars will need to drive in some level of rain, snow, and fog. Cameras and lidar are not great in these conditions due to their reliance on visible (or near-infrared) light. Lidar also has fairly poor temporal resolution, although its spatial resolution is good. Radar can see right through precipitation and has good temporal resolution, but its spatial resolution is not great. Elon said in the Autonomy Day presentation that he preferred radar because it uses a different part of the frequency spectrum (i.e., not visible light) and therefore covers cases lidar can't. Tesla has decided a single front-facing radar is good enough. What I got from the presentation is that, moving forward, they are planning to use vision as the main sensor input and radar as a supplement in cases where cameras struggle (inclement weather, and maybe poor lighting). They also talked about using radar as a ground-truth cross-check to train new software to extract accurate distance information from the cameras alone and build a lidar-like 3D point-cloud view of the world. This seems promising, but it is not a done deal. Again, Elon is betting on this.
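To make the "radar as ground truth" idea concrete, here is a minimal sketch of what that training signal could look like. Everything here (the function name, the toy numbers, the per-object matching) is my own invention for illustration, not Tesla's actual pipeline; I'm assuming the vision network already outputs a depth estimate per tracked object and that radar ranges have been matched to those objects:

```python
# Hypothetical sketch: using radar ranges as a supervision signal for a
# camera-only depth network. All names here are invented for illustration.
import numpy as np

def radar_supervision_loss(predicted_depth_m, radar_range_m, valid_mask):
    """Mean absolute error between vision-predicted depths and radar ranges,
    computed only for objects that have a confident radar return."""
    err = np.abs(predicted_depth_m - radar_range_m)
    return float(err[valid_mask].mean())

# Toy example: three tracked objects; radar has a valid return on two of them.
pred = np.array([12.4, 30.1, 55.0])   # vision network's depth estimates (m)
radar = np.array([12.0, 29.5, 0.0])   # matched radar ranges (m); 0.0 = no return
mask = radar > 0.0

loss = radar_supervision_loss(pred, radar, mask)  # ~0.5 m mean error
# A training loop would minimize this loss so the camera-only network learns
# metric depth, i.e., a lidar-like point cloud without the lidar.
```

The appeal of this setup is that the radar labels come for free from the existing fleet, with no manual annotation, which fits the "fleet learning" theme from the presentation.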
I think they are deliberately avoiding a system with multiple sensor types as primary inputs (aka sensor fusion). If you have conflicting inputs from multiple sensor types, deciding which one is right can be tricky. Maybe this will be solvable with Software 2.0 (NN/ML), but I notice that Mobileye's fully autonomous prototype car uses only cameras.
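For context, the textbook way to combine conflicting estimates is to weight each by its confidence. Here is a minimal sketch of inverse-variance weighting (a standard technique, not anything Tesla or Mobileye has described), which also shows exactly where the trickiness lives:

```python
# Inverse-variance weighted fusion of two noisy sensor estimates.
# Shown only to illustrate the conflict problem, not any specific product.
def fuse(est_a, var_a, est_b, var_b):
    """Combine two measurements, trusting the lower-variance one more."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# Camera says 40 m (noisy in rain, variance 16); radar says 30 m (variance 4).
dist, var = fuse(40.0, 16.0, 30.0, 4.0)  # dist == 32.0
# The fused answer leans toward the radar, but the variances themselves are
# guesses: if one is wrong (say, a dirty lens the software doesn't know
# about), the fused answer can be worse than either input alone.
```

That last point is the whole problem in miniature: the math is easy, but knowing how much to trust each sensor at each moment is not.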
TL;DR Elon's fundamental point is that vision is both essential and sufficient to solve FSD. Thus, focusing on other sensor types (lidar) and inputs (HD maps) for the driving task is just a distraction or a crutch to help overcome (relatively?) short-term software limitations.
I think his overall reasoning is sound. Of course, the devil is in the details, so there are lots of potential stumbling blocks.
- The FSD computer (HW3) plus Karpathy's neural nets may not be good enough, or may take too long to get there. Predicting software development progress (especially in a rapidly changing area like machine learning) is hard, and Elon has a history of missing timelines. In the next 2-3 months, we should start to see HW3-specific FSD features rolling out. The pace and quality of that rollout will give a good indication of how well they are progressing. My gut feeling is that, since the hardware is well designed and Karpathy is a smart and well-respected domain expert, we will indeed see some impressive progress shipping to actual customer cars in that time frame.
If progress is too slow, they may lose mindshare and market share to the HD-map and/or lidar proponents, who will continue to make progress for some time. Tesla has to make it through the short term in order to survive long term as a company.
- There could be a breakthrough in solid-state lidar making it cheap enough to deploy widely in production cars soon. That would help some developers/manufacturers stay (or jump) ahead of Tesla, at least in the short term. I am having a hard time not comparing this supposedly imminent breakthrough to the weekly announcements of battery breakthroughs that rarely pan out, but we shall see.
- Another car manufacturer could start taking FSD software development seriously. I don't think any of them are doing so now, even though some want you to think they are. Ultimately, autonomous cars mean the world will need fewer cars, so the manufacturers have a strong disincentive to make that happen quickly. Instead, they will continue to be conservative (slow) in their approach, citing safety, the need for more testing, and maybe even (curse the thought) regulation(!). Look at how the automakers are dragging their feet: slow to incorporate Mobileye's newer chips and software, geofencing (Super)Cruise, restarting with new alliances (the Germans), or doing very little (the Japanese/Koreans, well, maybe not Nissan). A couple of them will get antsy and forge ahead. Several will not survive. New companies will probably emerge (maybe Chinese automakers partnering with US software).
- Tesla could fail to carefully manage the transition from assisted driving to FSD. This is tricky. People will start making too many assumptions about what the car can do. Depending on the performance of the software and the design of the user interface, it might be hard for drivers to intervene safely when the car makes mistakes, which it will. A few high-profile fatalities (children/celebrities) could cause a huge backlash from regulators and/or the public against Tesla or FSD in general. Responses will probably (certainly) not be rational.
- Tesla could just not make it as a company for other reasons. The FUDsters could still win. It could be something with the SEC or battery fires or too many "Autopilot crashes". I think this is a lot less likely to happen now than 3-4 years ago, but there are many vested interests betting against them.
Autonomous vehicles will be an extinction level event in the transportation industry. The dinosaurs will die off, and the auto sapiens will emerge. Elon and Tesla will probably stumble along the way, but I'm betting on them stumbling forward when they do.