I'd like to hear your thoughts on that. There is the discussion of Lidar, which is certainly a hardware issue. Also, for true autonomous activity, it is very likely that the car will have to communicate with other vehicles and vice versa. You think all of this can be accomplished with some software updates?
My thoughts?
Tesla is going for the end goal of fully self-sufficient autonomous driving. Because of that goal, they are taking far longer than other companies to develop the intermediate, lower-feature-content versions of the system.
For example: a system where all cars are tagged and broadcast GPS position, heading, and speed is much simpler. However, it requires all cars to have that system (not likely any time soon), and it also requires that no other objects be in the vehicle's path. Add an object to the road and all that car-position data is no help. So you have to have robust object detection. And if you have robust object detection, do you still need the other-car data (beyond smart traffic control and routing)?
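To make that blind spot concrete, here's a toy sketch of the "all cars are tagged" scheme (the names and message format are mine, not any real V2V protocol): every equipped car broadcasts a beacon, and the only hazards you can see are the ones that broadcast.

```python
from dataclasses import dataclass

# Hypothetical beacon for the "all cars are tagged" scheme:
# each equipped vehicle broadcasts position, heading, and speed.
@dataclass
class Beacon:
    car_id: str
    x_m: float        # position in a local flat frame, meters
    y_m: float
    heading_deg: float
    speed_mps: float

def tagged_hazards(beacons, my_x, my_y, radius_m=100.0):
    """Return the beaconing cars within radius_m of us.

    The blind spot: anything that does not broadcast (debris, a
    pedestrian, an unequipped car) never appears in `beacons`, so
    this check alone can never prove the path is clear -- which is
    why robust object detection is needed anyway.
    """
    return [b for b in beacons
            if (b.x_m - my_x) ** 2 + (b.y_m - my_y) ** 2 <= radius_m ** 2]
```

The point of the sketch is the failure mode, not the math: an untagged cinder block simply produces an empty list, the same as an empty road.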
Lidar is great in that it gives you a range to the nearest object as native data. However, it doesn't provide any context. Traffic signals, turn signals, police lights, and crosswalks are not handled by it, and it can't tell you whether a return is a cinder block or a plastic bag. So you need a robust vision system to handle that type of data. And if you have a robust vision system, is the Lidar still needed?
I drive without lidar or other-car data, so it can be done. I think one stumbling block is that people overestimate the level of precision a car needs to function. From a basic navigational standpoint, the car only cares whether there is an object it might collide with; it doesn't care exactly what that object is. It's like driving without glasses: it doesn't matter what the blob is, just don't hit it. Handling turns is more complicated, of course, but lanes are just objects too.
On the side of useful things that don't incur a hardware cost are mapping strategies. Each car can report its position and speed. That provides a live map of traffic conditions, and it also provides an ant-trail of normal vehicle paths. Repeated sampling of intersections can build a "most likely to be here" map of where the traffic signals are. This is great for providing hints, but to handle changing conditions the vehicle must still be able to navigate without those aids.
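The "most likely to be here" idea is basically vote counting. Here's a minimal sketch (my own toy version, not Tesla's actual pipeline): bucket many noisy fleet sightings of a signal into a coarse grid and take the most-observed cell.

```python
from collections import Counter

def most_likely_cell(observations, cell_m=5.0):
    """Given many noisy (x, y) sightings of a traffic signal in a
    local frame (meters), bucket them into a coarse grid and return
    the center of the most frequently observed cell -- a crude
    'most likely to be here' estimate built purely from repeated
    fleet samples. Hypothetical sketch with made-up parameters.
    """
    counts = Counter(
        (int(x // cell_m), int(y // cell_m)) for x, y in observations
    )
    (cx, cy), _ = counts.most_common(1)[0]
    # Report the center of the winning cell.
    return ((cx + 0.5) * cell_m, (cy + 0.5) * cell_m)
```

Outlier sightings (bad GPS fixes, a signal seen from an odd angle) just become losing cells, which is why repetition is what makes the map trustworthy.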
A huge advantage Tesla has is that all the cars can feed back route data, so baseline path and traffic-control data can be processed back at the ranch if need be (the current indication is that they are not doing this across the full fleet). They could set up the system so that once you have manually (EAP) driven a route 5 times, it has enough data to FSD it. Again, that is sort of cheating, since the system needs to react to changes, so the baseline data should only be used as a gut check against low-confidence vehicle processing.
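That "gut check" policy is simple enough to write down. A toy sketch of what I mean (thresholds and names are all hypothetical): trust live perception when it's confident, and only consult the crowd-sourced baseline when it isn't.

```python
def fuse_with_baseline(vision_est, vision_conf, baseline_est,
                       conf_floor=0.7, max_disagree_m=2.0):
    """Gut-check policy: live perception is the source of truth;
    the fleet baseline is only consulted when vision confidence is
    low. Hypothetical thresholds, illustrative only.

    Returns (estimate_m, trusted). trusted=False flags the case
    where low-confidence vision and the baseline disagree, i.e.
    conditions may have changed and the car should get cautious.
    """
    if vision_conf >= conf_floor:
        return vision_est, True        # confident vision wins outright
    if abs(vision_est - baseline_est) <= max_disagree_m:
        return vision_est, True        # baseline corroborates weak vision
    return baseline_est, False         # disagreement at low confidence
```

Note the asymmetry: the baseline never overrides confident vision, because the whole premise is that the road may have changed since the baseline was recorded.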
Summary of my thoughts:
Tesla is jumping to the hardest version of the problem, which takes more work and does not yield much in the way of mostly-working intermediate steps. However, once they get the vision system working, it covers (or can be updated to cover) all the edge cases. Current EAP is a limited set of lane-assist and cruise-control functions to provide some value to owners. Future EAP will be the FSD code set with extra driver interaction/attention requirements.