Interesting presentation from Karpathy about Tesla Autopilot development.
But to me it's not great news for anyone expecting a quick FSD release.
It confirms that they are focusing on Software 2.0 for part of their driving policy: lots of data, labeling, and automatic programming.
It seems very promising for many fields, but in the same presentation Karpathy strangely gives all the good reasons why it is not such a good idea: the Software 2.0 approach handles edge cases poorly and requires a huge labeling effort. The best example is the automatic wipers, which could have been engineered very simply, but Elon insisted they be done with AI, leading to a much longer and more painful process.
Waymo and Mobileye are going the classical way: use AI image recognition and HD maps massively to build a very precise 3D representation of the world and its actors, then develop a classical Software 1.0 algorithm, with many special cases, for the driving policy. And it is working quite well: Waymo is level 4 ready, and Mobileye has a vision-only system with an effective driving policy operational in Jerusalem (to be completed with a fully redundant lidar system).
So Tesla is pursuing a different way to develop its driving policy when a simpler, more reliable approach is known to work, while at the same time trying to avoid the HD map problem and still having a lot of work ahead to get a reliable vision+radar view of the world.
So all in all, Tesla will be able to deliver a very good practical system that works most of the time. They will deliver impressive FSD features, stopping at red lights for instance, but they are far, far away from level 3, and from true level 4/5 FSD, which imply full liability for the manufacturer.
To me, the path Tesla has chosen for its Autopilot development means it is guaranteed to remain level 2 for a very long time, even with many quasi-self-driving features.
As the provider of the most-used automatic level 2 system, Tesla should feel a moral obligation to offer the best driver monitoring system, to ensure the driver stays in the loop for his own safety.
The Autopilot screen display is a very good way to communicate to the driver what the system is doing, but the wheel-nag system is a real pity: an annoyance, and no guarantee of eyes on the road.
That makes it all the more difficult to understand why Musk dismisses driver monitoring as a solution. It fits Tesla's philosophy perfectly: a simple camera (already there in the Model 3, and retrofittable on the S and X) linked to a neural network that understands what the driver is up to: attentive, sleepy, drunk... It would allow hands-off driving with eyes on the road, much more comfortable, and could add terrific safety features (young driver monitoring...). This would add both safety and convenience: the system would ensure you are concentrated and in the loop when needed (high-speed driving) and could let you do other things in traffic jams.
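The camera + NN loop described above could look roughly like this. This is only a minimal sketch of the idea: the state names, speed threshold, and policy outcomes are my own assumptions, and the classifier is stubbed out with a score lookup standing in for a real neural network.

```python
# Hypothetical sketch of a driver-monitoring loop: cabin camera -> driver
# state -> Autopilot policy. Nothing here reflects Tesla's actual code.

DRIVER_STATES = ("attentive", "sleepy", "drunk")

def classify_driver(frame_scores):
    """Stand-in for the cabin-camera neural network.

    In reality this would run a CNN on a camera frame; here frame_scores
    stands in for the network's per-state confidences.
    """
    return max(frame_scores, key=frame_scores.get)

def autopilot_policy(state, speed_kmh):
    """Decide how much hands-off freedom to allow for a given driver state
    and driving context (high-speed driving vs. traffic jam)."""
    if state != "attentive":
        return "alert_driver"       # escalate: chime, slow down, disengage
    if speed_kmh < 20:
        return "hands_off_allowed"  # e.g. stop-and-go traffic jam
    return "eyes_on_required"       # high speed: gaze monitoring, no wheel nag

# Example: a drowsy driver at highway speed triggers an alert.
state = classify_driver({"attentive": 0.2, "sleepy": 0.7, "drunk": 0.1})
print(state, autopilot_policy(state, 110))  # sleepy alert_driver
```

The point of the sketch is that the convenience/safety trade-off lives entirely in the policy function: the same monitoring signal that nags you at 110 km/h can free your hands in a traffic jam.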
MIT already has a very interesting study on exactly this theme, how people actually use Tesla Autopilot:
and independent companies offer standalone driver monitoring:
http://www.seeingmachines.com/industry-applications/automotive/
Tesla has a huge asset in its integrated computer system and OTA capability, which lets it deliver incredible features that keep getting better while other brands are stuck with rigid systems, but it should not waste that asset on a bad overall FSD/Autopilot approach.
No, you cannot skip better driver monitoring, which is guaranteed to improve safety and potentially save lives at very low cost, just because you hope to deliver FSD one day via a very uncertain, improvised path.
And no, it is not a good idea to use average accident statistics to prove Autopilot's safety and blame drivers, when you can be 100% sure that automated driving will, on average, always reduce focus on the road.
Once again, I'm not a Tesla basher; I'm a long investor, very happy with the rally. But Tesla can and should do much better with its Autopilot/FSD story.