I have never in my life needed human speech / voice commands to follow public safety directions; I follow their hand gestures.
Based on Verygreen's unbiased analysis of the system, we know it doesn't detect general object obstacles: cones, debris, barriers, curbs, guard rails, traffic lights, traffic signs, animals, road markings, etc.
Tesla's SFS, for example, is basic, while Mobileye has 15 different categories and may have added more. It will tell you whether the edge of the road is flat, a curb, a concrete wall, a guard rail, or a concrete barrier.
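To make the difference concrete, here's a minimal sketch of why a fine-grained road-edge taxonomy matters downstream. The category names and margin numbers are purely illustrative assumptions on my part, not Mobileye's or Tesla's actual labels or values:

```python
from enum import Enum, auto

# Hypothetical road-edge categories (illustrative only, not actual Mobileye labels).
class RoadEdgeType(Enum):
    FLAT = auto()              # flat, potentially drivable shoulder
    CURB = auto()
    CONCRETE_WALL = auto()
    GUARD_RAIL = auto()
    CONCRETE_BARRIER = auto()

def lateral_margin_m(edge: RoadEdgeType) -> float:
    """Toy rule: a planner can keep a bigger buffer from hard barriers than from
    a flat edge -- impossible if the system only outputs 'edge / no edge'."""
    margins = {
        RoadEdgeType.FLAT: 0.2,
        RoadEdgeType.CURB: 0.4,
        RoadEdgeType.GUARD_RAIL: 0.7,
        RoadEdgeType.CONCRETE_WALL: 0.8,
        RoadEdgeType.CONCRETE_BARRIER: 0.9,
    }
    return margins[edge]

print(lateral_margin_m(RoadEdgeType.GUARD_RAIL))  # 0.7
```

The point of the sketch: the richer the edge classification, the more the planner can differentiate its behavior, rather than treating every edge the same.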
Even EyeQ3 had a network that detected potholes, animals, and road debris. In fact, Audi is using it for automatic active-suspension adjustment.
They also have networks for things like lane detection, determining which lane you are currently in, and detecting upcoming lanes from afar. NOA could use a network like that, because the way it handles this today is primitive. Right now it looks like they are running some downstream algorithm that barely works, which is why it detects shoulders as lanes and fails to see real lanes, leading to missed exits and attempts to take them way too late.
ME also has a lane segmentation network that tells you what each lane means and where it leads: lane expansion, merge lane, lane split, lane collapse, exit lane, etc. All of this is in EyeQ4 and in production TODAY!
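A quick sketch of why per-lane semantics help a NOA-style planner. This is my own toy illustration under assumed labels (the enum names and the routing rule are hypothetical, not EyeQ4's actual output):

```python
from enum import Enum, auto

# Hypothetical per-lane semantic labels like those described above (illustrative only).
class LaneSemantics(Enum):
    NORMAL = auto()
    MERGE = auto()
    SPLIT = auto()
    EXIT = auto()
    COLLAPSE = auto()   # lane ends ahead
    EXPANSION = auto()  # new lane opens up

def should_change_lane(current: LaneSemantics, taking_exit: bool) -> bool:
    """Toy routing rule: with semantics attached to each lane, the planner can
    act early -- leave a collapsing lane, or move into the exit lane well in
    advance -- instead of inferring lane meaning from geometry at the last moment."""
    if current is LaneSemantics.COLLAPSE:
        return True                                # lane is ending; move over now
    if taking_exit:
        return current is not LaneSemantics.EXIT   # get into the exit lane early
    return current is LaneSemantics.EXIT           # don't ride the exit lane if staying on

print(should_change_lane(LaneSemantics.NORMAL, taking_exit=True))  # True
```

With only raw lane-line geometry and no semantics, the planner can't make these calls until the geometry itself diverges, which is exactly the late-exit behavior described above.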
That remark is not from me, but from other reviews of NOA.
What is the disengagement rate of v9 on limited access freeways?
Again, Elon boasted that the AP2 chip was more than enough for Level 5 FSD, grandstanded that he could do a cross-country drive with his eyes closed, and promised the FSD feature would be released in early 2017. Have you forgotten all of this? How convenient!
It's not the NN that is the problem (when it comes to highway autonomy, for the most part); it's the motion planning and control algorithm. How is it that you people never seem to be able to differentiate the two?