The 5-second rule makes NoA almost entirely useless outside of the US.
In busy traffic, 5 seconds is simply not enough time to signal intent and complete the manoeuvre. The most dangerous part of the 5-second rule is that it will abort the lane change even if the car has already started - but not completed - the manoeuvre.
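To make the complaint concrete, here is a tiny state-machine sketch of the behaviour described above. All names and thresholds are illustrative assumptions for the sake of the example - this is not Tesla's actual logic, just a model of "abort when the window expires, even mid-manoeuvre":

```python
# Hypothetical model of the 5-second confirmation window.
# CONFIRM_WINDOW_S and MANOEUVRE_TIME_S are made-up illustrative values.
CONFIRM_WINDOW_S = 5.0   # time allowed between signalling and completion
MANOEUVRE_TIME_S = 2.0   # time needed to physically cross into the gap

class LaneChange:
    def __init__(self):
        self.elapsed = 0.0       # time since the indicator came on
        self.progress = 0.0      # time spent actually moving into the gap
        self.state = "signalling"

    def tick(self, dt, gap_available):
        if self.state in ("completed", "aborted"):
            return
        self.elapsed += dt
        if gap_available:
            self.state = "changing"
            self.progress += dt
            if self.progress >= MANOEUVRE_TIME_S:
                self.state = "completed"
                return
        # The criticised behaviour: the timer fires even mid-manoeuvre.
        if self.elapsed >= CONFIRM_WINDOW_S:
            self.state = "aborted"

# Busy-traffic scenario: a usable gap only opens 4 seconds after signalling,
# so the car starts the change but the window expires before it finishes.
lc = LaneChange()
t = 0.0
while lc.state not in ("completed", "aborted"):
    gap = t >= 4.0
    lc.tick(0.1, gap)
    t += 0.1
print(lc.state)  # aborted - the change had started but ran out of window
```

The point of the sketch: any fixed window that starts at the indicator, rather than at the start of the physical manoeuvre, will abort exactly this way in dense traffic.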
Cool, so could they link the turn stalk to "change lanes eventually" instead of "change lanes now"?
Well - not really... a lot of the time you're relying on the indicator to signal intent to other drivers because you need them to leave you a space. That will often take more than 3-5 seconds. There are plenty of situations where you'd almost never be able to change lanes if you waited for a natural gap large enough for NoA to signal and manoeuvre.
To be fair, though, [AP1] is also doing recognition of a much smaller set of things (which likely means a much smaller NN) and only processes data from one camera instead of 8/9.
AP1/EyeQ3 doesn't use a neural network to perform image recognition. It does use custom ASICs to accelerate traditional computer-vision algorithms. EyeQ3 "recognises" much more than AP2, although Tesla did not use all of its features in AP1. This includes traffic lights, potholes/debris, quite an impressive array of
international signage, animals (yes), and more. And it does all of this at something like 3 watts, on an SoC that sits on your windshield - and has since 2014.
There is no real need to use an NN for the kinds of image recognition that Tesla is doing with AP2. I can see why Tesla went down the AP2 route, and the NN route (quick, yet inaccurate, results). It's not particularly great progress overall - and it's a shame that we'll never see what a Tesla/EyeQ4 system would have been capable of.