This is my educated guess at how things stand; take it with a (large) grain of salt:
AP1 uses a MobilEye chip to see objects, a simple linear radar, and a processing unit that fuses the sensor data and drives the car. The Florida crash happened because the system failed to detect the truck, which presented as a stationary object in the path of the car. The MobilEye chip likely failed to see the truck because of contrast problems. The radar also failed to 'see' it. This is because the linear radar sends a pulse and reads the returning reflected signal, and that signal includes returns from everything stationary in front of the car: signs, lamp posts, potholes, fences, and, unfortunately, large trucks crossing perpendicular to the direction of travel. If the car braked every time it saw a stationary item, you wouldn't get very far. So, in the truck crash, the radar 'saw' the truck but ignored it, just like it ignores all other stationary objects.
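To make the "ignores all stationary objects" point concrete, here is a minimal Python sketch of that kind of filter. It assumes the radar reports a closing speed for each return (as automotive radars typically do); the numbers and names are made up for illustration, not the actual radar firmware logic:

```python
# Illustrative sketch only -- not Tesla's or the radar vendor's actual logic.
# Shows why a simple radar pipeline drops stationary returns: anything whose
# absolute speed is ~0 (signs, fences, a truck crossing broadside) looks the same.

from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float        # distance to the reflecting object (m)
    closing_speed: float  # relative speed toward the car (m/s)

def moving_targets(returns, ego_speed, threshold=1.0):
    """Keep only returns whose absolute speed exceeds `threshold` m/s."""
    kept = []
    for r in returns:
        # Absolute speed of the target along the radar axis:
        # our own speed minus how fast the gap is closing.
        absolute_speed = abs(ego_speed - r.closing_speed)
        if absolute_speed > threshold:
            kept.append(r)   # a genuinely moving object -- worth reacting to
        # else: treated as roadside clutter and ignored
    return kept

# A truck crossing perpendicular to the car has ~zero speed along the radar
# axis, so the gap closes at roughly the car's own speed and it gets filtered
# out, exactly like a lamp post or an overhead sign.
car_speed = 30.0  # m/s
scene = [
    RadarReturn(range_m=80.0, closing_speed=30.0),  # stationary sign (ignored)
    RadarReturn(range_m=60.0, closing_speed=30.0),  # crossing truck (ignored!)
    RadarReturn(range_m=50.0, closing_speed=10.0),  # slower lead car (kept)
]
print(moving_targets(scene, car_speed))
```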
Tesla negotiated with the radar manufacturer to obtain the raw feed from the radar, which lets them 'see' in a more 3D manner: since the radar scans the space in front, Tesla can compare successive frames of data and build a 'point cloud' (similar to what Lidar produces). This likely lets them detect large stationary objects (like a truck). There can still be false positives for stationary objects. However, Tesla plans to map these, so they can ignore persistent stationary items (like signs, overpasses, etc.) while still detecting intermittent stationary items (like people, cars, etc.). This requires building a map of persistent items, which they do by having cars repeatedly pass by the same stationary items to confirm they are really persistent.
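Here is a rough sketch of what that persistence-mapping idea could look like, assuming a simple geolocated grid and a pass count. Tesla's actual fleet-mapping pipeline isn't public, so the grid size, threshold, and names below are placeholders:

```python
# A minimal sketch of the "persistent stationary item" idea, assuming a
# geolocated grid and a pass count -- not the real fleet-mapping pipeline.

from collections import defaultdict

GRID_M = 5.0             # coarse grid cell size in metres (assumed)
PERSISTENT_PASSES = 10   # passes needed before a return is trusted as permanent (assumed)

class WhitelistMap:
    def __init__(self):
        self.pass_counts = defaultdict(int)

    @staticmethod
    def cell(x_m, y_m):
        return (round(x_m / GRID_M), round(y_m / GRID_M))

    def record_pass(self, stationary_returns):
        """Called each time a car drives past; stationary_returns is a list of (x, y)."""
        for x, y in stationary_returns:
            self.pass_counts[self.cell(x, y)] += 1

    def is_persistent(self, x_m, y_m):
        """Signs, gantries, overpasses end up here after enough confirming passes."""
        return self.pass_counts[self.cell(x_m, y_m)] >= PERSISTENT_PASSES

# Usage: a stationary return at a location NOT on the whitelist is treated as
# something new in the road (a stopped car, debris) and escalated; one on the
# whitelist is ignored as a known overpass or sign.
world = WhitelistMap()
for _ in range(12):                      # many cars pass the same overpass
    world.record_pass([(100.0, 0.0)])
print(world.is_persistent(100.0, 0.0))   # True  -> safe to ignore
print(world.is_persistent(250.0, 0.0))   # False -> treat as a real obstacle
```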
AP2 is a completely new system that has the potential to be much more. However, since it uses wholly new hardware/sensors and a new processing technique (deep neural nets), it needs to be developed from scratch. Tesla is now releasing a neural-net model with functionality close to AP1's. The system still has to learn to calibrate the cameras, map the world, etc. so it can eliminate false positives, similarly to AP1, and its functionality will be extended in periodic updates.
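As a toy illustration of the camera-calibration part (the real AP2 procedure isn't public, so this is purely an assumed approach), a system could estimate each camera's mounting offset from where lane lines appear to converge, averaged over many frames of straight driving:

```python
# A toy illustration of self-calibration while driving: estimate a camera's
# yaw/pitch offset from the average observed vanishing point of the lane
# lines. Resolution, focal length, and method are all assumptions for the
# sake of the sketch, not AP2's actual calibration routine.

IMAGE_W, IMAGE_H = 1280, 960   # assumed sensor resolution (pixels)
FOCAL_PX = 1000.0              # assumed focal length (pixels)

def estimate_yaw_pitch(vanishing_points):
    """Average observed vanishing points and convert the offset from the
    image centre into small yaw/pitch angles (radians, small-angle approx.)."""
    n = len(vanishing_points)
    vx = sum(p[0] for p in vanishing_points) / n
    vy = sum(p[1] for p in vanishing_points) / n
    yaw = (vx - IMAGE_W / 2) / FOCAL_PX    # camera pointed left/right of true forward
    pitch = (vy - IMAGE_H / 2) / FOCAL_PX  # camera tilted up/down
    return yaw, pitch

# Over hundreds of frames the scatter averages out and the mounting offset
# converges -- which is consistent with early AP2 cars needing some driving
# before features activated.
observations = [(652.0, 471.0), (648.5, 473.2), (651.1, 469.8), (649.7, 472.5)]
print(estimate_yaw_pitch(observations))
```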
The demo system shown in the video shows the *potential* and is *not* promised yet. Its internal model is likely incomplete and not yet validated, so it cannot be released yet. An analogy: consider AP2 to be a primer, written, edited, well vetted, and close to being ready to publish. The demo is a university-level textbook: much more detailed and complex, but still in the writing and editing phase, and not ready to be published. [[ Caveat: all analogies fail to convey the whole story. ]]