If I had to guess, Tesla wrote much of the "front end software" while Mobileye did the high-level object detection and vector/lane/path processing. I believe what Mobileye was doing was more difficult. For example, the EyeQ3 would report (overly simplified example) "right lane detected, curving left at 30 degrees," while Tesla wrote the software telling the car what to do in that situation. Or, Mobileye would report "vehicle detected 1 degree ahead, 200 ft away, moving at 45 mph," and the Tesla software would take that to (1) update the dash GUI, (2) adjust the speed if necessary, and (3) help position Autosteer more accurately. Tesla was working on making the car drive comfortably and safely using that data.
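To make that split concrete, here's a minimal sketch of what that hand-off might look like in code. Everything here is invented for illustration — the message fields, the function names, and the 150 ft follow-distance threshold are all my assumptions, not anything from Mobileye's or Tesla's actual interfaces:

```python
from dataclasses import dataclass

# Hypothetical perception output of the kind attributed to the EyeQ3 above:
# "vehicle detected 1 degree ahead, 200 ft away, moving at 45 mph".
@dataclass
class VehicleDetection:
    bearing_deg: float   # angle off the car's heading
    distance_ft: float   # range to the detected vehicle
    speed_mph: float     # the detected vehicle's speed

# Hypothetical Tesla-side planning logic: given a detection, decide whether
# to slow toward the lead car's speed. The 5-degree cone and 150 ft follow
# distance are made-up values for the example.
def plan_speed(ego_speed_mph: float, det: VehicleDetection,
               min_follow_ft: float = 150.0) -> float:
    roughly_ahead = abs(det.bearing_deg) < 5.0
    if roughly_ahead and det.distance_ft < min_follow_ft \
            and ego_speed_mph > det.speed_mph:
        return det.speed_mph  # match the slower lead car
    return ego_speed_mph      # otherwise hold current speed
```

So for the example above — a car 1 degree ahead, 200 ft away, at 45 mph — the planner keeps its current speed, since the lead car is still outside the follow distance. The point is just that the planning side is straightforward *once* it gets a clean, categorized detection; producing that detection from raw pixels is the hard part.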
So, I believe Tesla has a very good handle on what to do with the world it sees, when it knows what it sees, but it's now building the neural network to accurately turn the pixels from a camera (or 8 cameras) into categorized objects and vectors. I think we're seeing trouble in THIS part of the equation, since this part had been developed by Mobileye for years across multiple generations of hardware. Until recently, Tesla's experience with Autopilot (again, my speculation) was limited to deciding what to do and how the car should react when those objects/lanes/vectors were (or were not) detected by 3rd-party hardware and software.
To be clear, I'm sure they'll get there, but any hiccups are likely from this development process.