I noticed something a while ago about this subject which made me wonder whether it could be fixed with further tweaking, or indeed if/when all of the cameras talk to each other in a more harmonious manner (if that is what the neural net is planned to do).
When overtaking a vehicle - let's call it the 'target' - it can quite clearly be seen by the forward-facing cameras in the windscreen, and you can see it on the display, where the target vehicle (usually) tracks straight in its lane. But as you get a bit closer, within roughly a metre behind or starting to pass, the B-pillar camera starts to see the target vehicle more reliably, and I have repeatedly noticed on the display that the target vehicle appears to shift across its lane.
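To illustrate what I think might be happening (this is purely a sketch, not Tesla's actual code - the function, names and numbers are all my own assumptions): if the tracker simply switches its lateral-position source from the windscreen cameras to the B-pillar camera once the latter's detections become reliable, any calibration or perspective offset between the two cameras would show up as exactly the kind of sudden sideways "shift" I'm seeing on the display.

```python
# Hypothetical hard handover between cameras (all names/values are assumed)
def fused_lateral_offset(front_est, pillar_est, pillar_confidence, threshold=0.7):
    """Return the target's lateral offset in metres from lane centre.

    front_est / pillar_est: per-camera estimates of the same quantity.
    pillar_confidence: 0..1, how reliably the B-pillar camera sees the target.
    A hard switch like this would reproduce the jump seen on the display.
    """
    if pillar_confidence >= threshold:
        return pillar_est   # hand over to the B-pillar camera
    return front_est        # otherwise trust the forward cameras

# Example: if the two cameras disagree by ~0.4 m, then as the pillar camera
# takes over, the reported position jumps from 0.1 m to 0.5 m towards our lane.
for conf in (0.2, 0.5, 0.8):
    print(conf, fused_lateral_offset(front_est=0.1, pillar_est=0.5, pillar_confidence=conf))
```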
As the Tesla AP has been programmed first and foremost to avoid collisions, as soon as this new information is presented to the AP computer it will of course take steps to avoid the apparent conflict.
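Again as a rough sketch of the reasoning (my own illustration, with made-up names and thresholds, not anything from Tesla): a collision check that only looks at the latest estimate can't tell "the target really moved towards us" apart from "a different camera took over", so a one-frame sideways jump looks like a real closing movement and gets a defensive reaction.

```python
# Hypothetical collision-avoidance check (names and thresholds are assumptions)
def avoidance_action(lateral_gap_m, closing_rate_mps, min_gap_m=0.8):
    """Very crude check: brake if the predicted lateral gap gets too small."""
    predicted_gap = lateral_gap_m - max(closing_rate_mps, 0.0) * 1.0  # 1 s horizon
    if predicted_gap < min_gap_m:
        return "brake"      # AEB-style intervention
    return "continue"

# Before the camera handover: target sits 1.2 m away, tracking steady.
print(avoidance_action(lateral_gap_m=1.2, closing_rate_mps=0.0))   # continue

# The handover makes it *look* like the target jumped 0.4 m towards us in one
# frame, i.e. a large apparent closing rate, so the check calls for braking.
print(avoidance_action(lateral_gap_m=0.8, closing_rate_mps=0.4))   # brake
```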
This is probably why the slowing or braking effect (I've experienced a full-on erroneous activation of AEB when passing another vehicle) is not seen as often in rain or darker conditions, as the pillar camera doesn't have as clear a view.
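And to sketch the "more harmonious" behaviour I was hoping for at the start (again just my own illustration with assumed names and numbers): if the two estimates were blended by confidence instead of hard-switched, a low-confidence pillar camera in rain or darkness would barely influence the track, and in clear conditions the handover would be gradual rather than a step.

```python
# Hypothetical confidence-weighted blend instead of a hard switch (assumed names)
def blended_lateral_offset(front_est, pillar_est, pillar_confidence):
    """Blend the two per-camera lateral estimates by the pillar camera's confidence."""
    w = max(0.0, min(1.0, pillar_confidence))
    return (1.0 - w) * front_est + w * pillar_est

# As confidence ramps up, the reported offset moves smoothly, not in one jump.
for conf in (0.1, 0.4, 0.7, 1.0):
    print(conf, round(blended_lateral_offset(0.1, 0.5, conf), 2))
```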