I don’t have a hotline to the developers, but I am aware of the science behind automation and FSD.
The simple way to put it is that vehicle automation is a complex task: driving on a motorway, for example, is easier for an automated vehicle than navigating a parking lot. Automation isn’t like ordinary coding - it is largely machine learning, and for that you need real-world driving data; progress is hard before that data has been collected. What Tesla is doing is collecting this real-world driving data from all their cars (how a driver uses fog lights, the heater, the wipers, etc.) and processing that information over time to work out the best way to automate each behaviour. Machine learning is also more tractable when there are fewer variables to model, including driver inputs and the flexibility users expect.
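To make the idea concrete, here is a minimal toy sketch of learning a control policy from fleet behaviour. Everything in it is hypothetical - the field names (`rain_score`, `wipers_on`), the data, and the trivial threshold rule are illustrations of the concept, not Tesla’s actual pipeline, which would use far richer signals and real models:

```python
# Toy illustration: learn when to trigger the wipers from logged
# driver behaviour across a (hypothetical) fleet of cars.
from statistics import mean

# Each record: a camera-estimated rain score (0-1) and whether the
# driver actually turned the wipers on in that situation.
fleet_events = [
    {"rain_score": 0.9, "wipers_on": True},
    {"rain_score": 0.8, "wipers_on": True},
    {"rain_score": 0.2, "wipers_on": False},
    {"rain_score": 0.1, "wipers_on": False},
]

def learn_wiper_threshold(events):
    """Pick a rain-score threshold halfway between the typical score
    at which drivers did and did not use the wipers."""
    on = [e["rain_score"] for e in events if e["wipers_on"]]
    off = [e["rain_score"] for e in events if not e["wipers_on"]]
    return (mean(on) + mean(off)) / 2

def should_wipe(rain_score, threshold):
    # The "automated" decision, learned purely from driver behaviour.
    return rain_score > threshold

threshold = learn_wiper_threshold(fleet_events)
print(round(threshold, 2))  # roughly 0.5 for this toy data
```

The point of the sketch is the direction of the dependency: the decision rule comes out of the logged data, so the cars have to be instrumented and collecting first, before the automation software can be finished.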
The reason Tesla uses vision cameras rather than dedicated sensors for things like headlights and wipers is that the hardware has to be part of the system, collecting data, before the software development is complete. I have heard people here asking why they can’t just use tried-and-tested rain and light sensors instead of the currently unreliable vision cameras. Unfortunately they can’t: the cameras have to be in the cars gathering data now so that full automation can be implemented a few years down the line. Advances in software, unfortunately, remain the single greatest obstacle in vehicle automation.