What you're describing is basically lane keep assist with radar cruise control.
Following a white line is very easy; most sixth form students should be able to build a robot that can do this. However, that's not even close to 'full self driving'.
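To give a sense of just how simple that student-level line following is, here's a minimal sketch of a proportional steering loop, the sort of thing a basic line-follower robot runs. The sensor model, gain and step response are all made up for illustration:

```python
# Illustrative proportional line-follower loop (hypothetical values throughout).

def steering_correction(line_offset: float, gain: float = 0.8) -> float:
    """Steer back toward the line in proportion to how far off-centre it is.

    line_offset: -1.0 (line far left) .. 1.0 (line far right), 0.0 = centred.
    Returns a steering command in the same range, clamped.
    """
    command = -gain * line_offset  # steer opposite to the drift
    return max(-1.0, min(1.0, command))

def simulate(start_offset: float, steps: int = 20) -> float:
    """Crude simulation: each step, steering nudges the robot back on line."""
    offset = start_offset
    for _ in range(steps):
        offset += steering_correction(offset) * 0.5  # 0.5 = response per step
    return offset

# Starting well off to one side, the offset shrinks toward zero each step.
final_offset = simulate(0.9)
```

A dozen lines and one sensor reading is enough to stay on a painted line; nothing in that loop sees other vehicles, merges, or junctions, which is the point.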
If you look at what the Tesla AP software has been doing since 9.0, you quickly realise it's much, much more advanced than just sticking in lane.
'Tesla Vision' is improving with every update: the car can now consistently 'see' surrounding vehicles, cyclists and even pedestrians (though less reliably). The car is also starting to act on this info; after changing lanes to overtake, I've noticed the car wants to move back into the slower lane much sooner if there is another car behind.
The current software also responds well to merging traffic, and even when lane lines disappear at traffic lights or mini roundabouts it makes a good 'guess' at which way to go without crashing into stationary traffic. This is far more advanced than anything else I've seen.
The suggestion is that though Tesla Vision is coming along nicely even on the 2.0 CPU, it's already consistently taxing the processor to 80%. A move to the 3.0 CPU is 100% needed to actually act on the vision data the car now generates.
The next 6-12 months will be really exciting from the FSD point of view, which is partly why I was happy to order it in the recent 'sale'.
Legislation may handicap things though: with 'Nav on AP', our X would have done the M6 trip from Stoke to Kendal all by itself, including overtaking slower traffic and getting into lane for motorway junctions, if the EU/UN laws hadn't been implemented.