I'm a bit less pessimistic than you, though I agree about the over-confidence factor. So far, my experience with FSD has been that it's very timid... it tends to mostly drive at safe speeds regardless of the set limit (it did it on the way home today, when it held at 29 even though the limit was 35 because it was unhappy about weather). Also, if I'm honest, a lot of the time when I take over because (say) the car appears to be cutting things fine, in fact the car was just cutting it closer than a human (me) would be comfy with, but it's no big deal to the car. As FSD Beta gets better it absolutely will get more dangerous before it gets less dangerous. The usage will go up, and trust in it will go up. But there will still be a lot of opportunities for mistakes to happen.
One thing I've seen time and time again with FSD Beta is that my safety threshold is one thing, and the car's is different.
Too close to the curb = disengagement, but the teenager (the car's computer) is probably telling me "it was fine".
Too fast in a residential area = disengagement, but the teenager is telling me "you told me to go the speed limit" and has no understanding that I want to do 5 under in tight residential areas.
Too slow during a maneuver = takeover, but the elder (the car's computer) is telling me to have patience for its old bones.
But yes, I agree that complacency by the driver is dangerous (it's already happened with plain old TACC/NoA). Perhaps Tesla should program in a random "TAKE OVER" panic event even when things are safe, to make sure the driver is paying attention (not really, such things might actually cause accidents). My guess is the eye monitoring should take care of most of that, though perhaps I'm being over-optimistic.