But Tesla the manufacturer would still be liable for defects in the system (i.e. any at fault collision). There is no way for Tesla to limit their liability unless there are laws passed to cap damages involving autonomous vehicles.
Disagree because this all depends on the contracts involved.
What I was trying to explain in my earlier post is that society nowadays sees the use of motor vehicles as a risk that has to be insured against.
Whether the vehicle is driven by a human or by software does not change that. The main question is: which insurance company will be brave/dumb enough to insure a vehicle driven by software? Tesla Insurance, that's who (at first). And they'll only start doing this once they know damn well that the at-fault accident rate of FSD software is many times lower than the accident rate of humans.
When an autonomous vehicle has a crash "at fault" (according to traffic laws), that does not automatically mean the autonomous software stinks. In some instances both parties are doing something risky but only one is considered at fault; that is simply the risk of entering traffic.
(Example: I'm driving behind a vehicle on a highway. The lead vehicle swerves around a piece of debris. Due to sun glare I don't notice the debris in time, so I hit it and puncture a tire, which spins me out of control and causes a crash involving others. I'll be labeled "at fault" according to traffic laws (I couldn't avoid road debris that another driver did avoid), but many humans, and quite possibly FSD, would've made the same mistake. If FSD failed in this instance the way a human sometimes would, that doesn't mean the FSD software is broken/defective.)
Taking part in traffic is a risk. Always. And that risk comes from propelling heavy blocks of steel (cars) at high speeds, a risk that is forced upon other drivers and vulnerable road users (VRUs). That's why countries made laws making liability insurance for motor vehicles mandatory. (Not for VRUs; they are generally not seen as the cause of high-damage accidents.)
To your point: if FSD is "faulty" (in other words, the software is buggy), then I just don't think we're at a stage where autonomous driving without supervision will be insured. So it's kind of a moot point. The defects in the software will either be gone before autonomy is here, or they'll be so rare that the insurance companies simply cover those edge cases.
(When I say "autonomy is here" I mean that you are legally ALLOWED to drive without supervision.)
In the case of Mercedes claiming they're responsible if their L3 software fails, this is a voluntary commercial claim they make in order to attract customers. Legally, those Mercedes vehicles will still have their liability insured by a third party (an insurance company), and that insurance company can then TRY to recoup its payouts by suing Mercedes if the accident happened due to the L3 software. But that will be a very steep uphill legal battle, since Mercedes has thrown in enough caveats (no sun glare, must have a lead vehicle within x meters, a maximum speed, etc.).
I don't think a trend will develop in which manufacturers contractually guarantee you won't have "at fault" crashes while using their software. That is an impossible thing to promise. Sure, they can cover any damages, but that won't happen with full autonomy. In the case of Mercedes it's easy to make a claim like that, since the travel speeds are so slow that any claims will always be small. If you're using Mercedes L3 and another car hits you (the other driver is at fault), then Mercedes does not cover jack sugar. They only pay out when their autonomous software fails to stop for something under very specific conditions. That's quite a hollow guarantee. Tesla could do the same but wisely chooses not to.