In this case Tesla is probably insuring the Robotaxi itself, and will only offer the service in locations where it can obtain or provide insurance cover.
The criterion for a Robotaxi is "safer than a human", and in some cases the FSD software might already be "considerably safer than a human" before it gets regulatory approval; any accident might trigger a temporary suspension of the FSD service.
In any court case, "safer than a human" will be part of Tesla's legal defence, as will all of the circumstances surrounding the accident, including what Tesla has done to prevent a recurrence.
In addition, Tesla cars have good crash safety, and in most circumstances FSD can predict an impending accident, slow down, and perhaps pre-deploy the airbags.
No one is saying accidents will not occur; nothing in life is risk free. FSD only makes financial sense when the rate of accidents is very low, and serious injury or death from an accident is an extremely rare event.
IMO what most of the debate misses is that the average human driver is, by definition, average: even the best human drivers sometimes make mistakes, and the worst are no better than "Russian roulette". A further factor is that different human drivers make different decisions in the same situation; they can do sudden things that are hard to predict in advance, and so rarely that they happen when you least expect them.
As well as being safer, FSD needs to be predictable, which includes correct use of the indicators at all times. Humans sometimes signal with the indicator and then change their mind.