Absent specific legislation, megacorp liability for an accident is ~1000x as much as individual liability. By extension they'll need an at-fault accident rate 1000x better than humans.
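A rough back-of-envelope version of that arithmetic (every number below is an illustrative assumption, not data):

```python
# Back-of-envelope sketch of the liability argument above.
# Every number here is an illustrative assumption, not real actuarial data.

human_at_fault_rate = 1 / 500_000              # assumed at-fault accidents per mile, human driver
individual_payout   = 50_000                   # assumed average payout when an individual is at fault ($)
corporate_payout    = individual_payout * 1000 # the ~1000x "deep pockets" multiplier claimed above

# Expected liability cost per mile for the human driver:
human_cost_per_mile = human_at_fault_rate * individual_payout

# For the corporation's expected cost per mile to merely match the human's,
# its at-fault rate has to shrink by the same factor the payout grew:
required_corporate_rate = human_cost_per_mile / corporate_payout

print(f"human expected cost/mile: ${human_cost_per_mile:.2f}")
print(f"required corporate rate:  1 per {1 / required_corporate_rate:,.0f} miles "
      f"({human_at_fault_rate / required_corporate_rate:.0f}x better than human)")
```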
Exactly - that's why I don't think Tesla will ever get to L3 and take liability - unless they are compelled to by regulations or competition. People who continuously demand L3 on freeways (apparently instead of city FSD Beta) will not see their wish come true for a long time ...
 
Unfortunately the tort system in the U.S. is grossly dysfunctional, as is the jury system, so I fear you may be correct.

It still raises the question, though - what is Tesla's end game? Elon's professed goal is a fleet of robotaxis. If that's the goal then they will absolutely need to address the liability question. That being the case, they can just as easily (or not easily) address it for L3 on highways as they can for a robotaxi. The difference is that L3 on highways is eminently achievable and would realize some near-term financial and PR returns for Tesla.

It may well be that they are lobbying a state legislature (like TX) behind the scenes and are waiting to roll it out until they have some legal protections.
 
Tesla says their goal is 10x better than humans. NOA is not at that level - not even at human level.

Either way the goal is to be competitive - a step or two ahead of competitors. I believe they will continue on that path.

If someone is foolish enough to jump ahead and offer L3 on freeways without testing for millions of miles, let them. They will provide some valuable info on what happens when there are accidents.
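For a sense of what "testing for millions of miles" means statistically, here's a minimal sketch using the standard rule-of-three bound; the human baseline rate is an assumed illustrative figure, not official data:

```python
import math

# Assumed human baseline: 1 at-fault accident per 500,000 miles (illustrative, not official data).
human_rate = 1 / 500_000

def miles_needed_for_claim(target_rate: float, confidence: float = 0.95) -> float:
    """Accident-free miles needed so the upper confidence bound on the true rate
    falls below target_rate ("rule of three": roughly 3 / N at 95%)."""
    return -math.log(1 - confidence) / target_rate

for multiple in (1, 10, 1000):
    target = human_rate / multiple
    print(f"{multiple:>5}x better than human -> "
          f"{miles_needed_for_claim(target):,.0f} accident-free miles")
```

With that assumed baseline, you need roughly 1.5 million accident-free miles just to claim parity at 95% confidence, about 15 million to claim 10x better, and about 1.5 billion to claim 1000x better.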
 
Since there is so much debate and argument in this thread, I thought it would be helpful to educate people on common logical fallacies, several of which are used in these threads. Keep an eye out for them and use critical thinking when reading both sides of a debate.

Straw Man

The Straw Man fallacy is an informal fallacy. This fallacy occurs when someone misrepresents the position of their opponent. This is done by replacing their position with a different position (a straw man), and then attacking that different position (attacking the straw man). Changing the opponent's argument is called a Straw Man because a man made of straw is obviously weaker and easier to defeat.

This fallacy sets up a false version of the opponent’s argument, and then works at knocking that down.

Meanwhile, the actual argument of the opponent hasn’t been addressed at all. Arguments cannot be conducted under these fallacious conditions because the content of the argument itself isn’t actually being addressed or contended with.

Example:

Mary says “This is the best Thai food restaurant in the city.” John responds with “You think this is the best restaurant in the city?”

How to avoid the Straw Man Fallacy:

Make sure that you understand your opponent's position clearly. Restate it to the opponent and ask if what you stated is an accurate representation of their argument's position. This will also prevent them from changing their position later on.

Begging the Question

Begging the question is an informal fallacy. This occurs when someone re-states or reaffirms the premise (or premises) as the conclusion (without any further explanation or information).

The problem with this fallacy is that it never progresses the argument past the premise.

The premises are simply reasserted as the conclusion. Or, the conclusion is put into the premises, and then reasserted as the conclusion.

The premise of an argument has to be different in content and meaning from the conclusion. And the conclusion has to be separate in content and meaning from the premise(s), albeit related through logical coherence.

Example:

Mary says “John always tells the truth.” Bob asks “How do you know?” Mary responds “Because John says that he always tells the truth.” Of course John’s honesty is what’s in question, and John speaking on his own behalf begs the question. This fallacy is circular because the conclusion is really just the premise restated.

Ad Hominem

Ad hominem is an informal fallacy. Someone uses the Ad Hominem fallacy when they’re attacking the person and not their argument. One manifestation of this fallacy is saying that the identity of a person disqualifies them from making or engaging in the argument itself. It’s attacking a person, such as their identity or character, instead of attacking their actual position in an argument.

Example:

Cliff cannot be correct when he says that squares have right angles because he is a bad person and has been known to steal ideas and take credit for them himself. The question of whether squares have right angles or not has been left untouched by this fallacy.

You can see this playing out in the political sphere in modern American politics.

How to avoid the Ad Hominem fallacy:

Make sure that you’re not attacking the person and you’re actually contending with the content of their argument. Leave out any personal biases or irrelevant personal characteristics of the opponent that have nothing to do with the content of the argument. A person can be a bad person in any number of ways and still be logically correct in any given instance.

Post Hoc “post hoc ergo propter hoc” (after this, therefore because of this)

The Post Hoc fallacy is an informal fallacy. This fallacy occurs when someone assumes causality from an order of events. Claiming that since B always happens after A, then A must cause B, is the fallacious reasoning. Order of events doesn’t necessarily mean causation.

Actual causation remains unexplained if you attend only to the sequence or order of events. The actual causal mechanism behind the sequence needs to be understood before causation claims can be made.

Example:

Incidents of burglars breaking into cars rises whenever the sun is shining, and declines when it’s raining outside. Therefore, sunny days cause crime.

How to avoid the Post Hoc Fallacy:

The best way to avoid this is to think about whether you actually understand the causal agent or causal story, and to make sure that you're not inferring causation from the order of events. If you realize that you don't know the cause of the phenomenon, it's best to suspend judgment until the cause is known.

Loaded Question Fallacy

The Loaded Question fallacy is an informal fallacy. This fallacy occurs whenever a person asks a question that has their desired outcome built into it, working against the position of the person answering the question.

Example:

The classic example of a Loaded Question is “Are you still beating your wife?” Whether the person answers yes or no, the person is framed as a wife beater, whether they are or not.

This is also a tactic often used by lawyers when they're leading the witness, asking questions to guide the witness to certain conclusions that the lawyer is trying to attain.

How to avoid the Loaded Question fallacy:

This should be easy to avoid since it is usually done intentionally.

False Dichotomy (False Dilemma, Either/Or)

A False Dichotomy is an informal fallacy. This occurs when the arguer is presenting only two possible options or outcomes to a position, when in reality there are more options.

It's done to narrow the opponent's position to only two possible outcomes. It's an argument tactic designed to steer the discussion toward narrowed and specific options.

Example:

Mom tells her child “Do you want to go to sleep now or in 5 minutes?” The false dilemma is that there are more options than now or in 5 minutes, such as going to bed in 10 minutes. Most kids pick up on this tactic used by parents when they’re still in toddlerhood.

How to avoid the False Dilemma fallacy:

Think about whether the options you’re considering do indeed exhaust all of the possibilities, or if there are other legitimate possibilities to consider as well. Think about alternatives before the list of possibilities is narrowed to only two or one.

Appeal to Authority (ad verecundiam)

Appeal to authority is an informal fallacy. Making an appeal to an authority in an argument doesn’t make the argument correct. An appeal to authority can be correct, or incorrect, depending on the substance of the claim that’s at issue.

There are experts (authorities) on opposing sides of court cases. They can both be right in certain domains, or within the same domain one can be more correct than the other. Being an expert on a given topic doesn’t mean that anything that the expert claims is therefore correct.

Example:

Mary says “The earth is flat.” Bob says “How do you know that?” Mary says “Because my geology teacher told me.” It’s doubtful that a geology teacher would actually teach this but it illustrates the fallacy.

How to avoid the Appeal to Authority fallacy:

Don’t appeal to any authority as the basis for the legitimacy of your claim.

Hasty Generalization

Hasty Generalization is an informal fallacy. It means making a claim about something without sufficient or unbiased evidence for the claim. If the evidence did support the claim, then it wouldn't be called a hasty generalization; it would just be a generalization. The "hasty" description means that the generalization was made too quickly and without evidence.

This is a tricky one because there is no agreed-upon threshold of what constitutes a sufficient number of examples or sample size to be considered legitimate evidence in any given case. Is it more than 50%? However, it can usually be more easily determined what constitutes biased or unbiased evidence.

Example:

John says “You’re a musician, so therefore you must not have stage fright.”

How to avoid the Hasty Generalization fallacy:

Consider what the evidence is, how large the sample size is, and whether they're sufficient to be representative of the whole before making the claim or statement.

Appeal to Popular Opinion (Argumentum ad populum)

Appeal to popular opinion is an informal fallacy. This fallacy occurs when someone is making an argument that a position is true because a great number (or the majority) of people hold to that position. The fallacy here is that the majority may be factually wrong as a result of being misled or having partial information and drawing wrong conclusions.

We’ve seen this in history, in which the majority of people have been misled by their media or by their government or by wrong scientific or philosophical assumptions.

Example:

Medieval John says “The sun revolves around the earth, and the earth is fixed in place.” Medieval Mary says “How do you know that the sun revolves around a fixed earth?” To which Medieval John replies “Don’t you know that everyone believes that the earth is fixed in place, around which the sun revolves? It’s common knowledge.”

How to avoid the Appeal to Popular Opinion fallacy:

Consider the merits of the statements on their own grounds without recourse to what others think about it.

(source: The Top 10 Logical Fallacies | Fallacy List with Examples)
You forgot the most important one. BS
 
Tesla says their goal is 10x better than humans. NOA is not at that level - not even at human level.
Are you sure?

I bet there’s a way to spin it to say the average number of accidents when using NOA is already 10 times lower than the average human.

I’m already seeing rationalizations saying the Beta is infinitely safer than humans since there haven’t been any reported accidents involving injury among the 100k Beta testers. Which conveniently leaves out that not everyone in the 100k uses the Beta much if at all, and those that do mostly watch it like a hawk.

You can cut it by miles driven, time, number of people who have access to a feature, reported injuries, fatal accidents, etc.
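To make the "spin" point concrete, here's a toy sketch - every number in it is invented for illustration - showing how the same hypothetical fleet produces very different-sounding figures depending on the denominator, and how little "zero reported injuries" actually proves:

```python
import math

# Invented fleet numbers, purely to illustrate how the denominator changes the story.
beta_testers     = 100_000
active_share     = 0.30    # assume only 30% actually use the Beta regularly
miles_per_active = 200     # assumed Beta miles driven per active tester so far
injury_accidents = 0       # "no reported injury accidents"

beta_miles = beta_testers * active_share * miles_per_active

# Spin 1: "zero injuries across 100,000 testers" -- sounds infinitely safe.
print(f"injuries per tester: {injury_accidents / beta_testers}")

# Spin 2: what zero observations actually support -- a 95% upper bound on the
# injury-accident rate ("rule of three"), given the assumed exposure above.
upper_bound = -math.log(0.05) / beta_miles
print(f"assumed Beta miles: {beta_miles:,.0f}")
print(f"95% upper bound:    up to 1 injury accident per {1 / upper_bound:,.0f} miles")

# Spin 3: compare against an assumed human baseline of 1 injury accident per
# 2,000,000 miles. With these inputs the bound is about 1x the human rate,
# i.e. the data can't distinguish the Beta from an ordinary human driver.
human_injury_rate = 1 / 2_000_000
print(f"upper bound vs. human baseline: {upper_bound / human_injury_rate:.1f}x the human rate")
```

With those made-up exposure numbers, zero reported injuries only rules out being worse than roughly 1x the assumed human baseline; it comes nowhere near demonstrating 10x better.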
 
It doesn't matter. As a practical matter, what will happen after the first injury is that a zillion lawyers will swarm the person who was injured, then pay Dan O'Dowd $$$ and he'll get up on the stand and spout some nonsense about how Tesla knew the accident would happen and didn't care. A jury of 12 people, half of whom barely graduated high school, will then be charged with interpreting graduate-level statistics, look at some guy who got 2 broken legs when he fell in front of a Tesla in a drunken stupor, then look at the big, nasty corporation that made the car and promptly award the guy $50M.
 
Of course it would not be accurate to call it safer than human driving without also including how often one has to disengage NoA because it doesn't do something right.
 
Spin works fine when you’re trying to build hype and pump up the stock. But in the real world with Tesla taking over liability, that doesn’t hold up.

A better way to think of it is that every disengagement is a potential Autopilot / FSD-induced accident that was only avoided because of an attentive driver. Take away that driver, and it'll likely be an utter disaster.
 
I don’t see Tesla ever taking over liability. They’ll say it’s equivalent to L5 but is legally an L2 system cause of those damn regulators and their silly requirement that Tesla be liable for accidents.

Which would conveniently leave out that that’s the whole point of L5. I don’t care if something’s practically L5. If I’m liable for accidents and potentially someone’s death, then I have to pay attention which makes it L2—maybe a really good L2, but still L2.

Tesla would easily be able to convince a jury that we can’t know what a disengagement means. It might mean the driver just changed their mind on where they wanted to go.
 
What about us FSD beta testers? Are we exposed to any additional liabilities? Anything not covered by our insurance policies?
 
FSDb is Level 2, meaning you are required to be paying attention and in control of the vehicle at all times. You are also liable for any accidents.

Legally, it shouldn’t be any different from using cruise control, even though it does more.
 
Look at the crowd-sourced disengagement rates for freeways showing about 1 in 100 miles. Human level is probably 1 accident in 100,000 miles for freeways.

Though they are not directly comparable - you get the idea.
A significant number (all?) of my disengagements of NoA are not for safety-related issues; rather, they're for driving-preference issues. I can't be sure, but I don't recall any actual safety issues with NoA beyond things like not getting out of a passing lane.
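Taking the quoted crowd-sourced figures at face value, and folding in the point above that most disengagements aren't safety-related, here's a quick sketch of the gap they imply (the 1% safety-critical fraction is a made-up illustration):

```python
# Quick comparison of the quoted rates, plus the adjustment argued for above.
# The rates are the rough figures quoted in this thread; the 1% fraction is made up.

noa_disengagement_rate = 1 / 100      # crowd-sourced: ~1 disengagement per 100 freeway miles
human_accident_rate    = 1 / 100_000  # rough guess above: ~1 accident per 100,000 freeway miles

raw_gap = noa_disengagement_rate / human_accident_rate
print(f"raw gap: disengagements are {raw_gap:,.0f}x more frequent than human accidents")

# If only 1 in 100 disengagements would actually have become an accident
# without an attentive driver, the implied gap is still large:
safety_critical_fraction = 0.01
adjusted_gap = raw_gap * safety_critical_fraction
print(f"adjusted gap (assuming {safety_critical_fraction:.0%} are safety-critical): "
      f"{adjusted_gap:,.0f}x")
```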
 
Exactly - that's why I don't think Tesla will ever get to L3 and take liability - unless they are compelled to by regulations or competition.

They'll eventually be compelled to address the folks they sold FSD to prior to ~March 2019 - to whom they explicitly sold a system that is at minimum L4.

I don’t see Tesla ever taking over liability. They’ll say it’s equivalent to L5 but is legally an L2 system cause of those damn regulators and their silly requirement that Tesla be liable for accidents.

Except L5 is already legal right now in half a dozen US states.

There's no "but regulators" excuse that actually exists there.




BTW - first drive on 10.69.1.1 was worse than 10.12 in the two spots where I noticed any difference at all... maybe .2 will fix it.