ItsNotAboutTheMoney
Since there is so much debate and argument in this thread, ...
... I'm going to add no content.
(Ironic, huh)
Exactly - that's why I don't think Tesla will ever get to L3 and take liability - unless they are compelled to by regulation or competition. People who continuously demand L3 on freeways (apparently instead of city FSD Beta) will not see their wish come true for a long time ...
Absent specific legislation, megacorp liability for an accident is ~1000x as much as individual liability. By extension they'll need an at-fault accident rate 1000x better than humans.
Hopefully not that long:
If all of us stuck on 10.12.2 are going to be here until the end of the month, at least give us a 10.12.2.1 update that just fixes the dashcam bug.
It's Elon time, so I'm always going to add 2-4 weeks
Unfortunately the tort system in the U.S. is grossly dysfunctional, as is the jury system, so I fear you may be correct.
Tesla says their goal is 10x better than humans. NOA is not at that level - not even at human level.
It still begs the question, though - what is Tesla's end game? Elon's professed goal is a fleet of robotaxis. If that's the goal then they will absolutely need to address the liability question. That being the case, they can just as easily (or not easily) address it for L3 on highways as they can for a robotaxi. The difference is that L3 on highways is eminently achievable and would realize some near-term financial and PR returns for Tesla.
It may well be that they are lobbying a state legislature (like TX) behind the scenes and are waiting to roll it out until they have some legal protections.
You forgot the most important one. BS
Since there is so much debate and argument in this thread, I thought it helpful to educate people on common logical fallacies, several of which are used in these threads. Keep an eye out for them and use critical thinking when reading both sides of a debate.
Straw Man
The Straw Man fallacy is an informal fallacy. This fallacy occurs when someone misrepresents the position of their opponent. This is done by replacing their position with a different position (a straw man), and then attacking that different position (attacking the straw man). Changing the opponent's argument is called a Straw Man because a man made of straw is obviously weaker and easier to defeat.
This fallacy sets up a false version of the opponent’s argument, and then works at knocking that down.
Meanwhile, the actual argument of the opponent hasn’t been addressed at all. Arguments cannot be conducted under these fallacious conditions because the content of the argument itself isn’t actually being addressed or contended with.
Example:
Mary says “This is the best Thai food restaurant in the city.” John responds with “You think this is the best restaurant in the city?” John has quietly dropped “Thai food,” turning a claim about Thai restaurants into a claim about all restaurants - a weaker position that is easier to attack.
How to avoid the Straw Man Fallacy:
Make sure that you understand your opponent's position clearly. Restate it to the opponent and ask if what you stated is an accurate representation of their position. This will also prevent them from changing their position later on.
Begging the Question
Begging the question is an informal fallacy. This occurs when someone re-states or reaffirms the premise (or premises) as the conclusion (without any further explanation or information).
The problem with this fallacy is that it never progresses the argument past the premise.
The premises are simply reasserted as the conclusion. Or, the conclusion is put into the premises, and then reasserted as the conclusion.
The premise of an argument has to be different in content and meaning from the conclusion. And the conclusion has to be separate in content and meaning from the premise(s), albeit related through logical coherence.
Example:
Mary says “John always tells the truth.” Bob asks “How do you know?” Mary responds “Because John says that he always tells the truth.” Of course John’s honesty is what’s in question, and John speaking on his own behalf begs the question. This fallacy is circular because the conclusion is really just the premise restated.
Ad Hominem
Ad hominem is an informal fallacy. Someone uses the Ad Hominem fallacy when they’re attacking the person and not their argument. One manifestation of this fallacy is saying that the identity of a person disqualifies them from making or engaging in the argument itself. It’s attacking a person, such as their identity or character, instead of attacking their actual position in an argument.
Example:
Cliff cannot be correct when he says that squares have right angles, because he is a bad person and has been known to steal ideas and take credit for them himself. Whether squares have right angles has been left untouched by this fallacy.
You can see this playing out in the political sphere in modern American politics.
How to avoid the Ad Hominem fallacy:
Make sure that you’re not attacking the person and you’re actually contending with the content of their argument. Leave out any personal biases or irrelevant personal characteristics of the opponent that have nothing to do with the content of the argument. A person can be a bad person in any number of ways and still be logically correct in any given instance.
Post Hoc “post hoc ergo propter hoc” (after this, therefore because of this)
The Post Hoc fallacy is an informal fallacy. This fallacy occurs when someone assumes causality from an order of events. Claiming that since B always happens after A, then A must cause B, is the fallacious reasoning. Order of events doesn’t necessarily mean causation.
Merely attending to a sequence or order of events leaves the actual causation unexplained. A causal mechanism has to be identified before claims of causation can be made.
Example:
Incidents of burglars breaking into cars rise whenever the sun is shining, and decline when it's raining outside. Therefore, sunny days cause crime.
How to avoid the Post Hoc Fallacy:
The best way to avoid this is to think about whether you actually understand the causal agent or causal story, and to make sure you're not inferring causation from the order of events. If you realize that you don't know the cause of the phenomenon, it's best to suspend judgment until the cause is known.
Loaded Question Fallacy
The Loaded Question fallacy is an informal fallacy. This fallacy occurs whenever a person asks a question that has the asker's desired conclusion built into it, putting the person answering the question at a disadvantage.
Example:
The classic example of a Loaded Question is “Are you still beating your wife?” Whether the person answers yes or no, they are framed as a wife beater, whether they are one or not.
This is also a tactic often used by lawyers when they're leading the witness, asking questions designed to guide the witness toward the conclusions the lawyer is trying to establish.
How to avoid the Loaded Question fallacy:
This should be easy to avoid since it is usually done intentionally.
False Dichotomy (False Dilemma, Either/Or)
A False Dichotomy is an informal fallacy. This occurs when the arguer is presenting only two possible options or outcomes to a position, when in reality there are more options.
It's done to narrow the opponent's position to only two possible outcomes. It's an argument tactic designed to lead the opponent toward narrowed and specific options.
Example:
Mom tells her child “Do you want to go to sleep now or in 5 minutes?” The false dilemma is that there are more options than now or in 5 minutes, such as going to bed in 10 minutes. Most kids pick up on this tactic used by parents when they’re still in toddlerhood.
How to avoid the False Dilemma fallacy:
Think about whether the options you’re considering do indeed exhaust all of the possibilities, or if there are other legitimate possibilities to consider as well. Think about alternatives before the list of possibilities is narrowed to only two or one.
Appeal to Authority (ad verecundiam)
Appeal to authority is an informal fallacy. Making an appeal to an authority in an argument doesn’t make the argument correct. An appeal to authority can be correct, or incorrect, depending on the substance of the claim that’s at issue.
There are experts (authorities) on opposing sides of court cases. They can both be right in certain domains, or within the same domain one can be more correct than the other. Being an expert on a given topic doesn’t mean that anything that the expert claims is therefore correct.
Example:
Mary says “The earth is flat.” Bob says “How do you know that?” Mary says “Because my geology teacher told me.” It’s doubtful that a geology teacher would actually teach this but it illustrates the fallacy.
How to avoid the Appeal to Authority fallacy:
Don’t appeal to any authority as the basis for the legitimacy of your claim.
Hasty Generalization
Hasty Generalization is an informal fallacy. It occurs when someone makes a claim without sufficient or unbiased evidence to support it. If the evidence did support the claim, it wouldn't be called a hasty generalization, it would just be a generalization. "Hasty" means that the generalization was made too quickly and without evidence.
This is a tricky one because there is no agreed-upon threshold for what constitutes a sufficient number of examples or sample size to be considered legitimate evidence in any given case. Is it more than 50%? However, it can usually be determined more easily what counts as biased or unbiased evidence.
Example:
John says “You’re a musician, so therefore you must not have stage fright.”
How to avoid the Hasty Generalization fallacy:
Consider what the evidence is, and how large the sample size is, and whether they’re sufficient to be representative of the whole before making the claim or statement.
Appeal to Popular Opinion (Argumentum ad populum)
Appeal to popular opinion is an informal fallacy. This fallacy occurs when someone is making an argument that a position is true because a great number (or the majority) of people hold to that position. The fallacy here is that the majority may be factually wrong as a result of being misled or having partial information and drawing wrong conclusions.
We’ve seen this in history, in which the majority of people have been misled by their media or by their government or by wrong scientific or philosophical assumptions.
Example:
Medieval John says “The sun revolves around the earth, and the earth is fixed in place.” Medieval Mary says “How do you know that the sun revolves around a fixed earth?” To which Medieval John replies “Don’t you know that everyone believes that the earth is fixed in place, around which the sun revolves? It’s common knowledge.”
How to avoid the Appeal to Popular Opinion fallacy:
Consider the merits of the statements on their own grounds without recourse to what others think about it.
(source: The Top 10 Logical Fallacies | Fallacy List with Examples)
Are you sure?
It doesn't matter. As a practical matter, what will happen after the first injury is that a zillion lawyers will swarm the person who was injured, then pay Dan O'Dowd $$$ and he'll get up on the stand and spout some nonsense about how Tesla knew the accident would happen and didn't care. A jury of 12 people, half of whom barely graduated high school, will then be charged with interpreting graduate-level statistics, look at some guy who got 2 broken legs when he fell in front of a Tesla while in a drunken stupor, then look at the big, nasty corporation who made the car, and promptly award the guy $50M.
I bet there’s a way to spin it to say the average number of accidents when using NOA is already 10 times lower than the average human.
I’m already seeing rationalizations saying the Beta is infinitely safer than humans since there haven’t been any reported accidents involving injury among the 100k Beta testers. Which conveniently leaves out that not everyone in the 100k uses the Beta much if at all, and those that do mostly watch it like a hawk.
You can cut it by miles driven, time, number of people who have access to a feature, reported injuries, fatal accidents, etc.
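The point about "cutting it" different ways can be made concrete with a quick back-of-envelope sketch. All of the numbers below are hypothetical, made up purely to illustrate how the same crash count yields very different headline rates depending on the denominator chosen:

```python
# Hypothetical figures for illustration only - NOT real Tesla/NHTSA data.
# The same crash count looks very different depending on the denominator.

crashes = 5                    # assumed reported injury crashes
beta_testers = 100_000         # people with Beta access
active_testers = 20_000        # assumed fraction who actually use it much
miles_on_beta = 2_000_000      # assumed miles driven with Beta engaged

# Per nominal "tester" - the most flattering denominator
per_tester = crashes / beta_testers

# Per tester who actually uses the feature - 5x worse with the same data
per_active = crashes / active_testers

# Per million miles - the denominator safety statistics normally use
per_million_miles = crashes / (miles_on_beta / 1_000_000)

print(per_tester, per_active, per_million_miles)
```

Same five crashes, three different "safety" stories - which is exactly why any claimed multiple over human drivers is meaningless without knowing the denominator and how attentive-driver interventions were counted.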
Of course it would not be accurate to call it safer than human driving without also including how often one has to disengage NoA because it doesn't do something right.
Spin works fine when you’re trying to build hype and pump up the stock. But in the real world with Tesla taking over liability, that doesn’t hold up.
I don’t see Tesla ever taking over liability. They’ll say it’s equivalent to L5 but is legally an L2 system because of those damn regulators and their silly requirement that Tesla be liable for accidents.
A better way to think of it is that every disengagement is a potential autopilot / fsd induced accident that was only avoided because of an attentive driver. Take away that driver, and it’ll likely be an utter disaster.
What about us FSD beta testers? Are we exposed to any additional liabilities? Anything not covered by our insurance policies?
Look at the crowd-sourced disengagement rates for freeways showing 1 in about 100 miles. Human level is probably 1 accident in 100,000 miles for freeways.
FSDb is Level 2, meaning you are required to be paying attention and in control of the vehicle at all times. You are also liable for any accidents.
A significant number (all?) of my disengagements for NoA are not for safety-related issues; rather, they're for driving-preference issues. I can't be sure, but I don't recall any actual safety issues with NoA beyond things like not getting out of a passing lane.
Though, they are not directly comparable - you get the idea.
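The rough gap implied by those two numbers can be sketched in a couple of lines. Both figures are the estimates quoted above (crowd-sourced disengagement data and a ballpark human rate), not measured results, and the pessimistic assumption that every disengagement is a potential accident is exactly that - an assumption:

```python
# Back-of-envelope using the estimates quoted above - both are rough
# assumptions, not measured figures.

miles_per_disengagement = 100        # crowd-sourced NoA freeway estimate
miles_per_human_accident = 100_000   # ballpark human freeway accident rate

# Worst case: treat every disengagement as a potential accident.
# The gap to merely human-level performance:
gap = miles_per_human_accident / miles_per_disengagement
print(gap)  # 1000.0

# Tesla's stated goal of "10x better than humans" would need another 10x
# on top of that, i.e. ~1,000,000 miles per incident.
```

Of course most disengagements are preference-driven rather than safety-critical, so the real gap is smaller - but the arithmetic shows why "1000x better than humans" keeps coming up in the liability discussion.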
You think Tesla or your insurance company will assume liability for FSD? Think again. You are on your own.