Welcome to Tesla Motors Club

When can we read a book?

If Tesla truly delivers full self-driving that is ten times as safe as a human on AP2 sensors (plus the HW3 computer), and regulators insist a driver remain in the seat, I would see that as off the hook. I would also personally accept that as FSD delivered in my car.

There's a contradiction in your scenario: if Tesla's FSD in, let's say, Jan 2021 were really 10x better than the average human driver, then it would surely also pass any regulatory test with flying colours, and hence already be licensed for driverless operation.

The only logical reason regulators would refuse to allow removal of the human overseer/crutch is if the system performed well in general, but failed miserably in testing on those infrequently-occurring edge/corner cases Tesla cannot be bothered to cover, such as my Firetruck Super-Destruction test under current AP [which, for clarity, an average human driver will pass in 10/10 randomly-timed tries, but current AP fails on every occasion].

i.e. Tesla would have continued along the same slipshod trajectory from current AP development to produce a system which superficially covers most of the functionality required to qualify as L5 FSD, but which remains fundamentally unsafe while performing a mundane driving task [not piling into obstacles at high speed] in the instance which pops up once every 10,000 miles [a leading vehicle moving aside at just the wrong moment on the motorway], thus lulling users into an even more treacherous false sense of security than at present, due to the system's wider versatility.

Naturally, no government in its right mind should allow such a lethal trap on the public roads, nor tolerate a company doing business in such a cavalier fashion, recklessly risking its customers' lives and those of third parties.

OTOH, I understand why you would be willing to accept this, as in "something that mostly works is better than nothing", but still it is not what we were sold, so there is no good reason to settle for being short-changed. Which is not to say that it would not be useful to have a more versatile Monitored Self-Driving system in the interim while still awaiting the eventual arrival of true FSD.

In my opinion, though, Tesla will have to upgrade all cameras and radar on the car to about 4x the current resolution, restore the rear corner radars originally designed in, possibly also add a forward LiDAR, and fit the HW4 mainboard before FSD will work in a fundamentally safe and reliable fashion that can gain approval for driverless general road use in Europe.

Which will, judging by their arthritic progress to date, be in 2025 at the earliest.
 
According to these statistics, crash risk increases about 3x at the .08 BAC level. If you forbid people to drive for that, would you want to forbid people with 0 BAC to drive without EAP or FSD, when it is 3x or more safer with it? Or is what's good for the goose not good for the gander?

https://www.washingtonpost.com/news...yrocket/?noredirect=on&utm_term=.1f0aaacc972c

No, of course nobody wants to prevent people driving cars with automated systems to decrease collision risks. Or at least those who do would, I imagine, be highly unlikely to buy a Tesla and get involved in this discussion in the first place.

Re. the safety of AP, it seems to me that Tesla makes a very tendentious use of its own unverified statistics to ride the line towards misrepresentation, painting as rosy a picture as possible, in the interests of promoting sales, rather than following an honest and open process designed to rigorously increase safety.

For instance, from the last QSR [ Q3 2018 Vehicle Safety Report ], Tesla does not in fact make any claim, but simply presents data from which one could infer that AP use produces a ~60% reduction in crashes compared to driving a Tesla without AP:

Here’s a look at the data we’re able to report for Q3:
  • Over the past quarter, we’ve registered one accident or crash-like event for every 3.34 million miles driven in which drivers had Autopilot engaged.
  • For those driving without Autopilot, we registered one accident or crash-like event for every 1.92 million miles driven.
But I suggest that would be the wrong inference to draw, because:
1. AP is engaged mostly on the motorway, where fewer accidents per mile driven traditionally occur anyhow.
2. Tesla's non-AP figure includes all off-motorway miles in those vehicles.
3. This pertinent info is deliberately omitted to invite the credulous to make an apples-to-oranges comparison which best fits Tesla's sales pitch.

Furthermore, Tesla makes no mention of the type or severity of the accidents involved. My suspicion is that if the data were broken down with sufficient granularity, it would likely confirm that AP performs very well at preventing the typical harmless fender-bender in low-speed traffic-jam scenarios, but actually increases the likelihood of a fatal pile-drive into a stopped obstacle in the car's path after a lead-vehicle step-aside at highway speed.

Thus, AP may in fact produce 60% fewer accidents in total, while still contributing to more fatalities in one specific scenario [cut-out to a stopped vehicle in path] than would be suffered by a group doing the same highway mileage without it.

But we will never know until Tesla is forced to release the full raw data for independent expert analysis. In the meantime I for one shall continue to trust Mr 'tegridy Musk about as far as I can toss him by the [human] horn.
 
According to Tesla and the MIT study, there have been more than a billion AP miles driven so far. The National Safety Council study shows 12.5 fatalities for every billion miles driven. Assuming every AP-caused fatality is reported in the media, and I have to think this is the case in this Tesla-centric media environment, it is indeed much better than when the car is driven by a human. It's really not that big a deal either. NHTSA says 94% of auto accidents are caused by human error. It's not that hard for AP to do better than not-so-good human drivers.

Statistics can lie, of course. The devil is always in the details. However, with everything we know, I don't think anyone can say it's a stretch to think that AP is already safer than human drivers. More than that, there is no reason to think the gap will not continue to widen. So the question remains: everyone seems to care so much about human life when it comes to self-driving cars, so when are we going to ban human drivers if and when AP or FSD could save lives? We are all making emotional, not practical, decisions. That's just human nature.
 
Frankly, I do not think you appreciate the difficulty of car-responsible driving there.

There is so much a car responsible for the drive (Level 3+) has to be able to handle. Much more than just lane-keeping, the reliability of which is suspect on Teslas at this time anyway.

Everything is simple with the driver as the crutch. Make the car responsible for the drive and everything becomes hard, especially without a LiDAR that would at least be pretty much free of false negatives.

oh heyyyy AR is that u? I thought you were banned! Congrats & welcome back!!
 
According to Tesla and the MIT study, there have been more than a billion AP miles driven so far. The National Safety Council study shows 12.5 fatalities for every billion miles driven. Assuming every AP-caused fatality is reported in the media, and I have to think this is the case in this Tesla-centric media environment, it is indeed much better than when the car is driven by a human. It's really not that big a deal either. NHTSA says 94% of auto accidents are caused by human error. It's not that hard for AP to do better than not-so-good human drivers.

Statistics can lie, of course. The devil is always in the details. However, with everything we know, I don't think anyone can say it's a stretch to think that AP is already safer than human drivers. More than that, there is no reason to think the gap will not continue to widen. So the question remains: everyone seems to care so much about human life when it comes to self-driving cars, so when are we going to ban human drivers if and when AP or FSD could save lives? We are all making emotional, not practical, decisions. That's just human nature.

Not to be a pedant, but I think it’s important to keep in mind that the appropriate comparison today is “human with AP” versus “human without AP”. There’s no “AP without human” in the current mix.

I’m a human and when I drive I prefer to do it “with AP” rather than without.
 
According to Tesla and the MIT study, there have been more than a billion AP miles driven so far. The National Safety Council study shows 12.5 fatalities for every billion miles driven. Assuming every AP-caused fatality is reported in the media, and I have to think this is the case in this Tesla-centric media environment, it is indeed much better than when the car is driven by a human. It's really not that big a deal either. NHTSA says 94% of auto accidents are caused by human error. It's not that hard for AP to do better than not-so-good human drivers.

Statistics can lie, of course. The devil is always in the details. However, with everything we know, I don't think anyone can say it's a stretch to think that AP is already safer than human drivers. More than that, there is no reason to think the gap will not continue to widen. So the question remains: everyone seems to care so much about human life when it comes to self-driving cars, so when are we going to ban human drivers if and when AP or FSD could save lives? We are all making emotional, not practical, decisions. That's just human nature.

More accurately stated: statistics can be manipulated by the deceptive to convey a false impression to the credulous or ignorant, and overstressed CEOs with everything to lose can and do order this to happen.

Also, sadly, it is entirely unsafe to assume "every AP-caused fatality is reported in the media", or indeed by the company. In fact, Tesla for around 2 years suppressed information on the first AP fatality in China [Jan. 2016] by pretending they could not [remotely] determine if the car had been in Autopilot, although it is clear from its perfectly-centred trajectory to doom in the dashcam footage immediately recovered from the wreckage, as you can see for yourself linked in my comment above. It was only long after the hullabaloo over the Florida decapitation [May 2016] had died down [pun unintended] that this China incident became more widely known and Tesla finally admitted the accident had happened under Autopilot [ Tesla confirms 'Autopilot' engaged in fatal crash in China ]. That retroactively rendered Musk's rearguard claim at the time of Brown's demise [ A Tragic Loss ], that his was "the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles", patently false and, IMHO, purposefully deceptive, as it was designed to lead the public and investigating authorities to rationalise: "Oh well, this system is at least as safe as the average human driver, which, despite this one-off mess-up, is actually an amazing achievement!"

In reality, however, Brown was the second AP fatality in 130 million AP miles, making the score 1:65M, compared to 1:94M for humans, i.e. quite a bit worse than the average driver. That is anyhow a rather unambitious marker to measure against, and it again conveniently omits the consideration that off-highway accidents are more frequent, which must further skew the result against AP.
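For what it's worth, the arithmetic in the paragraph above can be checked in a couple of lines; both inputs are the figures quoted from Musk's statement, not independent data:

```python
# Two known AP fatalities over the ~130 million AP miles Musk cited
# gives one fatality per 65 million AP miles, versus the quoted US
# average of one fatality per 94 million miles.
ap_miles = 130_000_000
ap_fatalities = 2

miles_per_fatality = ap_miles / ap_fatalities
print(miles_per_fatality / 1_000_000)   # 65.0, i.e. 1:65M vs 1:94M
```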

But this was not then "known" to the wider public, i.e. proven against or admitted by Tesla at that time; nor was the Chinese driver Gao Yaning strictly speaking a Tesla "customer", but rather the son of one, who had borrowed the car from his pop for a spin. So maybe, with a close enough parsing of the PR, a corporate shyster will get his employer Musk off the legal hook for this sleight-of-hand, should it ever come up, which it has not; but that does not make it right or decent conduct.

Furthermore, the same deceptive company habits seem to continue to this very day in other cases, such as the one in Switzerland from May 10, 2018, wherein a German businessman driving home alone in the daytime slammed his Model S into a motorway construction-zone barricade at high speed, flipping the car and comprehensively cremating himself in situ: Swiss prosecutors investigate fatal Tesla crash | Reuters

Precious little mention of this in the ensuing 7 months: no blog post, nor even a full name for this forgotten statistic. Are we thus to presume he was *not* "a friend to Tesla", did *not* have "a loving family"? Judging from the photo, it is fortuitous for Tesla's PR that no onboard data will have survived the inferno, so the mealy-mouthed excuses or shameful silence can probably never be proven against them, short of an FBI raid on the premises.

If, as it appears to me most likely, this was in fact another AP incident, that would mean 4 fatalities Musk has chalked up thus far, further damaging his already untrustworthy statistics.

In short, it leaves a distinct taste of salt in the mouth!
 
More accurately stated: statistics can be manipulated by the deceptive to convey a false impression to the credulous or ignorant, and overstressed CEOs with everything to lose can and do order this to happen.

Also, sadly, it is entirely unsafe to assume "every AP-caused fatality is reported in the media", or indeed by the company. In fact, Tesla for around 2 years suppressed information on the first AP fatality in China [Jan. 2016] ...

I stopped reading the rest of your post when it appeared you have your agenda and would rather believe unproven claims, usually spread by FUDsters, than statistics coming out of government agencies. Do you really think the media and authorities would let a company CEO suppress or manipulate information this important? Not to mention that even if you add up all that unproven FUD, it still does not add up to what the fatality rate of human-driven cars would be.
 
I stopped reading the rest of your post when it appeared you have your agenda and would rather believe unproven claims, usually spread by FUDsters, than any facts that have at least been checked out by third parties.

Suit yourself entirely, but this is my own research, nothing to do with FUDsters, based on reasoned argument and the sources quoted, which you have done nothing to refute [ "agenda" =/= an argument ]. In this case I am the original third party checking it out, and I find Tesla's behaviour pretty disreputable.

Do you really think the media and authorities would let a company CEO suppress or manipulate information this important?

In the case of the Chinese fatality, the tactic has already worked pretty well, and the Swiss case is also well on its way to being quietly buried. Whether there are ultimately any consequences depends on someone pulling Tesla up on this behaviour, so it remains to be seen if they have totally gotten away with it.
 
I did not put the qualifier "reputable" in front of the words "third party", but what I meant should be easily understood. If you really believe all you said, why don't you list ALL the AP-caused fatalities you know of and add the best supporting evidence you can find? We can then see whether your narrative that Elon lied about AP being safer than human drivers has any merit. Before that, it's just another FUDster's opinion in my mind.

I did put up National Safety Council data to support my conclusion. Unless you can dispute that, or can find more than 10 or 20 verifiable fatalities that involve AP, you really don't have a case despite the guerrilla attack.
 
I don't believe Tesla will accomplish L3 driving this year with HW2/HW2.5 vehicles with the HW3 computer.

As in at no time this year will the liability for an accident transfer from the human driver to the vehicle to allow for reading a book. A much more interesting question is will anyone accomplish L3 or above in a car you can buy? Where the L3 system can travel at highway speeds (at least 70-80mph) in the USA?

My answer to that is we won't see that either.

I do think we'll hash out the framework a bit more. Insurance companies are already getting ready for L3 vehicles where they're starting to insert language that says they won't cover the vehicle while it's doing the driving.

2020 is when I predict we'll be able to read a book. Will it be in a Tesla? Maybe. That really depends on whether Elon/Tesla makes enough progress this year to realize that the sensor suite in the HW2/HW2.5 vehicle isn't good enough for L3 driving. Where they then correct it through HW4.
 
I do think we'll hash out the framework a bit more. Insurance companies are already getting ready for L3 vehicles where they're starting to insert language that says they won't cover the vehicle while it's doing the driving.

Shouldn't it be the other way around? Why wouldn't they want something that makes cars safer? Or are they doing everything they can to prevent implementation of a system that could run them out of business? Warren Buffett, who owns insurance companies, has listed the self-driving car in his annual report as a disruptive force. Premium income will be greatly reduced when cars become significantly safer. Elon said a while ago that Tesla is interested in providing its own insurance sometime in the future. He thought the premium structure is not, or will not be, fair to Autopilot-equipped cars. This guy thinks of everything before others do.
 
I did not put the qualifier "reputable" in front of the words "third party", but what I meant should be easily understood. If you really believe all you said, why don't you list ALL the AP-caused fatalities you know of and add the best supporting evidence you can find? We can then see whether your narrative that Elon lied about AP being safer than human drivers has any merit. Before that, it's just another FUDster's opinion in my mind.

I did put up National Safety Council data to support my conclusion. Unless you can dispute that, or can find more than 10 or 20 verifiable fatalities that involve AP, you really don't have a case despite the guerrilla attack.

You unfortunately fell into the trap slyly laid out for you by Tesla, as outlined above, of drawing the apples-to-oranges comparison and misleading inferences from that:

"According to Tesla and the MIT study, there have been more than a billion AP miles driven so far. The National Safety Council study shows 12.5 fatalities for every billion miles driven."

The suppressed problem with these numbers is that Tesla & MIT's figure for AP engagement comprises mostly highway miles, probably >=95%, as that is where the feature is approved for use and actually useful. OTOH, the NSC number relates to miles driven on all roads, where fatalities occur at 6x the rate for highways in 2016 USA, according to this source: Fatality Facts

[Attached screenshot: Fatality Facts table, US deaths by road type]

where the numbers from the far-right column are relevant: 5,006 : (22,807 + 7,279) = 1:6

Hence, if Tesla's 1 billion miles on AP has produced 4 highway fatalities, while 12.5 f/Bm are produced on all roads, then dividing the latter by 6 to make a like-for-like, highways-only comparison gives 2.1 f/Bm, and we see that use of AP in fact roughly doubles the user's risk of death. Q.E.D.

Even if you accept only 3 deaths due to AP so far, it is still considerably less safe than the average driver, as opposed to x-times better, as Tesla has misleadingly invited you to believe without actually having stated as much, so as to slither off the hook if ever pulled up on the deceit.
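For what it's worth, the like-for-like calculation above can be sketched in a few lines of Python; note that the 4-fatality count and the >=95% highway share are this post's assumptions, not established facts:

```python
# Highway-only baseline: divide the all-roads NSC rate by the ~6:1
# non-interstate vs. interstate fatality split from the IIHS table.
ALL_ROADS_RATE = 12.5     # fatalities per billion miles, all road types
HIGHWAY_DIVISOR = 6       # from 5,006 : (22,807 + 7,279) ~ 1:6

highway_rate = ALL_ROADS_RATE / HIGHWAY_DIVISOR   # ~2.1 f/Bm

# Assumed AP record: 4 fatalities over ~1 billion mostly-highway miles.
ap_rate = 4 / 1.0                                 # 4.0 f/Bm

print(round(highway_rate, 1), round(ap_rate / highway_rate, 1))
# 2.1 1.9  -> roughly double the per-mile risk under these assumptions
```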
 
Shouldn't it be the other way around? Why wouldn't they want something that makes cars safer? Or are they doing everything they can to prevent implementation of a system that could run them out of business? Warren Buffett, who owns insurance companies, has listed the self-driving car in his annual report as a disruptive force. Premium income will be greatly reduced when cars become significantly safer. Elon said a while ago that Tesla is interested in providing its own insurance sometime in the future. He thought the premium structure is not, or will not be, fair to Autopilot-equipped cars. This guy thinks of everything before others do.

I don't see them doing anything that blocks self-driving cars. All they're doing is drawing a clear line between what they'll cover and what they won't.

Anything L2 and below, they'll cover. It's only while the car itself is driving that they won't cover it, because they have zero control over it, and they feel that's up to the manufacturer. Most manufacturers are on board with accepting liability while the car is doing the driving.

I like Tesla's approach to insurance (assuming it's more than just an idea on paper). The reason is that L3 is going to cause a lot of confusion over who is responsible. Let's say I fall asleep while the car is driving in L3, so I'm not there to take over within the X number of seconds I'm given. An accident occurs as a result of my failure.

My insurance company might try to weasel out of it, and the car manufacturer will blame me.

We're entering a time where there is a lot of blending between human driving and automated driving, where it won't always be clear whose fault it was.

It's much easier to get insurance from Tesla and let them sort it out. As long as I'm covered, I don't care. The other reason I like the idea is that the pool of people covered will all drive the same kind of car with the same technology. If something is going wrong, Tesla can push a SW update to mitigate whatever is going on. That's a pretty powerful tool.
 
I don't believe Tesla will accomplish L3 driving this year with HW2/HW2.5 vehicles with the HW3 computer.

As in at no time this year will the liability for an accident transfer from the human driver to the vehicle to allow for reading a book. A much more interesting question is will anyone accomplish L3 or above in a car you can buy? Where the L3 system can travel at highway speeds (at least 70-80mph) in the USA?

My answer to that is we won't see that either.

I do think we'll hash out the framework a bit more. Insurance companies are already getting ready for L3 vehicles where they're starting to insert language that says they won't cover the vehicle while it's doing the driving.

2020 is when I predict we'll be able to read a book. Will it be in a Tesla? Maybe. That really depends on whether Elon/Tesla makes enough progress this year to realize that the sensor suite in the HW2/HW2.5 vehicle isn't good enough for L3 driving. Where they then correct it through HW4.

First of all, the Model S/X do not have a driver-monitoring camera, which you NEED for Level 3 to prevent the very situation you mentioned (falling asleep). Secondly, I have not seen any current proof or evidence that Tesla will have a Level 3 highway-speed system in 2020. They would have to improve their vision system ridiculously in 2019, and focus, which is something they currently don't have. In fact, I predicted Level 3 highway in 2020 or 2021 two years ago. But I can see Elon trying to claim first; he's already claimed first for Level 5 FSD in 2019.

As a startup, they should be able to, but the lack of focus could very well do them in.
 
I tried to do a quick look for my statement of L3 highway in 2020, or maybe it was 2021, for Tesla and could not find it.
But I found this post from early 2017, which has been pretty much a bullseye. The only minor changes have been BMW leaning really heavily on Mobileye and working hand in hand, which I didn't expect, and Volvo dropping Mobileye, costing them 4+ years.


“Toyota’s main objective is safety, so it will not be developing a driverless car,” Seigo Kuzumaki, Toyota’s deputy chief safety technology officer, said during a conference.

Toyota—of All Companies—Defends Drivers, Says It Won’t Build a Fully Autonomous Car

Toyota is the definition of clueless; they are not skipping L2 or L3, they simply don't have L2 or L3.

Now they are saying 2020 for L4, lol give me a break.



Neither Ford nor Toyota will have any kind of L4 in 2020 or 2021. You have to understand we are roughly 2 years from 2020.
2020 is not some long time away. Remember that cars that go into production take about 2 years of testing. An L4 car would need redundancy not only in sensors but in the computer system, steering and braking. That entire system must be ready for pre-production testing in 2018 to even make a 2020 date. This includes the platform the system will run on, the exact placement of sensors, and the exact number of sensors. Toyota is still playing Russian roulette with their sensors. They are not even close. They are probably 10 years away from even L3.

The same thing applies to software. Look at the disengagement reports from the CA DMV. You have to have some kind of progress to even consider 2020/2021. GM, for example, has Cruise, which as of the last month of 2016 was at about 1 disengagement in 300 miles and rapidly improving. They will probably be at 1 in 1,000-5,000 by the end of the year, give or take. Google of course is at 1 in 5,000, and will probably eclipse 10k by year's end. My point is, anyone who is trying to deploy something in 2020/2021 must have some progress right now.

There's no magic button that Toyota will push to be L4 in 2 years.

You need the software and hardware mostly done now. The only people with the hardware are GM Cruise, Google Waymo, Volvo Drive Me and Audi.




Tesla doesn't have the redundancy and sensors for any kind of L4.
Tesla FSD development began in late 2016; anyone who believes Elon will release an L5 that's better than humans after roughly only 1 year of full software development (2018), given their failure to match AP1 parity in 9 months, is delusional.

Nio will use Mobileye, but there is still a lot of software/testing that must be written, plus they don't even have a car yet.
Lucid will use Mobileye but doesn't even have a car yet.
BMW will use Mobileye, but will fall victim to the same thing; there's just not enough time to hit 2020/2021, you need your platform ready now.

As the car industry calls it, "pencils down". You need to be pencils down by the end of 2017. You can't still be driving research cars with 20-30 sensors sticking out and aiming to release something in 2 years.

Ford..no

Mercedes is clueless, check their CA DMV report

Baidu already sorta gave in

nuTonomy...nope

Uber...lol they are at 1 disengagement per mile



True.

Delphi, for example, who claim they will have a system ready for OEMs in 2019, are still struggling according to their CA DMV report.
The same goes for Bosch, who are claiming L4 in 2020.

Everyone is saying 2020 because others are too; you don't want to say 2025 if others say 2020, because then say bye-bye to your investors, stock, etc.

The whole set of 2019/2020/2021 dates are just PR announcements. All these companies know that only one or two players will actually have any kind of L4 car in 2020. So when everyone fails, they can just blend in and say, "see, no one else has it either".

Google, Audi, Volvo and GM are the ones who will have any kind of L4 system in 2020.

When you look at these four systems, you see that everything they have in both software and hardware is mature. They have redundancy in sensors, steering, and brakes, and they have it in a production-like car (besides GM Cruise, although their car looks very production-like). They also have mature/maturing software.

After an extensive search, I think the closest quote I can find about Tesla's system was:

The full potential for this system is L3 on highways and L2 on urban roads.
 
You unfortunately fell into the trap slyly laid out for you by Tesla, as outlined above, of drawing the apples-to-oranges comparison and misleading inferences from that:

"According to Tesla and the MIT study, there have been more than a billion AP miles driven so far. The National Safety Council study shows 12.5 fatalities for every billion miles driven."

The suppressed problem with these numbers is that Tesla & MIT's figure for AP engagement comprises mostly highway miles, probably >=95%, as that is where the feature is approved for use and actually useful. OTOH, the NSC number relates to miles driven on all roads, where fatalities occur at 6x the rate for highways in 2016 USA, according to this source: Fatality Facts

[Attached screenshot: Fatality Facts table, US deaths by road type]
where the numbers from the far-right column are relevant: 5,006 : (22,807 + 7,279) = 1:6

Hence, if Tesla's 1 billion miles on AP has produced 4 highway fatalities, while 12.5 f/Bm are produced on all roads, then dividing the latter by 6 to make a like-for-like, highways-only comparison gives 2.1 f/Bm, and we see that use of AP in fact roughly doubles the user's risk of death. Q.E.D.

Even if you accept only 3 deaths due to AP so far, it is still considerably less safe than the average driver, as opposed to x-times better, as Tesla has misleadingly invited you to believe without actually having stated as much, so as to slither off the hook if ever pulled up on the deceit.

Very convenient, but it's a big fail. Exactly as we've agreed, one can massage any statistics to fit one's agenda. First, not all three accidents happened on interstates and freeways. We know at least Joshua Brown's accident did not happen on a freeway, since it was caused by cross traffic. There is no definite description of the kind of road the Chinese accident occurred on, but either way the accident rate in China is not the same as in the US, and is likely higher. There is just no data to back up your claim that Elon lied about AP being safer than human drivers. Regardless, those accidents happened with AP1 and/or V8. AP at this moment is without doubt safer than human drivers. We just need more data to come out to prove that.
 
Very convenient, but it's a big fail. Exactly as we've agreed, one can massage any statistics to fit one's agenda. First, not all three accidents happened on interstates and freeways. We know at least Joshua Brown's accident did not happen on a freeway, since it was caused by cross traffic. There is no definite description of the kind of road the Chinese accident occurred on, but either way the accident rate in China is not the same as in the US, and is likely higher. There is just no data to back up your guerrilla attack (you made the claim even before looking at any data) that Elon lied about AP being safer than human drivers. Regardless, those accidents happened with AP1 and/or V8. AP at this moment is without doubt safer than human drivers. We just need more data to come out to prove that.

Granted, JB topped himself on a road with crossing traffic, so let's drop him for the sake of this analysis.

From the full Chinese video [ Gao Yaning, † 20 January 2016, Handan, Hebei, China ] the collision happened on a divided 4-lane highway with restricted traffic [ like Autobahn / Interstate ] into a very slow-moving or stationary object equivalent to a firetruck parked at an accident scene or the tail-end of a traffic jam, after a lead vehicle cut out >200m earlier, which is precisely the situation warned about re. AP weaknesses in the Tesla manual, thus this case is a completely valid comparator.

The Swiss case happened on the Autobahn and, until it is demonstrated otherwise, I am inferring from all the circumstances that it was in AP mode.

Walter Huang was on an interstate.

It does not matter if the accident statistics in general vary somewhat between Switzerland, China and USA, because in each case we are dealing with essentially the same vehicle and type of road. In fact Switzerland is statistically much safer than USA, so it probably balances out China in any case.

Tesla does not break down its QSR numbers by AP hardware or software, but both fatalities incurred in 2016 must have been on AP1, probably running v7 software.

However, there is [for me compelling] evidence that AP2.5 hardware running v8 2018.28.5 still failed the FSD-test on the Autobahn in August 2018, when the Chinese impact scenario repeated itself with my own car. That means Tesla's AP showed no improvement in that area in the 2.5 years following the first fatality, indicating they felt little pressure to fix the most glaring fork-up in the whole system.

To date the Tesla manual for v9 software {Oct. 2018, v9 2018.44} still contains the same warning of a potentially fatal collision in that scenario, and I am aware of no reports that anyone has yet tested this to show that AP on the latest HW & SW now passes the FSD-test, which one should imagine would have been worth a headline.

That all leaves us with 3 out of 4 AP deaths with no crossing or oncoming traffic involved, i.e. sufficient data to back up my reasoned argument [ not an attack, guerrilla or otherwise ].

Putting it into numbers: 3 fatalities per billion miles (f/Bm) versus 2.1 f/Bm is a ratio of 1:0.7, i.e. use of AP increases risk of death by ((1-0.7)/0.7)*100 ≈ 43%.

Or, if you accept only 2 valid AP fatalities, the ratio is 2:2.1, i.e. use of AP changes risk of death by ~0% (strictly, about -5%).

Or, if you accept only 1 valid AP fatality, the ratio is 1:2.1, i.e. use of AP drops risk of death by ~52%, roughly halving it.
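For anyone who wants to check the arithmetic behind the three scenarios above, here is a minimal sketch in Python. It assumes, as the post does, that each "valid AP fatality" count translates directly into a rate in fatalities per billion AP miles (i.e. roughly one billion AP miles driven in total), compared against the quoted 2.1 f/Bm human baseline:

```python
# Sketch of the relative-risk arithmetic above. Assumption (as in the post):
# each "valid AP fatality" count maps directly to a rate in fatalities per
# billion miles (f/Bm), i.e. roughly one billion total AP miles driven.
BASELINE_F_PER_BM = 2.1  # quoted average human-driver fatality rate

def risk_change_pct(ap_rate: float, baseline: float = BASELINE_F_PER_BM) -> float:
    """Percentage change in fatality risk of AP relative to the baseline."""
    return (ap_rate - baseline) / baseline * 100.0

for valid_fatalities in (3, 2, 1):
    change = risk_change_pct(valid_fatalities)
    print(f"{valid_fatalities} valid AP fatalities: {change:+.0f}% risk change")
```

With 3 fatalities this reproduces the +43% figure; with 2 it comes out at about -5% (the ~0% above); with 1 it gives -52%.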

From the available data we can agree to disagree on which of the AP risk numbers [+200%, +43%, ~0%, -52%] we finally prefer. However, IMHO your claim that "AP at this moment without doubt is safer than human drivers" is far from demonstrated and is rather the result of faith-based wishful thinking [a.k.a. cognitive dissonance]. It is a condition with which I can completely sympathise, as no-one wants to realise he has been gulled into buying an exceedingly expensive product which fails to work as advertised, and which moreover places his own life (and/or that of loved ones) in jeopardy. However, succumbing to the temptation leads onto the slippery slope of a false sense of security, which only increases one's risk factor.

I personally prefer the nicely round +200% [i.e. triple the average risk] and consider myself about x4 better than the average driver, hence every time I engage AP I internally exhort myself: "remember, you are now about x12 more likely to kill yourself testing this Beta gadget than without it!" [3x the average risk versus my quarter-of-average risk]. That is my cunning strategery for erring on the side of caution, to the benefit of sparing my own or others' lives, and to hell with Tesla's PR.

Of course I am always willing to be persuaded otherwise by better data, necessarily including video of edge case hazard-scenario testing under realistic controlled conditions (which Tesla nota bene has signally failed to produce).

In the meantime I wish you safe driving and thank you for the discussion, which in good faith has helped clarify matters [for me at least].
 
There could be some advantage I'm missing, like maybe it would increase adoption. Only about 10% of Tesla miles are on AP. Maybe people would trust it more if Tesla made some new claims, though personally it seems more likely to me that if the system is actually trustworthy, people will use it more. It certainly hasn't been trustworthy for much of the last 3 years, and that probably contributes a lot to the low usage.
Where did this 10% usage figure come from? It sounds implausibly low. My Autopilot usage is probably about 90% of my mileage.
 
Shouldn't it be the other way around?

Yes, but insurance companies are run by flint-faced actuaries whose job it is to soberly digest the numbers and minimise losses to stay in business, regardless of the PR needs of a certain ruthless egomaniac.

It could be that their analysis of the AP risk roughly agrees with mine, i.e. that in the current incarnation it is somewhat less safe than the average driver, added to the fact that Teslas are inordinately expensive to repair, which makes them bump up the premium or refuse coverage altogether.

Or they are simply another cog in the vast global conspiracy to do a good martyr down for no apparent reason [ confirmation bias, meet persecution complex ], contrary to their own economic interest.

It's a real head-scratcher!