When can we read a book?

But in the Tesla sense, FSD does not seem to equate to Level 3, 4 or 5, even though AP2 was touted as Level 5 capable. It seems to mean driver-responsible automated driving... i.e. a driver's aid that can potentially handle the full range of driving scenarios, but still using the driver as the crutch for any unexpected issues or events. This seems to be what the coast-to-coast demo is too, not car-responsible driving.

How that translates to the Tesla Network, details of which are to be released "next year", is harder to say.

For the vaunted "Tesla Network", ostensibly a driverless meshed taxi service wherein the car autonomously roves around for passengers, ever to be made to work, Tesla's ultimate understanding of FSD must equate to SAE L5, even if they obviously intend to use the driver as a crutch for as long as possible to avoid liability during its development.

This is the basis on which they sold me the car and I certainly intend to hold them to that contractual clause, even if that means installing [at their expense] a whole new sensor suite or any other necessary hardware upgrade to achieve the goal.

Currently, however, I worry more that they are neglecting basic safety features [such as not crashing into stopped fire trucks on the motorway] to focus on introducing more features [e.g. NoA] which only serve to further lull users into a false sense of security.
 
This is of course perfectly fair and, to be clear, I was sold the same thing.

I think this is their current approach: add more and more features to a driver's aid capable of handling more and more scenarios, but always with the driver responsible, so there is less focus on robustness and more on versatility of the system.

I mean it is obvious: NoA came out before basic AP2 autopiloting was really robust (and it still is not).
 
My feeling is that Musk is, so to speak, "betting the farm" on the Hail Mary pass of HW3 and its expanded NNs suddenly combining to cure all that currently ails AP, while still running the risk of building masses of vehicles with current sensors which will need retrofitting if the FSD effort ultimately proves inadequate due to their deficiency.

I suppose it runs in his favour that probably less than 5% [my estimate] of all owners have bought or will buy the [now wisely hidden] FSD option, meaning fewer cars needing a sensor upgrade at his expense if that case eventuates.

The problem remains, however, for those who have bought EAP but not FSD, who will not be upgraded gratis to HW3, and will, as the software develops further along that strand, quite possibly be left behind with a car which will never pass my FSD test as proposed above.

Then I think only a *sugar*-kicking class-action lawsuit will bring Mr Musk to remedy the lacuna in his business plan and force him to implement an actually rigorous and demonstrable "safety first" approach to AP, up to and including free sensor upgrades for all with EAP.

Such a suit is IMHO bound to happen as soon as a few 3rd parties who have not consented to be part of any beta-testing are killed/maimed by long-standing AP flaws combined with driver negligence. The lawyers only have to pursue those with the deepest pockets for their contributory negligence in never having fixed the fundamental *sugar* they knew for years to be defective by design, hence they will do so energetically and with a great prospect of success in Tesla's case.

E.g. imagine, in the case of Walter Huang, that a municipal worker had been mending that crash attenuator at the moment he hit it, and had also been killed through absolutely no fault of his own. His family would rightly sue the living bejeezis out of Musk/Tesla and, I sincerely hope, win massive punitive damages.
 
Do not use AP if you do not want to pay attention to the road. In the Uber case, for example, authorities determined that the accident would have been prevented had the safety driver not been watching The Voice.
 

Of course, but we must remember Tesla called AP2 "Level 5 capable" and talked of the Tesla Network as an autonomous Uber-like service and of summoning your car from the other coast. These would require driverless driving.

It seems quite okay to discuss when and how Tesla might implement that and how their marketing and progress align on this. Even just a Level 3 highway feature would allow not looking at the road and reading a book...
 

Keep in mind that Tesla's commitment to FSD also includes the clause about government regulations. I think if regulations were to require a certain aspect that is not in AP, Tesla would be off the hook.
 
Do not use AP if you do not want to pay attention to the road. In the Uber case, for example, authorities determined that the accident would have been prevented had the safety driver not been watching The Voice.

I agree wholeheartedly and use AP a great deal on the motorway while paying rapt attention; nevertheless, your per se correct statements are no argument against anything we have been discussing above regarding the weaknesses of the AP system and the way it seems to be developing.

In fact your interjection seems to me to amount to a classic non sequitur aimed at closing down a perfectly legitimate discussion which happens to make you uncomfortable, which, if true, would sadden me immensely! Nevertheless I resolve to carry on regardless in the New Year's spirit, etc.!
 
Yes, it is certainly telling of the past two+ years that we are now discussing how Tesla can get off the hook. :)

I am only interested in discussing how to keep them well and truly on the hook, including via legal action if necessary, until the promised L5 FSD and a fundamentally safe EAP are delivered within the next 3 and 1 years respectively.


"In my mind Tesla has always been off the hook right from beginning due to the government regulations clause."

No, this cannot save them, as it is downright impossible that all jurisdictions in which they have sold the vehicles will refuse to introduce regulations permitting the use of FSD.
 
@Vitold @OPRCE

It is interesting to consider how that off-the-hook clause should be interpreted.

I mean, definitely, if Tesla delivers a "Waymo-like" (or better) self-driving car with AP2 (plus the HW3 computer) and regulators simply insist the driver must remain attentive at all times, that would be off the hook. They would have delivered reliable FSD, the limitation coming only from regulations.

But what if Tesla only delivers something so unreliable on AP2 that regulators won't allow it because it is so bad? Would that be a legitimate case of being off the hook?
 
In the second case, definitely not, which is precisely why law-makers in the US/EU are discussing introducing, via legislation, a battery of standard safety tests any proposed driverless system would have to pass [an autonomous driving test, if you will] before being licensed for use on public roads. The ironic FSD test I outlined above would likely be one such minimal requirement.
 
@Vitold

There is also a second metric. In 2016 Tesla was talking of ten times safer than a human for AP2. Later they reverted to twice as safe as a human.

Would Tesla still be off the hook if they deliver "FSD" which does not meet these metrics and regulators do not allow it precisely because it missed them?

What if AP2 FSD is eventually only 0.2 times as safe as a human? Is Tesla off the hook because regulators do not allow it?

What if Tesla never delivers full self-driving for AP2? Is Tesla off the hook due to regulators "not allowing" a non-existent product?
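
Purely to make these multipliers concrete, here is a rough back-of-the-envelope sketch in Python (the human baseline of about 1.2 fatalities per 100 million miles is an assumed figure in the ballpark of published US averages, not a Tesla or regulator number):

# Illustrative only: convert "x times as safe as a human" claims into
# expected fatalities per 100 million miles, given an assumed human baseline.

HUMAN_FATALITIES_PER_100M_MILES = 1.2   # assumed baseline, roughly US-average territory

claims = {
    "10x as safe (2016 talk)": 10.0,
    "2x as safe (later claim)": 2.0,
    "0.2x as safe (hypothetical)": 0.2,
}

for label, multiplier in claims.items():
    expected = HUMAN_FATALITIES_PER_100M_MILES / multiplier
    print(f"{label}: ~{expected:.2f} fatalities per 100 million miles")

# 10x -> ~0.12, 2x -> ~0.60, 0.2x -> ~6.00 per 100 million miles,
# i.e. the 0.2x case would be five times worse than the assumed human baseline.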
 
I think Tesla's marketing babble ["x times better than the average human driver" is an unquantifiable metric, hence meaningless] will become increasingly irrelevant, as upcoming laws will force them either to shape up on measurable safety/reliability or be shut out of their lucrative target market. The latter is not really an option for any self-respecting business.
 
Perhaps.

But I am still pondering their AP2 commitment. If Tesla truly delivers full self-driving that is ten times as safe as a human on AP2 sensors (plus the HW3 computer) and regulators insist a driver remain in the seat, I would see that as off the hook. I would also personally accept that as FSD delivered in my car.

What if they only deliver twice as safe as a human?

Half as safe as a human?

When will Tesla start being on the hook for their marketing?
 
I think you will be able to read a book in limited scenarios like divided highways in good weather - very soon - as in months. FSD will take years to perfect - and I do not think anyone has figured out the path to get there.
It depends on how much time the car can give to hand responsibility back to the person behind the wheel. If we think the car needs to be able to warn the person at least five seconds in advance (I've read that to be safe, the time should be at least ten seconds), we are more than months away.
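
To put rough numbers on that handover window, here is a minimal sketch; the motorway speeds and the five/ten-second warning times are just the assumptions from the discussion above, not figures from any regulation:

# Illustrative only: how far ahead a Level 3 system must anticipate trouble
# to give the driver a 5- or 10-second handover warning at motorway speed.

speeds_kmh = [100, 120, 130]   # assumed motorway speeds
warnings_s = [5, 10]           # handover warning times discussed above

for v_kmh in speeds_kmh:
    v_ms = v_kmh / 3.6         # km/h to m/s
    for t in warnings_s:
        print(f"{v_kmh} km/h, {t} s warning: ~{v_ms * t:.0f} m of look-ahead needed")

# E.g. at 120 km/h a 10-second warning means reliably spotting the problem
# roughly 330 m ahead, long before it becomes an emergency.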
 

Tesla's current approach to "proving" the quality of its AP depends entirely on a retrospective of [rather tendentiously selected] accident statistics, e.g. here is how many people we killed/maimed with it in the last quarter, compared to how many self-destructed without the benefit of AP.

I.e. all the testing is carried out on, and at the expense of, the public, which IMHO is an unsustainable model that will not legally be permitted to continue much longer. It was already questionable whether the original AP would be permitted for use in Germany, and they only got it in for all of the EU via the more lax regulations in Holland.
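
To illustrate why such a statistics comparison can be tendentious even with honest counting, here is a toy calculation (all rates invented for illustration, not real accident data): AP miles are overwhelmingly motorway miles, which are safer per mile than the mixed driving in a fleet-wide baseline, so even a system adding zero safety can look better than average.

# Toy example of the selection-bias problem; all rates are invented.
motorway_rate = 0.5   # assumed crashes per million motorway miles
mixed_rate = 2.0      # assumed crashes per million city/rural miles

fleet_average = 0.5 * motorway_rate + 0.5 * mixed_rate   # 50/50 mix -> 1.25

# An AP-like system used only on motorways and adding zero safety benefit:
ap_rate = motorway_rate

print(f"Fleet average: {fleet_average} crashes per million miles")
print(f"'With AP' (motorway only): {ap_rate} crashes per million miles")
print(f"Apparent advantage: {fleet_average / ap_rate:.1f}x, purely from where it is used")
# -> 2.5x "safer" without improving anything, which is why a like-for-like
#    (motorway vs motorway) comparison would be the fair test.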

Tesla will be forced to submit their vehicles in advance, like every other competing OEM, to a phalanx of mandated minimal safety and function tests for each SAE level the car claims, before being licensed for public use. At least that is how it would work in a sane regulatory system, as the alternative is to have every greedy fly-by-night cowboy company testing their developmental short-cuts on the innocent public, as the ugly case of Uber has amply demonstrated.

It will somewhat cramp Tesla's style of tossing off half-baked software OTA updates, if they have to go through the regulators for approval first, but that is no bad thing, and should ultimately help quality and cut fatalities.
 
I believe the solution for Tesla is to remain at Level 2 (driver as the crutch) for a long time to come.