
What will happen within the next 6 1/2 weeks?

Which new FSD features will be released by end of year and to whom?

  • None - on Jan 1 'later this year' will simply become end of 2020! (Votes: 106, 55.5%)
  • One or more major features (stop lights and/or turns) to small number of EAP HW 3.0 vehicles. (Votes: 55, 28.8%)
  • One or more major features (stop lights and/or turns) to small number of EAP HW 2.x/3.0 vehicles. (Votes: 7, 3.7%)
  • One or more major features (stop lights and/or turns) to all HW 3.0 FSD owners! (Votes: 8, 4.2%)
  • One or more major features (stop lights and/or turns) to all FSD owners! (Votes: 15, 7.9%)

  • Total voters: 191
I do not believe that there will be an increase in deaths with autopilot. I feel EAP makes me a safer driver already. However, there will be deaths that would not have happened before. That's the dilemma: You'll save ten lives while killing one person who would not have died otherwise.

Right, and it's that one death (out of ten or ten thousand) that will keep the regulators and lawyers up at night (and, IMO, significantly delay the rollout of higher levels of self-driving).
 
The right thing to do right now would be for Tesla to refund the money with interest to all those who paid for robotaxi-level FSD, and let them keep the highest-level software features that their hardware can run, as compensation for the misleading promises.

Musk did say that FSD would be dependent on development and approval of the software, but IMO there was an implied promise.

Please, folks. Tesla does not "promise" to do anything. You all know that Musk shares his plans, but to construe these as "promises" is a little too much excitement over nothing. At least Daniel, here, admits that it might have been an implied promise in his opinion.

If you need a promise, get it in writing and get it signed by Musk himself. Otherwise, it's all hot air.
 
I don't see how it would prevent more accidents than the same technology applied to automatic emergency braking.
It seems like proving that city NoA improves safety will be very tricky. When the inevitable accidents happen, Tesla is going to need solid statistical evidence that it improves safety overall. So far, misuse of Autopilot has only injured or killed the driver; it will be very different when a third party is injured or killed.

It is indeed going to be a PR challenge, convincing the public that the accidents they see are vastly outnumbered by the accidents that never happened because FSD prevented them.

... FSD will have a calming behavior. ...

Based on my own experience I really believe this is true: When I am driving I get angry at the stupidity of other drivers. When I'm a passenger I just sit back and relax. And when I have EAP engaged, I stay alert, but stupid behavior by other drivers doesn't anger me, because EAP is dealing with it.

Right, and it's that one death (out of ten or ten thousand) that will keep the regulators and lawyers up at night (and, IMO, significantly delay the rollout of higher levels of self-driving).

Contrary opinions have been expressed, but I really think that insurance companies will have a big influence on regulators and lawmakers, and once insurance companies, which are the world's experts on risk, see that FSD systems have become safer than human drivers, they'll push hard for regulatory approval. The real issue is that we're still at least a decade away from generalized non-geofenced FSD. As I keep pointing out, Tesla has nothing yet that does not require constant driver attention.

Please, folks. Tesla does not "promise" to do anything. You all know that Musk shares his plans, but to construe these as "promises" is a little too much excitement over nothing. At least Daniel, here, admits that it might have been an implied promise in his opinion.

If you need a promise, get it in writing and get it signed by Musk himself. Otherwise, it's all hot air.

The implied promise is the timeline: that if you pay for FSD you'll get FSD within the expected lifetime of your car. Because if you say "buy this car and at some point in the future it will do X," and X is not available before normal wear and tear renders the car junk, then your promise was a lie.

Before moving the goal posts and changing the definition of FSD (including at the time I bought my car), Tesla and Musk said that if I paid for FSD (which I didn't) my car could operate as a robotaxi. My car will never be robotaxi-capable, because when FSD does become available it will require significant hardware upgrades that will not be realistically possible.

Then Tesla moved the goal posts and now "full self-driving" means the car will navigate city streets but will require a human driver to be alert and ready to take full control at any time, instantly, when the car fails to perform properly. Tesla lied, and in moving the goal posts has tacitly admitted that it cannot fulfill its original promise. By everyone else's definition, "full self driving" means the car does not require a driver. They never should have called their package "FSD." They should have called it EEAP (extended enhanced autopilot) and said that the $6,000 (or whatever) would get you all the new driver-assist features as they become available.

The Tesla Model 3 is the best car that's ever been built. It's really sad that they had to encumber it with an impossible promise.
 
Then Tesla moved the goal posts and now "full self-driving" means the car will navigate city streets but will require a human driver to be alert and ready to take full control at any time, instantly, when the car fails to perform properly. Tesla lied, and in moving the goal posts has tacitly admitted that it cannot fulfill its original promise. By everyone else's definition, "full self driving" means the car does not require a driver. They never should have called their package "FSD."

Your mistake is that you think driver supervision is the end goal, when it is only an intermediate goal. FSD still means no driver supervision once the software is fully validated; that is still Tesla's end goal, i.e. robotaxis. Remember that Tesla is working on full autonomy. AP is only a driver assist now, temporarily, while Tesla finishes full autonomy, so the requirement for the driver to pay attention is just a temporary intermediate step.

When Tesla first releases automatic city driving, the software will still be beta and won't be able to handle some edge cases, so the driver will need to pay attention. When Tesla has data that proves it is safe to remove the driver, then Tesla will remove the requirement for the driver to pay attention. If you are thinking "why doesn't Tesla make the software good enough that the driver is not required BEFORE releasing it to the public?", the answer is that Tesla needs fleet data to make the software good enough. So Tesla has to release the software and require driver supervision in the beginning.
 
If you are thinking, why doesn't Tesla make the software good enough so that the driver is not required BEFORE they release to the public, the answer is that Tesla needs fleet data to make the software good enough.
Shouldn't they then pay the drivers who are helping them build the feature, rather than taking money from them for a feature that doesn't exist and may never work on their cars?
 
It is indeed going to be a PR challenge, convincing the public that the accidents they see are vastly outnumbered by the accidents that never happened because FSD prevented them.
I think it's an engineering challenge to make even a supervised city NoA safe. Take for example the case where the car is stopped at a light. The light turns green and the car starts moving across the crosswalk and into the intersection. There are a million reasons it might not be safe to go, and Tesla is relying on the driver to press the brake to prevent the car from moving. I'm skeptical that there won't be a significant portion of people looking at their phone instead.
There's also a challenge for statisticians to prove that the system is actually safer. Somehow you've got to count all the accidents and compare them to a vehicle not running city NoA, controlling for operating conditions, driver demographics, accident severity, etc. After all that I suppose you've got a PR challenge, but honestly I think people will be way more accepting of self-driving vehicles than many around here fear.
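To make that comparison concrete, here is a minimal sketch in Python of the kind of per-mile rate comparison described above. All accident counts and mileages are made up for illustration (not Tesla data), and it deliberately skips the hard part of controlling for road type, demographics, and severity; it only asks whether two raw accident rates differ.

```python
# Minimal sketch: compare per-mile accident rates for two fleets.
# All numbers below are hypothetical, for illustration only.
from scipy.stats import binomtest

noa_accidents, noa_miles = 30, 200e6        # hypothetical city-NoA fleet
manual_accidents, manual_miles = 90, 400e6  # hypothetical comparison fleet

rate_noa = noa_accidents / noa_miles
rate_manual = manual_accidents / manual_miles
print(f"accidents per million miles: NoA {rate_noa*1e6:.2f}, manual {rate_manual*1e6:.2f}")

# Condition on the total accident count: under the null hypothesis of equal
# per-mile rates, each accident falls in the NoA group with probability
# proportional to NoA's share of the total mileage.
p_null = noa_miles / (noa_miles + manual_miles)
result = binomtest(noa_accidents, noa_accidents + manual_accidents, p_null)
print(f"p-value for 'rates are equal': {result.pvalue:.3f}")
```

Even in this toy version, the confounding problem is obvious: if city-NoA miles skew toward easy conditions (good weather, attentive early adopters), a lower raw rate proves very little.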
 
I think it's an engineering challenge to make even a supervised city NoA safe.

Autonomous driving is a challenge but it is not impossible. After all, Waymo already has unsupervised safe city self-driving.

Take for example the case where the car is stopped at a light. The light turns green and the car starts moving across the crosswalk and into the intersection. There are a million reasons it might not be safe to go, and Tesla is relying on the driver to press the brake to prevent the car from moving. I'm skeptical that there won't be a significant portion of people looking at their phone instead.

I am not sure. Look at AP now. Sure, there are some drivers who don't pay attention on AP now, but it is a relatively small number. It depends on how reliable people think traffic light response is: the less reliable it is, the more they will pay attention; the more reliable it is, the less they will want to pay attention. As others have said, the real challenge is when traffic light response gets to something like 99.9% reliable, because that's not reliable enough to actually stop paying attention, but it is reliable enough to lull people into a false sense of security.
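To put rough numbers on why ~99.9% sits in that uncomfortable zone, here is a quick back-of-envelope calculation. The per-light reliability, lights-per-day, and fleet-size figures are all assumptions for illustration, not measurements.

```python
# Back-of-envelope illustration of the ~99.9% reliability argument above.
per_light_success = 0.999   # assumed probability the car handles a light correctly
lights_per_day = 20         # assumed signalized intersections per driver per day

p_error = 1 - per_light_success
days_between_errors = 1 / (p_error * lights_per_day)
print(f"one mishandled light roughly every {days_between_errors:.0f} days per driver")

# Rare enough for an individual driver to stop expecting it,
# but still frequent across a large fleet:
fleet_size = 500_000
fleet_errors_per_day = fleet_size * lights_per_day * p_error
print(f"expected mishandled lights per day across the fleet: {fleet_errors_per_day:,.0f}")
```

Under those assumptions a driver sees roughly one error every 50 days, which is plenty of time to start trusting the system, while the fleet as a whole still produces thousands of errors per day. That is exactly the false-sense-of-security problem.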
 
I do not believe that there will be an increase in deaths with autopilot. I feel EAP makes me a safer driver already. However, there will be deaths that would not have happened before. That's the dilemma: You'll save ten lives while killing one person who would not have died otherwise. On another forum I frequent there is an occasional visitor who insists that this is unacceptable. If a machine will kill one person, he thinks it should not be permitted, even if it saves twenty thousand. But it's unavoidable: Computers do not make the kind of mistakes that people make, but they make other mistakes that people don't make.

Totally agree. Anyone who expects perfection will wait forever. I remember when airbags were new... there were one or two horrific deaths caused by the airbags, which of course got into the news and (for a while) generated hysteria about how they should be disconnected and blah blah. Of course, finally someone pointed to the far, far greater number of lives saved and all the nonsense died down.

I love the people who bleat on about "must be 100% reliable..." and then blindly jump into their cars and drive on a freeway packed with (not very good) human drivers, not seeing the irony of what they do.
 
Autonomous driving is a challenge but it is not impossible. After all, Waymo already has unsupervised safe city self-driving.

We absolutely do not know this yet.

Hmm well now, that depends on how you define "autonomous driving". If you mean a car that can mimic all the behaviors of human drivers (well, the good ones at least), then we certainly are nowhere close to that yet, since the human brain is vastly more complex than anything we can yet build (or imagine building, since we still know very little about how the brain works).

OTOH, if you define it as being able to drive between two pre-determined locations while obeying all relevant traffic laws with a safety record that exceeds that of a human driver, then I would argue that (low) goal is eminently do-able. I don't think it would be reasonable to argue that such a thing is "impossible". Hard perhaps, or even very hard. Hard to do economically perhaps. But not impossible.
 
OTOH, if you define it as being able to drive between two pre-determined locations while obeying all relevant traffic laws with a safety record that exceeds that of a human driver, then I would argue that (low) goal is eminently do-able. I don't think it would be reasonable to argue that such a thing is "impossible". Hard perhaps, or even very hard. Hard to do economically perhaps. But not impossible.

Your "low" goal is not as easy as you think. To make a car drive between two points you choose, the car needs to be able to drive in all situations without human intervention, in all conditions. Basically what I'm saying here is Level 4/5 autonomy may not actually be a solvable problem. We simply do not know, because we do not know how complex a system would need to be to solve this massive problem set.
 
I love the people who bleat on about "must be 100% reliable..."
I don't think I've seen anyone say anything like that on this forum. If someone did say something like that then they're a very small minority of people here.
I am not sure. Look at AP now. Sure, there are some drivers who don't pay attention on AP now, but it is a relatively small number. It depends on how reliable people think traffic light response is: the less reliable it is, the more they will pay attention; the more reliable it is, the less they will want to pay attention. As others have said, the real challenge is when traffic light response gets to something like 99.9% reliable, because that's not reliable enough to actually stop paying attention, but it is reliable enough to lull people into a false sense of security.
I think I've said that many times :p. I really want to see how they implement city NoA. I have a hard time imagining it.
 
So you think Waymo is lying?

I think they're fundraising. Just like Cruise, Aptiv, nuTonomy, VW, Baidu, Mobileye/Intel, Bosch, Continental, Daimler, Uber, Tesla, and the four dozen other companies working on autonomous driving. Don't you find it odd that some of these companies have been working for literally decades, and we're still no closer?

So again I say, we do not know if this is a solvable problem or not.
 
Contrary opinions have been expressed, but I really think that insurance companies will have a big influence on regulators and lawmakers, and once insurance companies, which are the world's experts on risk, see that FSD systems have become safer than human drivers, they'll push hard for regulatory approval. The real issue is that we're still at least a decade away from generalized non-geofenced FSD. As I keep pointing out, Tesla has nothing yet that does not require constant driver attention.

There is going to be a point at which insurers won't want self-driving, because with so few accidents it will drive down the premiums they can charge, and at that point their revenue is going to drop due to competition. We could actually see them working against self-driving in a kind of behind-the-back-of-the-hand way.