My S85D (2015) just gave itself a heart attack. I was driving through an intersection (green light) when it started to beep as if a collision were imminent. A red car showed up on the dash at the same time. I believe some automatic braking started to happen, but as I passed through the intersection, all of the warnings went away. I'm sure glad it didn't lock up the brakes on me and that nobody was behind me. The only thing I can imagine is that a bad shadow fooled it, or it hit some crazy corner case. This was around 4 pm, heading westbound in the Phoenix area.
Maybe it saw a Waymo ;)
 
Oncoming traffic is a great addition! It's a little piece of the FSD puzzle, since handling oncoming traffic will be key in city driving. Exciting!
In purely visualization terms, it's an important piece. I'm sure the car always recognized oncoming traffic - it just wasn't shown visually.

The visualization assures us that the car knows about oncoming traffic - and it might also tell us a little about why there is phantom braking. I suspect it happens because the car thinks some oncoming traffic is getting into its lane.
 
That would be a reasonable assumption. If correct, I am sure Tesla is tweaking the software so that the car can better distinguish between oncoming traffic that is a risk and oncoming traffic that is not. Once that's done, it should reduce phantom braking.

Overall, these little teases make me more optimistic for V10. I feel like V10 will be a nice step forward towards FSD. Tesla is clearly putting FSD together piece by piece. If we get the FSD visualizations and the car is able to react to traffic lights and stop signs, we will certainly be one step closer to "automatic city driving".
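Just to illustrate the kind of distinction I mean - a toy sketch of a threat filter, entirely my own invention and nothing to do with Tesla's actual code. The idea: only flag an oncoming car if a simple constant-velocity prediction puts it inside our lane within a short horizon.

Python:
# Toy oncoming-traffic threat filter - illustrative only, not Tesla's logic.
# Assumes a constant-velocity prediction of the other car's lateral drift.

def is_threat(lat_offset_m, lat_drift_mps,
              lane_half_width_m=1.8, horizon_s=3.0):
    """True if the oncoming car's predicted lateral position enters our
    lane within horizon_s. lat_offset_m is its current distance from our
    lane center; a negative drift means it is moving toward us."""
    predicted_offset = lat_offset_m + lat_drift_mps * horizon_s
    return abs(predicted_offset) < lane_half_width_m

print(is_threat(3.5, -0.8))  # True: drifting into our lane -> brake
print(is_threat(3.5,  0.0))  # False: holding its lane -> don't brake

A filter like that brakes for the first car and ignores the second; the hard part in real life is estimating the drift reliably, which is presumably where the phantom braking comes from.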
 
This is going to be the cover of Businessweek's next edition:

[Screenshot of the Businessweek cover]
 
Really ignores the concept of a Level 2 autonomous system, doesn't it?

By definition, human intelligence plus artificial vigilance will always be safer than human intelligence plus human vigilance, because people are awful at paying attention. It only gets worse when humans willfully ignore the fact that the system is Level 2 and leave the entire driving task to it.
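To put made-up numbers on that (and assuming the human and the system miss hazards independently, which real life won't fully honor):

Python:
# Back-of-envelope for human + machine vigilance. Numbers invented;
# independence of the two failure modes is assumed.

p_human_miss   = 0.01   # chance an attentive human misses a hazard
p_machine_miss = 0.05   # chance the driver-assist misses the same one

print(p_human_miss)                   # human alone: 0.01
print(p_human_miss * p_machine_miss)  # both watching: 0.0005, 20x better
# If the human stops watching, p_human_miss -> 1.0 and you are left
# with the machine's 0.05 alone - 5x worse than an attentive human.
print(1.0 * p_machine_miss)

The multiplication only helps while both are actually watching, which is exactly why misuse of a Level 2 system is so costly.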
 
It boggles my mind that such fallacies keep returning on so many subjects where so many people are dying every single day.

The general consensus appears to be that if:
- Current situation A causes 1000 deaths per month, and
- Considered new situation B fixes that, but causes 10 *different* deaths per month, then
- We must stick with situation A as it would be ethically unacceptable to send 10 people to their death

There was no actual logic harmed during this reasoning.
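Spelled out in plain arithmetic, with the hypothetical numbers above:

Python:
# The arithmetic the fallacy ignores - hypothetical numbers from above.
deaths_A = 1000  # deaths per month under the status quo
deaths_B = 10    # deaths per month under the new system (different victims)

print(deaths_A - deaths_B)  # 990 net lives saved per month by switching
# Refusing to switch doesn't save B's 10 victims for free; it trades
# them for A's 1000. The 990 extra deaths are just less visible.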

However, it could just be that the cover is terrible. The underlying question is absolutely valid as illustrated above, and is commonly known as the Trolley Problem.
 
@EVNow unless I missed it, the wiki chart at the beginning of this thread seems to be missing Summon functionality, even though it is explicitly part of FSD.

Perhaps something along the lines of "navigate parking lot and drive to owner/passenger" should be added to the "Parking" section. With Smart Summon, this basic functionality now exists for parking lots (but not covered garages yet).
 

Of those 6, the number should be revised down to 3 confirmed Tesla Autopilot fatalities.

Unconfirmed Tesla Autopilot fatalities:

1) The Uber fatality was due to a disabled automatic braking system, which should be classified as abuse.

2) The China case should have been confirmed as a Tesla Autopilot fatality by now, but it is still pending: the driver's family's legal team has yet to submit either 1) an audio recording of the chime when Autopilot was turned on or off (the trip's video/audio was recorded, but only the final footage was available, without the Autopilot chime), or 2) the car's data log, which the family's third party retrieved but has not submitted to the court for the past three years.

3) Osceola, FL: The referenced article did not claim that Autopilot was on; investigators want to obtain the car's log to confirm whether it was.

It said "she veered into the eastbound lanes in an effort to pass traffic"

[Photo of the accident scene]

If she only had plain 2019 Autopilot on her Model 3, I don't think it had the Auto Lane Change function needed to pass a slower car in front. And even if she had bought FSD, I don't think the system will cross a solid white line or solid double yellow lines like those shown at the accident scene above.

Confirmed Tesla Autopilot fatalities:
The other 3 have been confirmed by the National Transportation Safety Board.
 
Devil's advocate response: "That's not what these 6 people are thinking right now." The case can be made that they would likely be alive and well if it weren't for autonomous vehicles.
For them, sure, but there are others who are still here because of it. The net number of lives saved should be the important number. Hopefully more are saved than killed.

That said, I don't think anyone in their right mind should treat Autopilot or FSD (today) as something that replaces the human driver's concentration on the road. For the few people who think they can stop paying attention to driving (despite the warnings), that mistake can cost them their lives. Does Tesla do enough to make sure people remain vigilant while driving? I don't know.
 
It actually turned out to be a fair, balanced, well-written article: Bloomberg - Are you a robot?

To the posters above - no need to convince me. I run Autopilot on every road I think it can handle and monitor it closely in all circumstances. I accept and understand that we will have to break some eggs to make the cake, and I do my best to assist in fleet learning by feeding it useful disengagements. I was merely playing devil's advocate to illustrate that the counterpoints are not necessarily completely invalid, and knowing the 'enemy' is the first step in winning the battle.
 
Yes, it actually is an intelligent and balanced article.
For example:
"Humans have shown nearly zero tolerance for injury or death caused by flaws in a machine,” said Gill Pratt, who heads autonomous research for Toyota Motor Corp., in a 2017 speech. “It will take many years of machine learning, and many more miles than anyone has logged of both simulated and real-world testing, to achieve the perfection required.”

But such a high standard could paradoxically lead to more deaths than a lower one. In a 2017 study for Rand Corp., researchers Nidhi Kalra and David Groves assessed 500 different what-if scenarios for the development of the technology. In most, the cost of waiting for almost-perfect driverless cars, compared with accepting ones that are only slightly safer than humans, was measured in tens of thousands of lives. “People who are waiting for this to be nearly perfect should appreciate that that’s not without costs,” says Kalra, a robotics expert who’s testified before Congress on driverless-car policy.
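You can get a feel for the Rand result with a toy model - not their methodology, just the shape of the argument, and every number below is invented:

Python:
# Toy version of the Rand "cost of waiting" argument. All numbers
# invented; this is not the Kalra/Groves methodology.

human_deaths_per_year = 37_000   # rough US traffic fatality scale
years = 30

def total_deaths(deploy_year, initial_factor, yearly_gain):
    """Cumulative deaths if AVs deploy in `deploy_year` at
    `initial_factor` x human safety, improving `yearly_gain` x per year."""
    deaths, safety = 0.0, initial_factor
    for year in range(years):
        if year < deploy_year:
            deaths += human_deaths_per_year           # humans keep driving
        else:
            deaths += human_deaths_per_year / safety  # AVs, getting better
            safety *= yearly_gain                     # fleet learning
    return deaths

early = total_deaths(deploy_year=0,  initial_factor=1.1, yearly_gain=1.2)
late  = total_deaths(deploy_year=15, initial_factor=10.0, yearly_gain=1.2)
print(f"deploy slightly-safer now: {early:,.0f} deaths")
print(f"wait 15 yrs for 10x safer: {late:,.0f} deaths")
print(f"cost of waiting:           {late - early:,.0f} lives")

With these (invented) parameters, the "wait for much safer" path costs far more lives over 30 years, which is the Rand paper's point in miniature.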
 
@EVNow unless I missed it, the wiki chart at the beginning of this thread seems to be missing Summon functionality, even though it is explicitly part of FSD.

Perhaps something along the lines of "navigate parking lot and drive to owner/passenger" should be added to the "Parking" section. With Smart Summon, this basic functionality now exists for parking lots (but not covered garages yet).
Yes, that's the 1.2. I have yet to update the chart after V10.
 
The underlying question is absolutely valid as illustrated above, and is commonly known as the Trolley Problem.
A good example of "perfect is the enemy of the good".

PS: From the wiki page. It looks like, in general, people will support autonomy if it can clearly be shown to save lives.

The trolley problem has been the subject of many surveys in which approximately 90% of respondents have chosen to kill the one and save the five.[24] If the situation is modified where the one sacrificed for the five was a relative or romantic partner, respondents are much less likely to be willing to sacrifice their life.[25]

A 2009 survey published in a 2013 paper by David Bourget and David Chalmers shows that 69.9% of professional philosophers would switch (sacrifice the one individual to save five lives) in the case of the trolley problem. 8% would not switch, and the remaining 24% had another view or could not answer.[26]
 