
Another tragic fatality with a semi in Florida. This time a Model 3

It's hard to say in any particular accident involving AP whether the driver was at fault or whether AP should have prevented it. However, Tesla, and especially Elon, play really fast and loose with the capability of the car and bear some responsibility for people's misunderstanding of how AP works and its potential deficiencies.

The public has seen Elon on morning shows giving demos of the car seemingly driving itself (his hands not on the wheel), and news of him talking about how robotaxis are just around the corner. I've been on test drives where the salespeople at Tesla WAY oversell the self-driving capability and give absolutely no warning about the deficiencies. Given this, it's no surprise that I've had numerous people ask me about my car and be surprised when I tell them that the car's autopilot should be considered advanced cruise control that needs to be watched at all times and can't be relied on.

When the public hears all the hype from Tesla about the autopilot capabilities and gets no real instruction on its limitations, I think the warnings in the car’s manual and onscreen can be mistakenly viewed as “wording from the lawyers”. Drivers are ultimately responsible for controlling the vehicle, but Tesla should be more forthcoming and proactive about the limitations of the system in its current form.
 
I don't think response time is the total fix. Look at those scenarios while you drive... did you see the stopped car? If you did, then the car can see it also. It then becomes a path-analysis problem to fix.
Yeah, the traffic stop is near home so I can test it regularly; that's where I leave it late to brake!
That's why I think it's FSD 3.0, with its higher processing speed, that will fix it.
 
Aside from the technical issues involved, examining this photo makes me wonder whether, barring any other option, the driver (with enough notice) could have ducked to the side to save his life. Sitting in my Model 3 Performance, I tried leaning right to see if I could get low enough below the strike line. I suspect that this would be quite challenging and that there are other factors at play here, including falling debris (compressed glass and metal) as well as trailer components that may have hung lower than what we can see.

View attachment 408659
It would have been a case where, 99.99% of the time, his being inattentive would have been fine on Autopilot.
But this one time there was a semi trailer there, and Tesla still hasn't fixed the shortcoming.
 
It would have been a case where, 99.99% of the time, his being inattentive would have been fine on Autopilot.
But this one time there was a semi trailer there, and Tesla still hasn't fixed the shortcoming.

But isn't that always the case with fatal accidents? You've got some combination of failures (perhaps two or three) coming together to produce a fatality. This, I believe, is the point of the ARS article quoted above: being 99.99 percent reliable just isn't good enough, because it trains humans to be negligent and provides a false sense of security. Humans are by nature risk takers; they are often more trusting than they should be, and their attention is very easily diverted. The risk of attention deficit has grown in recent years due to technological advances, not decreased, to the point where statistics now show that distracted driving due to mobile devices is six times more dangerous than drunk driving. Strangely enough, 99.99 percent uptime over a 24-hour period allows roughly the 10 seconds of downtime this accident required; we would need to get to 99.999% to assure safety.
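To put rough numbers on that uptime analogy, here is a minimal back-of-the-envelope sketch (the figures are illustrative, not taken from any crash data):

```python
# Back-of-the-envelope check of the uptime analogy above.
seconds_per_day = 24 * 60 * 60                      # 86,400 s in 24 hours
lapse_at_9999 = seconds_per_day * (1 - 0.9999)      # ~8.6 s of inattention per day
lapse_at_99999 = seconds_per_day * (1 - 0.99999)    # ~0.86 s per day

print(f"99.99%  attentive -> about {lapse_at_9999:.1f} s of lapses per day")
print(f"99.999% attentive -> about {lapse_at_99999:.2f} s of lapses per day")
```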
 
But isn't that always the case with fatal accidents? You've got some combination of failures (perhaps two or three) coming together to produce a fatality. This, I believe, is the point of the ARS article quoted above: being 99.99 percent reliable just isn't good enough, because it trains humans to be negligent and provides a false sense of security. Humans are by nature risk takers; they are often more trusting than they should be, and their attention is very easily diverted. The risk of attention deficit has grown in recent years due to technological advances, not decreased, to the point where statistics now show that distracted driving due to mobile devices is six times more dangerous than drunk driving. Strangely enough, 99.99 percent uptime over a 24-hour period allows roughly the 10 seconds of downtime this accident required; we would need to get to 99.999% to assure safety.
Lex Fridman's work at MIT showed that Tesla drivers are more attentive while on autopilot
 
Lex Fridman's work at MIT showed that Tesla drivers are more attentive while on autopilot

If there is one set of cases that proves him wrong, this is it. I've grown to learn that one should never be heavily influenced by a single study, especially a small one that is not peer reviewed. There are plenty more studies that track attentiveness in systems that build trust over time (like rail signalling and airport security), where guardrails had to be built in to counteract this natural effect. I don't think Tesla drivers are any more or less attentive; they're probably about the same, yet we have regular lapses in security screening and fatal train crashes. The entire system is a competing mix of trust versus attentiveness, and we only need a very small area of overlap to result in a fatal mistake. Elon was right when he said that the system has to prove itself many times safer than a human before we will see widespread adoption. We are not there yet, and the positive messaging is contributing to the confusion and potentially causing users to take risks with their lives.
 
It would have been a case where, 99.99% of the time, his being inattentive would have been fine on Autopilot.
But this one time there was a semi trailer there, and Tesla still hasn't fixed the shortcoming.
Tesla has always been clear that the driver needs to pay attention to the road at all times and keep hands on the controls. The driver in this case did not.
 
If there is one set of cases that proves him wrong, this is it. I've grown to learn that one should never be heavily influenced by a single study, especially a small one that is not peer reviewed. There are plenty more studies that track attentiveness in systems that build trust over time (like rail signalling and airport security), where guardrails had to be built in to counteract this natural effect. I don't think Tesla drivers are any more or less attentive; they're probably about the same, yet we have regular lapses in security screening and fatal train crashes. The entire system is a competing mix of trust versus attentiveness, and we only need a very small area of overlap to result in a fatal mistake. Elon was right when he said that the system has to prove itself many times safer than a human before we will see widespread adoption. We are not there yet, and the positive messaging is contributing to the confusion and potentially causing users to take risks with their lives.
One anecdote doesn't prove anything. Anecdotes are not data.
 
This is definitely not true. The human eye/brain system is far superior at perception and understanding to any of today's computer vision systems. Some people say AI just needs a few billion more neurons; others think there are still some fundamental reasoning skills missing that we haven't understood yet.
Human reaction time is fixed at about 0.3 seconds (if you're paying attention). Neural networks are much faster and always pay attention.
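To make that reaction-time figure concrete, here's a minimal sketch of the distance a car covers before a 0.3 s human reaction even begins (the speeds are my own assumed examples, purely for illustration):

```python
# Distance travelled during a fixed reaction delay, before any braking starts.
# The 0.3 s figure comes from the post above; the speeds are assumed examples.
def reaction_distance_m(speed_mph: float, reaction_s: float = 0.3) -> float:
    speed_ms = speed_mph * 0.44704   # convert mph to m/s
    return speed_ms * reaction_s

for mph in (45, 55, 68):
    print(f"{mph} mph: {reaction_distance_m(mph):.1f} m covered before any reaction")
```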
 
Aside from the technical issues involved, examining this photo makes me wonder whether, barring any other option, the driver (with enough notice) could have ducked to the side to save his life. Sitting in my Model 3 Performance, I tried leaning right to see if I could get low enough below the strike line. I suspect that this would be quite challenging and that there are other factors at play here, including falling debris (compressed glass and metal) as well as trailer components that may have hung lower than what we can see.

View attachment 408659
In Europe, all trucks must have side skirts to prevent this all-too-common type of accident. The US is lacking, as usual, in safety regulations due to industry pressure to save money.
 
One anecdote doesn't prove anything. Anecdotes are not data.

Tesla has the data and isn’t telling anyone. They aren’t releasing the raw data for fear that it will be interpreted or turned against them. The comparative studies in attention deficit in train transportation and security screening are not anecdotal. This story will also prove not to be anecdotal by the time the NTSB's report is finalized, and it will join a small number of cases they’ve investigated.

All the government (or a private body) has to do is subpoena Tesla's data, which will show the number of users whose hands are not on the wheel during Autopilot driving events. That number is going to be a whole lot higher than for drivers of cars without Autopilot, for the simple reason that Autopilot makes it possible to remove your hands from the wheel. I suspect this evidence will show that users are not complying with guidelines or accepted best practices, and it may even demonstrate some surprising results, such as an older population of adults taking risks equivalent to those traditionally taken by the teen population.

Compliance by humans is a funny thing, and we are terrible at it. Give a human the option to be out of compliance or make poor choices, and they'll take it, sometimes to a horrifying degree. In medicine, close to 50% of patients don't take their prescribed medications regularly. Give them electronic reminders and studies show we can bring that number up to around 80 percent, but what about the remaining people? As more Teslas hit the roads, the number of people who remain out of compliance will increase; Tesla is now in a race against time to contain and improve on what it has released. There is no way we can bear a growing number of AP-related fatalities without some regulatory, commercial, or legal impact. Even if Tesla gets it right, or significantly improves its technology, it will continue to bear some risk for its legacy systems still on the road; that is just a reality. Put another way, while Tesla sets out to improve driver safety, it will inevitably experience periodic drops in safety regardless of its documented compensating controls. Whether this lies entirely in the driver's area of responsibility or whether Tesla bears some of it remains to be seen. It feels like it is moving toward a shared-responsibility model, which has been the path other cases have taken in air transportation, medicine, security, and safety oversight.
 
First, you have no idea what Tesla's data shows, so you are just talking nonsense in the first paragraph. Do not just assume they know things and are hiding them.... it makes you sound like Alex Jones.

Second, they have no way of telling if "hands are on the wheel". NONE! All they can tell is if you are applying torque. I get told to put my hands on the wheel all the time when they are already there.

It is fine to have discussions, but just inserting hyperbole does not help anyone.
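For what it's worth, torque-based detection amounts to roughly the sketch below; the threshold and behaviour are my own assumptions, not Tesla's actual implementation, but they show why resting hands can look like no hands:

```python
# Hypothetical torque-based "hands on wheel" check: the car only sees applied
# steering torque, so hands resting without torque are indistinguishable from
# no hands at all. The threshold is an assumed value, not Tesla's.
TORQUE_THRESHOLD_NM = 0.3

def hands_detected(recent_torque_samples_nm):
    """True if any recent sample shows torque above the nuisance threshold."""
    return any(abs(t) > TORQUE_THRESHOLD_NM for t in recent_torque_samples_nm)

print(hands_detected([0.0, 0.05, 0.1]))  # False: hands resting lightly -> nag anyway
print(hands_detected([0.0, 0.4, 0.1]))   # True: a small tug on the wheel registers
```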
 
Human reaction time is fixed at about 0.3 seconds (if you're paying attention). Neural networks are much faster and always pay attention.

But there is a difference between reaction time and "perception/understanding". Also, NNs may always be paying attention, but are they paying attention to everything in the way that it needs to be? There was a dashcam video of a user who had someone run a red light perpendicular to their direction of travel. Does a Tesla see (vision) that car coming from the right in time to slow down? Yes. Does it see (radar) the car coming from the right in time to slow down ahead of time? Maybe. A NN "always paying attention" basically means it is looking at everything all at once, right? So in this scenario it would be like a human staring to the right and seeing the red-light runner coming. A human would see this and perceive, due to the speed of the car, that it was going to run the red light. Therefore, the human wouldn't accelerate as fast, if at all, to allow the car to pass.

A Tesla, right now, though it has a faster reaction time, does not always have as much awareness of the environment as a human does.

I've said it before, if you take a scenario and quickly take the blindfold off, the computer will beat the human every time, but that isn't real world.
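A toy way to express the red-light-runner reasoning above is a time-to-conflict comparison; everything here (the numbers, the margin, the constant-speed assumption) is invented for illustration and is not how any production stack works:

```python
# Toy cross-traffic check: do the ego car and the red-light runner reach the
# conflict point at roughly the same time? Constant speeds are assumed.
def paths_conflict(ego_dist_m, ego_speed_ms, cross_dist_m, cross_speed_ms,
                   margin_s=1.5):
    t_ego = ego_dist_m / max(ego_speed_ms, 0.1)
    t_cross = cross_dist_m / max(cross_speed_ms, 0.1)
    return abs(t_ego - t_cross) < margin_s  # arriving too close together in time

# Ego 40 m from the intersection at 12 m/s; runner 50 m out at 18 m/s:
print(paths_conflict(40, 12, 50, 18))  # True -> hold back rather than accelerate
```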
 
First, you have no idea what Tesla's data shows, so you are just talking nonsense in the first paragraph. Do not just assume they know things and are hiding them.... it makes you sound like Alex Jones.

Second, they have no way of telling if "hands are on the wheel". NONE! All they can tell is if you are applying torque. I get told to put my hands on the wheel all the time when they are already there.

It is fine to have discussions, but just inserting hyperbole does not help anyone.

These are Elon's words, not my own. He has been asked this question (recently), and that was his response. That vendors want control of how their data is interpreted is nothing new. By way of example, the pharmaceutical industry here in the US spends considerable resources trying to present its numbers in a favorable light. Not surprisingly, when a third party does the same analysis, it comes up with grossly different results. Just as we should not rely on the manufacturers for interpretation there, we should not here. This is not a judgment of their validity or even their trustworthiness; it is just good science.

As to whether or not Tesla can reliably determine whether hands are on the wheel - that is a different discussion. There are alternative technologies available that can assess attention, but their deployment can be challenging. If Tesla were to admit that they have no way of determining whether or not a user has their hands on the wheel, then AP would likely need to be recalled.

The advertising industry uses one such technology: neurological sensors placed on the skull in the form of a cap, coupled with eye-tracking sensors. I took part in one such study myself and had the opportunity to speak with its designer, and what they were able to determine about the user's attention was incredible. They knew down to the millisecond when attention was diverted, for how long, and where the focus went when it corresponded to directional vision changes. This is the kind of data I would expect to see in a real study, but its deployment in the field would be a challenge.
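As a rough idea of the kind of metric such a study can produce, here is a minimal sketch that turns timestamped on-road/off-road gaze labels into glance statistics; the data and labels are entirely made up:

```python
# Made-up gaze samples: (timestamp in seconds, whether gaze was on the road).
samples = [
    (0.00, True), (0.25, True), (0.50, False), (0.75, False),
    (1.00, False), (1.25, True), (1.50, True),
]

off_road_total = 0.0
longest_glance = 0.0
current_run = 0.0
for (t0, on_road), (t1, _) in zip(samples, samples[1:]):
    dt = t1 - t0
    if not on_road:
        current_run += dt
        off_road_total += dt
        longest_glance = max(longest_glance, current_run)
    else:
        current_run = 0.0

print(f"off-road total: {off_road_total:.2f} s, longest glance away: {longest_glance:.2f} s")
```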
 
Tesla has the data and isn’t telling anyone. They aren’t releasing the raw data for fear that it will be interpreted or turned against them. The comparative studies in attention deficit in train transportation and security screening are not anecdotal. This story will also prove not to be anecdotal by the time the NTSB's report is finalized, and it will join a small number of cases they’ve investigated.

All the government (or a private body) has to do is subpoena Tesla's data, which will show the number of users whose hands are not on the wheel during Autopilot driving events. That number is going to be a whole lot higher than for drivers of cars without Autopilot, for the simple reason that Autopilot makes it possible to remove your hands from the wheel. I suspect this evidence will show that users are not complying with guidelines or accepted best practices, and it may even demonstrate some surprising results, such as an older population of adults taking risks equivalent to those traditionally taken by the teen population.

Compliance by humans is a funny thing, and we are terrible at it. Give a human the option to be out of compliance or make poor choices, and they'll take it, sometimes to a horrifying degree. In medicine, close to 50% of patients don't take their prescribed medications regularly. Give them electronic reminders and studies show we can bring that number up to around 80 percent, but what about the remaining people? As more Teslas hit the roads, the number of people who remain out of compliance will increase; Tesla is now in a race against time to contain and improve on what it has released. There is no way we can bear a growing number of AP-related fatalities without some regulatory, commercial, or legal impact. Even if Tesla gets it right, or significantly improves its technology, it will continue to bear some risk for its legacy systems still on the road; that is just a reality. Put another way, while Tesla sets out to improve driver safety, it will inevitably experience periodic drops in safety regardless of its documented compensating controls. Whether this lies entirely in the driver's area of responsibility or whether Tesla bears some of it remains to be seen. It feels like it is moving toward a shared-responsibility model, which has been the path other cases have taken in air transportation, medicine, security, and safety oversight.
Tesla publishes a quarterly safety report.
Tesla Vehicle Safety Report
It consistently shows Tesla cars and drivers are much safer than average.
 
I see people on this forum say all the time that EAP is better than most humans. I feel like Tesla is partially responsible for creating that impression. You've got the CEO saying things like this:
“Well, we already have Full Self-Driving capability on highways. So from highway on-ramp to highway exiting, including passing cars and going from one highway interchange to another, Full Self-Driving capability is there."

I worry that if Tesla is not careful they will make EAP less safe (if it's not already) in real world usage and the system will be banned. None of us want that.
It's a false sense of security indeed. And Tesla insists on using Elonisms, rating safety by accidents per mile with AP on versus off, ignoring that AP can only be engaged in situations where even my mom used to be comfortable driving a fully loaded car with a trailer and four bikes mounted: open roads, no side streets. She was as insecure as a driver gets, but happy to floor our light Opel Kadett station wagon down highways she'd never seen before. Yup, 130 km/h in the late 80s; quite a lot for such a car, with a heavy trailer, bikes, and kids on board. On those open roads AP is comfortable, and this is not where the most accidents happen per mile driven. Lots of cars use these roads, and if there is an accident it makes the newspaper, not only when it's a Tesla with its driver in the back seat doing whatever. Long story short, I actually doubt whether AP scores as well as a human driver if you count only the miles driven on the kinds of roads used for the AP-engaged metric. The combination of anomalous AP errors and the lack of attention demanded from the driver may well cause more accidents than when the driver is actually driving.
I see no path for Tesla to move from today's skewed reality to a car that stands the slightest chance of convincing a regulator in any time span measured in years that can be counted on one hand. Something needs to change, and not just time elapsing for genius to manifest.
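To illustrate the selection effect being argued here, a toy calculation: if AP is only engaged on the easiest roads, comparing AP-on miles against all miles flatters it even when it adds zero benefit. Every number below is invented purely to show the arithmetic:

```python
# Assumed baseline accident rates (per million miles) and mileage split.
rates = {"highway": 0.5, "city": 2.0}
miles_million = {"highway": 60, "city": 40}

fleet_rate = (sum(rates[r] * miles_million[r] for r in rates)
              / sum(miles_million.values()))
ap_on_rate = rates["highway"]  # AP engaged only on easy highway miles

print(f"fleet average: {fleet_rate:.2f} accidents per million miles")   # 1.10
print(f"AP-on (highway only): {ap_on_rate:.2f}  <- looks safer with zero benefit")
```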
 
LIDAR would see a solid object in the path of the vehicle, regardless of the mounting height. It simply measures distance.

We were talking about a car that might be 2 or 3 cars ahead of the one immediately in front of your car; you may have missed that context.

The perspective a LIDAR system would have (i.e., how far beyond immediate objects it could detect) would depend on the mounting height. Obviously it would require an algorithm to understand that the difference in approach velocity between the immediate object (slow) and the distant object (fast) meant traffic ahead was slowing.
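The mounting-height point can be made concrete with simple line-of-sight geometry; a sketch, with heights and distances picked purely for illustration:

```python
# A sensor at height h_s can only see the road surface beyond a leading vehicle
# of roof height h_v (at distance d) if h_s > h_v; the sight line grazing the
# roof reaches the ground at d * h_s / (h_s - h_v). Values are illustrative.
def ground_visible_beyond_m(sensor_h_m, lead_roof_h_m, lead_dist_m):
    if sensor_h_m <= lead_roof_h_m:
        return float("inf")  # can never see the road over the lead vehicle
    return lead_dist_m * sensor_h_m / (sensor_h_m - lead_roof_h_m)

print(ground_visible_beyond_m(1.9, 1.5, 20))  # roof-mounted sensor: ~95 m
print(ground_visible_beyond_m(0.7, 1.5, 20))  # bumper-mounted sensor: inf (blocked)
```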
 
Yeah, the traffic stop is near home so I can test it regularly; that's where I leave it late to brake!
That's why I think it's FSD 3.0, with its higher processing speed, that will fix it.

I’m just not sure it’s a speed processing problem. It seems, and of course I could be very wrong with this assumption, to be a fairly simple scenario.

The cameras seeing a stopped vehicle and picking up a stationary object with radar should be a simple scenario to process.
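A minimal sketch of the "path analysis" step discussed earlier: is the stationary detection inside our lane, and how hard would we need to brake for it? The lane width, comfort threshold, and flat-deceleration model are all assumptions for illustration:

```python
# Toy check for a stopped object ahead: in-path test plus the deceleration
# needed to stop before it (v^2 = 2*a*d). All thresholds are assumed.
def brake_assessment(obj_lateral_m, obj_range_m, ego_speed_ms,
                     lane_half_width_m=1.8, comfort_decel_ms2=3.0):
    in_path = abs(obj_lateral_m) < lane_half_width_m
    needed_decel = ego_speed_ms ** 2 / (2 * max(obj_range_m, 0.1))
    return in_path, needed_decel, needed_decel <= comfort_decel_ms2

# Stopped car dead ahead at 60 m while travelling at 25 m/s (~90 km/h):
print(brake_assessment(0.0, 60, 25))  # (True, ~5.2 m/s^2, False) -> brake hard now
```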
 
What's frustrating (as far as any web forum can be truly frustrating) is that being skeptical about FSD's capability in the near term doesn't diminish Tesla's advantage in the marketplace, which is very real. No one else is any closer than they are.
The advantage may exist only from the perspective of consumers who want to do as little driving/paying attention as possible, TODAY. Other FSD players are not trying to put their best effort into cars today as some diluted driving aid. They are funded, and they are working toward the finish line, not the various drink stations along the way.
Had Waymo, for instance, had an EAP competitor available to carmakers, it might well blow all of their current driving aids out of the water and compete nicely with what Tesla is doing. We don't know, because they have only one goal.
 