Model X Crash on US-101 (Mountain View, CA)

I do wonder if there have been serious accidents from other cars using similar technologies. Nearly all new Hondas have lane keep assist that operates similarly to AP1.

Eh? I can’t speak to Honda or Ford but I am very familiar with GM and Toyota LKA and they have little in common with AP.

AP is applying torque to the wheel nearly all the time to keep the vehicle centered in the lane.

LKA only activates if you stray too close to the edge of the lane. And even then the GM and Toyota systems behave differently. I'd be curious to hear about Ford and Honda.

GM applies a good nudge back toward the lane center, such that a distracted driver with hands on the wheel will feel the car nudging them back into the lane. Sometimes in curves I feel the steering ease up a bit as it starts contributing torque (if I happen to be too far on the outside of the turn). GM only beeps if lane departure is imminent despite the corrective torque.

Toyota's seems much weaker. Often you barely feel it, but you always hear it: Toyota beeps at you all the time. Annoying, but perhaps safer. I guess if hands were off the wheel it might apply enough torque to nudge the car back into the lane. It then actually applies a little counter-torque to center in the lane. GM doesn't do that. Hands-free on GM results in a ping-pong back and forth in the lane, eventually with enough lateral speed that you leave the lane.

With hands on the wheel, GM does a good job of nudging you back into the lane without a peep. But I found over time this actually made me a LESS attentive driver.
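
To make the contrast concrete, here's a rough sketch of the two control philosophies in Python. All of the gains, thresholds, and lane geometry are numbers I made up purely for illustration; real systems are far more complicated.

```python
import math

def autopilot_torque(offset_m):
    """AP-style lane centering: torque is applied nearly all the
    time, proportional to the offset from lane center."""
    K_CENTER = 2.0  # hypothetical gain, Nm per meter of offset
    return -K_CENTER * offset_m

def lka_torque(offset_m, half_lane_m=1.8):
    """LKA-style: a silent dead band mid-lane, with a corrective
    nudge only when you stray too close to the lane edge."""
    EDGE_ZONE_M = 0.4  # hypothetical activation band near the edge
    K_NUDGE = 5.0      # hypothetical nudge gain
    dist_to_edge = half_lane_m - abs(offset_m)
    if dist_to_edge > EDGE_ZONE_M:
        return 0.0  # mid-lane: no assist at all
    push = K_NUDGE * (EDGE_ZONE_M - dist_to_edge)
    return -math.copysign(push, offset_m)  # nudge back toward center

print(autopilot_torque(0.3))  # -0.6: always correcting
print(lka_torque(0.3))        #  0.0: dead band, driver is on their own
print(lka_torque(1.6))        # -1.0: near the edge, the nudge kicks in
```

The dead band is also consistent with the hands-free ping-pong: each nudge pushes the car across the lane until it triggers the opposite edge zone, instead of settling at the center.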
 
That's the meaning of an imperfect, unfinished beta product. Competencies are accomplished incrementally, not all at the same time.
Competencies are accomplished by prioritizing in order of importance. I don't understand WHY safety is not first and foremost, before any automation. I am aware of the current software limitations, but that does not prevent me from being perplexed and wondering, right? If you are satisfied with safety coming after automation, then fatal accidents like this are to be expected.
 
Competencies are accomplished by prioritizing in order of importance. I don't understand WHY safety is not first and foremost, before any automation. I am aware of the current software limitations, but that does not prevent me from being perplexed and wondering, right? If you are satisfied with safety coming after automation, then fatal accidents like this are to be expected.

Look at it as a range of accident types. Anything that reduces the number of accident types or frequency improves safety. Reducing crashes caused by cars leaving their lane is making things safer, potentially for multiple cars. If you say that no one can release a system unless it covers a specific 80% of accident types, are you making things safer?

Edit: article on features and accident rates
New report shows how many accidents, injuries collision avoidance systems prevent

 
I think releasing a feature with known limitations is perfectly fine, and it's standard practice across industries. However, most people expect some type of safeguard to prevent the operator from operating the machine outside of the feature's limitations. For example, fly-by-wire has alternate control laws that it degrades to when it encounters faults. Only with the very last backup is the pilot the sole safeguard of the airplane.

In the case of Tesla Autopilot, the current safeguard appears to be the driver and the driver only. If the driver is not monitoring the car constantly, then the car can operate outside of its limitations. The question then is whether the driver should be the first and only safeguard, or whether there should be another layer or two of protection before a driver needs to intervene. We all know today's drivers are often distracted, so should Autopilot count on a potentially distracted driver to correct its mistakes when Autopilot is supposed to be there to help distracted drivers?
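
A toy sketch of what "another layer or two" could look like, borrowing the fly-by-wire idea of degrading through control laws. Every mode name and threshold here is invented; this is not how any shipping system is actually structured.

```python
def assist_mode(on_validated_road, lane_confidence, hands_on_wheel):
    """Toy decision chain: several automated layers sit in front of
    the driver, so the human is the LAST safeguard, not the first.
    All modes and thresholds are invented for illustration."""
    if not on_validated_road:
        return "refuse to engage"                # layer 1: operating-domain limit
    if lane_confidence < 0.5:
        return "alert and hand back control"     # layer 2: self-diagnosed doubt
    if lane_confidence < 0.8:
        return "degraded mode: slow down, tighten hands-on checks"  # layer 3
    if not hands_on_wheel:
        return "escalating alerts, then safe stop"  # layer 4: driver monitoring
    return "normal operation"

print(assist_mode(True, 0.65, True))   # degraded mode: slow down, ...
print(assist_mode(True, 0.95, False))  # escalating alerts, then safe stop
```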
Very well said, and you have a good understanding of safety engineering. My hope is that Tesla's AI will place more emphasis on an "absolute" safety philosophy. If not, there will be more dead bodies piling up on our public roads, chalked up to "driver" fault.
 
I think things will get worse before they get better. The plant I worked at was highly automated. If a system was ~90% reliable, you would watch it closely to ensure it was performing correctly, and errors in the automation would usually be caught. But if it was ~99% reliable, you wouldn't watch it as closely, and even though the system was more reliable overall, the errors that go uncaught would increase. The overall error rate won't fall until it's ~99.9999% reliable. I think Autopilot will get there, but it's gonna get worse before it gets better, and I don't think there's really any way around that.
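
To put rough numbers on that effect (all of these rates are invented, just to show the shape of the problem):

```python
def uncaught_errors_per_1000(automation_error_rate, human_catch_rate):
    """Errors that slip through = automation errors the monitor misses."""
    return 1000 * automation_error_rate * (1 - human_catch_rate)

# ~90% reliable system, watched closely, operator catches 95% of faults:
print(uncaught_errors_per_1000(0.10, 0.95))  # ~5 misses per 1000 operations
# ~99% reliable system, watched loosely, operator catches only 30%:
print(uncaught_errors_per_1000(0.01, 0.30))  # ~7 misses per 1000 operations
```

The more reliable machine ends up with more uncaught errors, which is exactly the "worse before better" trap.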
 
Look at it as a range of accident types. Anything that reduces the number of accident types or frequency improves safety. ...

This accident, based on Tesla's statement, is related to a use of Autopilot where the system (Autopilot + driver monitoring) FAILED!
 
This accident, based on Tesla's statement, is related to a use of Autopilot where the system (Autopilot + driver monitoring) FAILED!

I would say it is the driver driving and AP/FCW/EAP monitoring, given that these are assistance features. Drivers should never hit barriers.

The existing lanes, lane following, barrier design, and driver actions all combined in a region of the operation space that is not yet covered by the software.
 
...I don't understand WHY safety is not first and foremost, before any automation...

Remember the World War 2 radar that couldn't tell which one was a very small, harmless flying piece of aluminum chaff and which one was a big, dangerous bomber?

If you read the article about the radar, the safest thing to do is to immobilize your car so it doesn't move an inch out of the safety of your garage.

That's how you can accomplish safety first.

But would anyone pay to have their car safely immobilized in their garage?

Probably not. Thus, if you want to move your car an inch, someone or something needs to know how to brake.

Scientists have been trying to perfect radar so it knows when to brake. However, it is still an imperfect patchwork that can't reliably brake for stationary objects.
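
A crude sketch of why that is hard (this is not Tesla's code, and the threshold is invented): a radar return whose closing speed exactly cancels your own speed looks just like an overhead sign or a bridge, so systems tend to discard it at highway speed.

```python
def should_brake_for(ego_speed_mps, range_rate_mps):
    """Crude radar-style target filter. A stationary object closes at
    exactly -ego_speed, so its computed ground speed is ~0, which is
    also what overhead signs, bridges, and roadside clutter look like.
    Threshold invented for illustration."""
    target_ground_speed = ego_speed_mps + range_rate_mps
    if abs(target_ground_speed) < 1.0:
        return False  # "stationary" return: discarded as likely clutter
    return True       # moving target: track it, brake if needed

print(should_brake_for(31.0, -12.0))  # True: a slower car ahead
print(should_brake_for(31.0, -31.0))  # False: a stationary barrier is filtered out
```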

What are you supposed to do with a car with radar that cannot reliably brake every time?

Tesla thinks the imperfect radar should not be withheld from early adopters--those who voluntarily pay to grow with the system.

However, Waymo/Google/Alphabet thinks otherwise. They think humans are not to be trusted to drive. Thus, they do not believe in developing Advanced Driver-Assistance Systems (ADAS); rather, they focus only on eliminating the human from driving by going straight to Autonomous Vehicles.

So, we are now at the fork of ADAS and Autonomous Vehicles.

If we take Waymo's way of Autonomous Vehicles, how long do we have to drive manually without ADAS in the meantime?

And even when non-Tesla Autonomous Vehicles are perfected, there is no promise they will be sold to the general public rather than to commercial companies only.

That leaves me no choice but to pick ADAS any day while waiting for Autonomous Vehicles.

With more than a year of using Autopilot, I believe it is very much safer than my prior manual driving.

But if I am inattentive, I would still die, whether with the old manual driving system or the new Autopilot system.

In summary, for Autopilot today, safety is the responsibility of the driver; don't count on the expensive hardware/software.

I've personally experienced it getting better and better with every update and I do enjoy using Autopilot very much!
 
Remember the World War 2 radar that couldn't tell which one was a very small, harmless flying piece of aluminum chaff and which one was a big, dangerous bomber? ...

Tesla said the advantage of Autopilot is improved safety. In this case: Tesla, we have a problem!
 
So yes, it can function as designed, but with bad lane markings (like the construction barrier crash), the lanes are not what you want it to follow. In other words: lane following is not always the correct algorithm for the situation, but that doesn't mean the lane following algorithm did anything incorrectly.
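
To illustrate that point, here is a minimal caricature of a lane follower (gain and line positions invented): it steers toward the midpoint of whatever two lines the vision system hands it, and it does so "correctly" even when faded paint makes a gore area look like a lane.

```python
def steer_command(left_line_m, right_line_m, K=0.5):
    """Steer toward the midpoint of the two detected lane lines.
    Positions are lateral offsets from the car; gain K is invented."""
    lane_center = (left_line_m + right_line_m) / 2.0
    return K * lane_center  # working exactly as designed

# Normal lane: lines at -1.8 m and +1.8 m -> center at 0, no correction:
print(steer_command(-1.8, 1.8))  # 0.0
# Gore area misread as a lane: lines at -1.8 m and +4.0 m -> the car
# dutifully steers toward a "center" that happens to contain a barrier:
print(steer_command(-1.8, 4.0))  # ~0.55, steering toward +1.1 m
```

The function never malfunctions; the failure is that "follow the lines" was the wrong objective for that stretch of road.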

It seems likely to me that AP did function as designed here. In other words, there was nothing different about this car's AP system, sensors, and steering mechanisms compared to any other car of the same build period and software version. So I'm guessing (and this is just a guess) that there was no manufacturing defect here, nor was there a part that was broken/damaged.

But that shouldn't really end the analysis. The next question is whether this combination of hardware and software is safe, given the way it responded to this sort of road, or whether something about the way it behaves indicates that there is a design defect (i.e., a problem with the design that causes the system to behave in an unacceptable manner).

That's one of the questions NTSB will be looking at. Is this sort of behavior sufficiently safe, and (i) should AP's behavior be modified, (ii) should additional software-enforced restrictions be placed on its use, (iii) should additional training or guidance be given to drivers about how it should be used, and, even, (iv) should it be withdrawn (i.e., turned off) until Tesla can make this convenience feature behave in an acceptable manner?

Frankly, I don't think Tesla's existing practice of (i) barely telling users what the appropriate use cases are and then (ii) constantly making barely documented OTA updates that change AP's behavior is safe. It encourages Tesla drivers to be constantly experimenting with turning autosteer on in different situations, because there is no other way to get a sense of when AP is likely to function decently enough to be worth using. Furthermore, since the behavior changes with each update (and some Tesla drivers seem to think that they can "train" AP/the neural net by running AP in challenging situations), drivers are encouraged to keep "testing out" AP, even in situations where it did not work well in the past.
 
There is no question that autopilot inefficiencies/incompetencies are constantly being corrected, whether there's a formal recommendation or not.

But the difference is: the honor of the machine/hardware/software is well protected. Yes, the failures of the pitot tubes (airspeed indicators) were quietly corrected in the guise of being fair and unbiased, and all participating members kept it secret and silent before the final findings.

But once the bureaucratic system publicly issues its findings, all hell breaks loose, as the findings drag the dead pilots through the mud, dishonoring and blaming them.

That's how the system works so far in aviation.

Now, since NTSB wants to be fair, quiet, and secret before its findings are issued, maybe it will be different with automobile Autopilot!

I've read this several times, and am having a lot of trouble understanding what you are saying, especially with respect to "the honor of machine/hardware/software is well protected". Could you clarify?
 
Look at it as a range of accident types. Anything that reduces the number of accident types or frequency improves safety. ...

I think just about everyone agrees that more-or-less passive safety systems, such as Automatic Emergency Braking, Lane Departure Warning, and the like, improve vehicle safety. These systems leap in when the driver has made a mistake.

Although Tesla has kind of made some safety claims with respect to Autosteer, I don't think anyone has really proved that Autosteer (as implemented) actually improves safety. I view it as more of a convenience feature. If most of the crashes it avoids are crashes that a human driver would also have avoided, while it causes serious crashes when it makes a mistake that the driver fails to correct, then I think it is hard to justify AS as safe. We don't really have the data on this. It's something NTSB should look at.
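
The accounting in that paragraph can be made explicit. All numbers below are invented; the point is only that gross saves and net saves are different questions, and the net is the one we don't have data on.

```python
def net_crashes_prevented(avoided_by_as, human_would_have_avoided,
                          caused_and_uncorrected):
    """Only crashes a human would NOT have avoided count as true saves;
    then subtract the new crashes AS causes that drivers fail to catch."""
    true_saves = avoided_by_as - human_would_have_avoided
    return true_saves - caused_and_uncorrected

# e.g. AS avoids 100 crashes, but an attentive human would have avoided
# 95 of them anyway, while AS mistakes that go uncorrected cause 10:
print(net_crashes_prevented(100, 95, 10))  # -5: a net safety loss
```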

Also, if a safety feature has elements that, as designed, are making the safety feature less safe than it would be if the elements were eliminated or modified, those elements should really be eliminated/modified ASAP, not on Elon Standard Time.
 
I think Autopilot will get there, but it's gonna get worse before it gets better, and I don't think there's really any way around that.

There is a way around it. Tesla could have delayed the release of Autosteer and completed more of its testing/development before turning the feature on for the public. No one forced Tesla to start selling AP2 to customers before it was actually ready. Even if Tesla lost its ability to sell AP1, it could have just not offered/sold Autosteer features on AP2 cars until the software was better developed.
 
There is a way around it. Tesla could have delayed the release of Autosteer and completed more of its testing/development before turning the feature on for the public. No one forced Tesla to start selling AP2 to customers before it was actually ready.

It doesn't work that way. AP isn't like a product that's developed and tested once. It's a machine that learns. There's no way to replicate the real-world experience that AP is gaining. That's why Uber and other autonomous programs are out on the streets even with the risks, as Uber discovered in AZ.
 
Tesla said the advantage of Autopilot is improved safety. In this case: Tesla, we have a problem!

People paid good money for Autopilot in 10/2016 and they got NOTHING at that time!

That's because it was a beta then, and it still is.

There is no surprise in that fact.

How can anyone pay for an unfinished product and be shocked that it's not finished?

Beta is not for everyone. It is for early adopters who are problem solvers and not for those who ignore instructions.

Autopilot safety has improved incrementally, from NOTHING to something today.

For those who are inattentive, I think they will still die whether they have Autopilot or not.

For those who are willing to follow instructions during the beta period, I think Autopilot makes driving much safer.

As for the whole pool of all kinds of drivers (good, average, reckless...), both those who use Autopilot and those who bought it but don't use it, Tesla's statistic says they are 3.7 times safer than those without Autopilot hardware.

Subjectively it may be debatable, but it is hard to beat the statistics!