Model X Crash on US-101 (Mountain View, CA)

Are you saying AP worked as designed in this case? AP is designed to keep you in your lane. In this situation, it failed. In fact, it seems that it actively took the car out of its lane and drove it into a barrier.

Now are there limitations to how the system works? Obviously. Sun blocking the cameras, poor lane markings, other cars, etc., will affect performance.

The design is stated every single time AP is engaged.

The design is to assist the active attentive driver who is ready to take over at any time.

If you can't get that you shouldn't use AP.

Specifically, the design is to detect lane markings and the vehicle ahead of you and to make steering and speed adjustments based on the lane markings and the moving vehicle in front of you.
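In toy form (my own sketch in Python, not anything from Tesla's actual stack; every name and gain here is invented), that design is basically this loop:

```python
# Toy sketch of the stated design, NOT Tesla's code: steer from the lane
# markings, adjust speed from the lead vehicle, and hand control back to
# the attentive driver the moment either input disappears.

def assist_tick(left_line_m, right_line_m, lead_gap_m, set_speed_mps):
    """One control tick. Offsets are lateral meters from the car's camera;
    any None input means the human must take over."""
    if left_line_m is None or right_line_m is None:
        return None  # markings lost: driver steers

    lane_center_m = (left_line_m + right_line_m) / 2.0
    steer = -0.5 * lane_center_m          # made-up proportional gain

    follow_gap_m = 40.0                   # made-up safe following gap
    speed = set_speed_mps
    if lead_gap_m is not None and lead_gap_m < follow_gap_m:
        speed = set_speed_mps * lead_gap_m / follow_gap_m  # back off

    return steer, speed
```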

Even when the lane markings and the vehicle ahead of you are stable, you still need to pay attention and be ready to take over to deal with potholes, road debris, swerving cars, construction zones, red lights, first responders affecting traffic, etc.

If the lane markings are messed up or nonexistent, or if the car in front of you is messed up or nonexistent, you are even more likely to need to act. Which is no big deal, because you are attentive -- and, again, ready to take over at any time.

Seriously, people who are struggling with this idea shouldn't use any < L4/5 driver assistance. Hopefully they won't have to wait too long for a good L4/5 system to accommodate their own unique human hardware design, functionality and limitations.
 
I'm willing to cut Tesla a little more slack after the 10.4 release, but suggesting that "be[ing] ready to step in" was sufficient attention in prior releases doesn't begin to describe the situation that most AP2 drivers experienced. I likened it to having a toddler riding in your lap holding the steering wheel. And by that I mean it wasn't normal alertness that was required. It was a much higher level of attention with the assumption that AP2 or a toddler could do something unexpected at any moment that could actually kill you.

Agree, I was rarely using AP2 Autosteer prior to the big update. Even on my perfectly straight freeway commute, it was dicey, so I only used it when I was feeling adventurous and there were no other cars on the road. Its problem was that it made very sudden moves when it felt it needed to move. That, combined with the fact that my car takes quite a bit of force to overcome the steering and manually take over, was not a recipe for relaxed driving at all. I was always fascinated by people who reported they used AP2 "all the time" and that it helped reduce driver fatigue, as that was not my experience with it.

It is much better now, so I am now using it daily for long stretches at a time.
 
...First, pilots are actually trained on autopilot...

Tesla Autopilot has never passed a driving test.

It has never earned a certificate showing that it can brake in time to avoid a crash.

Tesla Autopilot has no driver's license, and its incompetence is not the question here.

The issue is the human driver who has passed a driving test and thereby proven the capability to brake in time to avoid a crash.

The problem is the human driver who holds a driver's license and does not act like it.
 
Are you saying AP worked as designed in this case? AP is designed to keep you in your lane. In this situation, it failed. In fact, it seems that it actively took the car out of its lane and drove it into a barrier.

Now are there limitations to how the system works? Obviously. Sun blocking the cameras, poor lane markings, other cars, etc., will affect performance.

Watch the recreation video buried somewhere upthread. The right-side gore point lines are totally worn away at the beginning. With a close lead vehicle, AP can only see the left gore line and the actual right lane line. Given that, it runs its lane-centering algorithm and heads into the gore point area. Once the right gore line is reestablished, the car is so far offset that the right gore line may not register as valid.
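To put toy numbers on that geometry (my own illustration; these are not Tesla's algorithm or the real measurements):

```python
# Toy numbers for the failure mode above, not Tesla's algorithm or real
# measurements. Lateral offsets in meters from the true lane center;
# negative = left, toward the gore area.

left_gore_line  = -5.55   # left boundary of the gore area (visible)
right_gore_line = -1.85   # right boundary of the gore (paint worn away)
right_lane_line = +1.85   # the lane's actual right line (visible)

# Centering on the correct pair keeps the car in its lane:
true_center = (right_gore_line + right_lane_line) / 2       # 0.0 m
# With the worn line invisible, naive centering spans the whole gap:
perceived_center = (left_gore_line + right_lane_line) / 2   # -1.85 m

print(perceived_center)  # the steering target sits at the mouth of the gore
```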

So yes, it can function as designed, but with bad lane markings (like the construction barrier crash), the lanes are not what you want it to follow. In other words: lane following is not always the correct algorithm for the situation, but that doesn't mean the lane following algorithm did anything incorrectly.

With future developments, lanes will be defined by more than just the paint on the ground.
 
There should be a wide lane alert, as "wide lane" is not a valid state. It means "you did something wrong."

Yeah, a chirp to say, "The detected lane is currently more than 12 feet (the highway maximum) wide; please use the wheel to select left or right lane following" would be handy for distracted drivers. The best solution will be map-based defaults to handle left vs. right exit lanes, since by the time the lane is too wide, the car may already be straddling too much (i.e., starting with 8-foot lanes and then an exit/turn lane appears). It would also need to handle AP being used on non-highways, likely defaulting to the appropriate shoulder side (unpainted two-lane roads).
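As a sketch of just that alert logic (my own guess at it; the 12-foot threshold comes from the post above, and the function names are invented):

```python
MAX_LANE_WIDTH_M = 3.7  # 12 ft, the widest standard US highway lane

def chirp(msg):
    print(msg)  # stand-in for an audible alert

def pick_boundary(left_offset_m, right_offset_m, driver_pick=None):
    """Boundary offsets in meters from the car, left negative.
    Decides what to follow once the detected 'lane' looks too wide."""
    width_m = right_offset_m - left_offset_m
    if width_m <= MAX_LANE_WIDTH_M:
        return "center"          # plausible lane: keep normal centering
    if driver_pick in ("left", "right"):
        return driver_pick       # driver nudged the wheel to choose a side
    chirp("Detected lane exceeds 12 ft; nudge the wheel left or right")
    return "hold"                # hold the current line until told
```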

Doubt the optional lane markings will be added or would last long. They would make it much easier... [attachment: exit.PNG]
 
And if people keep wondering why the name "autopilot"? That's what it means: whenever hardware/software fails in an aviation autopilot, human operators have historically been blamed, right up to today.

That's not correct. If an NTSB investigation discovers that an aviation autopilot has behaved in an unexpected or incorrect manner, NTSB will most certainly find that it was a factor contributing to the accident and will make a recommendation for correction of the problem.

The fact that the autopilot was wonky won't relieve the pilot of legal responsibility to third parties hurt by a crash. And certainly pilot error can be a contributing factor to a mishap along with the autopilot's contribution. But if an autopilot doesn't behave as it should, the pilot can certainly have a successful legal action against the manufacturer. And, moreover, regulators can require that the autopilot be changed or that better instructions be given to pilots regarding how that autopilot works.
 
That's not correct. If an NTSB investigation discovers that an aviation autopilot has behaved in an unexpected or incorrect manner, NTSB will most certainly find that it was a factor contributing to the accident and will make a recommendation for correction of the problem.

The fact that the autopilot was wonky won't relieve the pilot of legal responsibility to third parties hurt by a crash. And certainly pilot error can be a contributing factor to a mishap along with the autopilot's contribution. But if an autopilot doesn't behave as it should, the pilot can certainly have a successful legal action against the manufacturer. And, moreover, regulators can require that the autopilot be changed or that better instructions be given to pilots regarding how that autopilot works.

Wasn't there a case where the indicator for altitude vs. heading data was a little light, which caused the pilot to set the wrong value?
 
...will make a recommendation for correction of the problem...

There is no question that the autopilot's inefficiency/incompetency is constantly improved, whether there's a recommendation or not.

But the difference is: the honor of the machine/hardware/software is well protected. Yes, the failure of the pitot tubes (airspeed indicators) was quietly corrected in the guise of being fair and non-biased, and all participating members kept it secret and silent before the final findings.

But once the bureaucratic system publicly issues the findings, all hell breaks loose as it drags the dead pilots through the mud, dishonoring and blaming them.

That's how the system has worked so far in aviation.

Now, since NTSB wants to be fair, quiet and secret before the findings are issued, maybe it will be different with the automobile Autopilot!
 
Yeah, you need a median to prevent the lane markings from being worn down by traffic-jam reroutes. Without a physical barrier, the markings would need to be tiles.

Or maybe they should use Botts' dots or reflectors for the sections where lane markings get worn off all the time. Or even give them rumble strips. Or all of the above.
 
The design is stated every single time AP is engaged.

The design is to assist the active attentive driver who is ready to take over at any time.

If you can't get that you shouldn't use AP.

Specifically, the design is to detect lane markings and the vehicle ahead of you and to make steering and speed adjustments based on the lane markings and the moving vehicle in front of you.

Even when the lane markings and the vehicle ahead of you are stable, you still need to pay attention and be ready to take over to deal with potholes, road debris, swerving cars, construction zones, red lights, first responders affecting traffic, etc.

If the lane markings are messed up or nonexistent, or if the car in front of you is messed up or nonexistent, you are even more likely to need to act. Which is no big deal, because you are attentive -- and, again, ready to take over at any time.

Seriously, people who are struggling with this idea shouldn't use any < L4/5 driver assistance. Hopefully they won't have to wait too long for a good L4/5 system to accommodate their own unique human hardware design, functionality and limitations.


You're arguing a point that I didn't make. Where did I say the driver didn't need to be attentive? You think the design of Autopilot is the warning it gives when you start it, lol? That is simply a warning about how to operate Autopilot. Of course the driver is ultimately responsible, but that doesn't mean the system didn't fail. How do I know that the system isn't designed to make up lane lines and drive into barriers? Because Teslas have driven by this same location thousands of times. Tesla even quoted how many cars have passed this same spot on Autopilot. Have some cars had difficulty? Obviously the dead driver did. But how many times did he drive by this location on his way to work? I would wager the number of times he drove by this location with Autopilot on dwarfs the 7 to 10 times that he had difficulty with this spot. Which is likely why he drove by this spot with Autopilot on and not paying attention.

This has nothing to do with whether the driver should be paying attention. I've already admitted the limitations of the system. But take this even further. You mention L4/L5, but guess what? There will still be accidents with Level 4 and Level 5 systems, likely at similar locations as this, with similar circumstances. Their sensors and cameras will have interference from environmental factors in less-than-ideal conditions (poor lane markings, etc.) and cause them to make a mistake. Just like a human driver.
 
I like the idea of more alerts. But it doesn't necessarily mean YOU did something wrong. On surface streets I see many wide lanes, which it handles better. Of course I see them easily. But on the freeway I think the alert makes more sense, like in the case of this crash.

Yeah, those are actually multi-use unmarked shoulders. Sometimes bike lanes. Sometimes parking. Sometimes right turn lanes.

Moving traffic should stay left with the expectation that kids will be playing football, frisbee or dodge ball in the unmarked shoulder.

The pavement is wide, but the lane is not.

Mongo's alert still makes sense.
 
Perhaps you should watch the videos that Elon was peddling at the time Tesla began offering FSD as a purchase option. No mention of it being pure vaporware. Just waiting on the government bureaucrats to bless it.

I don't think you're understanding the conversation. We are not talking about FSD or when it will be ready. We are talking about how anyone could get the impression that EAP is FSD. Specifically, we're talking about how anyone would come away with this misunderstanding after taking a test drive. I don't see how anyone could go to a Tesla store, take a test drive, and leave thinking their EAP car is going to stop at stop signs and intersections and never make a single mistake. Five minutes into my demo ride it was clear that you have to keep your hands on the wheel and that it does occasionally get confused. However, it was the most impressive attempt I'd seen to date. I've driven an Acura MDX with lane assist, and it has a hard time even staying within well-marked lines.
 
Looks like Autopilot just isn't for you.

You're arguing a point that I didn't make. Where did I say the driver didn't need to be attentive? You think the design of Autopilot is the warning it gives when you start it, lol? That is simply a warning about how to operate Autopilot. Of course the driver is ultimately responsible, but that doesn't mean the system didn't fail. How do I know that the system isn't designed to make up lane lines and drive into barriers? Because Teslas have driven by this same location thousands of times. Tesla even quoted how many cars have passed this same spot on Autopilot. Have some cars had difficulty? Obviously the dead driver did. But how many times did he drive by this location on his way to work? I would wager the number of times he drove by this location with Autopilot on dwarfs the 7 to 10 times that he had difficulty with this spot. Which is likely why he drove by this spot with Autopilot on and not paying attention.

This has nothing to do with whether the driver should be paying attention. I've already admitted the limitations of the system. But take this even further. You mention L4/L5, but guess what? There will still be accidents with Level 4 and Level 5 systems, likely at similar locations as this, with similar circumstances. Their sensors and cameras will have interference from environmental factors in less-than-ideal conditions (poor lane markings, etc.) and cause them to make a mistake. Just like a human driver.
 
Seriously, people who are struggling with this idea shouldn't use any < L4/5 driver assistance. Hopefully they won't have to wait too long for a good L4/5 system to accommodate their own unique human hardware design, functionality and limitations.
Exactly. Right now anybody can buy one and become a danger to themselves and others. NTSB/DMVs should find out how many Tesla owners and other designated drivers understand all the levels of autonomous systems and take away access from those who don't. The same process should be put in place for new purchases too.
 
I guess we will see what happens when someone in a Cadillac is driving hands-free and is killed. They advertise that it is the first vehicle to have hands-free driving.

It doesn't work the same. One is a convenience system, the other is a safety system.

The Tesla system decides where a road is in real time, so you can drive it anywhere. This is for convenience. You get to decide where and when to use it.
The Tesla system uses hand torque on the steering wheel to decide if the driver is awake. This is convenient if you are reading or performing some other task that requires you to keep your visual focus elsewhere.

The Cadillac system has the roads predefined. The Cadillac cannot end up in the gore zone with autosteer active, since the gore is not predefined as a road. It is less convenient. It cannot go wherever you want, whenever you want.
The Cadillac system uses your face direction to determine if you are alert to your surroundings. While this will not stop you from running into a gore point, you will be facing it if you hit it. This is inconvenient if you want to read while driving.
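Boiled down to the engagement checks (my framing, not either company's actual logic):

```python
# My framing of the contrast above, not either company's real code.

def tesla_style_can_engage(lanes_detected, recent_wheel_torque):
    """Engage wherever lanes are seen; attention proxy is hands on wheel."""
    return lanes_detected and recent_wheel_torque

def supercruise_style_can_engage(on_mapped_highway, lanes_detected,
                                 eyes_on_road):
    """Engage only on pre-mapped highways; attention proxy is gaze."""
    return on_mapped_highway and lanes_detected and eyes_on_road
```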

The big question remains, though: will the Cadillac slam on the brakes in that situation with autosteering OFF? Obviously the Tesla will not.
My guess is yes, based on our CT6 with no autosteer (Super Cruise was released in the 2018 calendar year). It does see threats at freeway speeds with ACC off or on.
I know at lower speeds it will stop completely before hitting a pedestrian or a solid object. I was inside the car during testing against pedestrian and automotive targets. It slams on the brakes hard. It stops before impact.

Somebody will certainly die in a Cadillac. But it would be a mistake to think it uses immature safety technologies. It is only immature compared to what it will be tomorrow. The V2V is enabled but not being used. The autonomy system is not available in retail cars. The geomapping is still expanding. The sensors will get more sensitive. The price will come down.

I don't believe anybody else is following the GM route. So it's a huge gamble. It could be a total failure like the EV1 or a wild success like the electric starter motor. But most likely somewhere in between, before the Chinese reverse engineer and clone it.
 