Model X Crash on US-101 (Mountain View, CA)

This is just a dream and probably impossible. Even today, speech recognition is far below what a human can do, and that should be a much easier problem for neural nets to solve than driving.

And here we are talking about safety-relevant functions. Those should only be implemented if the input sensors are robust and fail-safe (typically redundant). Consider how demanding autoland is for aircraft, and that is just for a straight runway!
Then, beyond the sensors, the software implementation comes into play. There are so many different conditions, situations and unforeseeable events that software will never be able to reach the same outcomes as humans.
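(To make the redundancy point concrete: safety-critical systems such as avionics commonly vote across independent sensors so that one bad reading can be outvoted. A minimal sketch of a 2-out-of-3 voter; the tolerance value and the fail-safe behavior are invented for illustration, not taken from any real system.)

```python
def vote_2oo3(a: float, b: float, c: float, tolerance: float) -> float:
    """2-out-of-3 voter: return a value confirmed by at least two
    independent sensors, or fail safe if no two sensors agree."""
    for x, y in [(a, b), (a, c), (b, c)]:
        if abs(x - y) <= tolerance:
            return (x + y) / 2.0          # agreement found; average the agreeing pair
    raise RuntimeError("no two sensors agree -- fail safe, hand control back")

# Example: three range readings in metres, one of them faulty
print(vote_2oo3(42.1, 41.9, 7.3, tolerance=1.0))   # -> 42.0
```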

Where are you getting that from?

State-of-the-art speech recognition has either matched or exceeded human capability.

Historic Achievement: Microsoft researchers reach human parity in conversational speech recognition - The AI Blog

Now, sure, it's likely really debatable, especially with the lack of diversity in training datasets. But when it's something well represented in the dataset, the AI does a pretty darn good job.
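(For anyone wondering what "human parity" means in that announcement: the comparison is done on word error rate (WER), which is just word-level edit distance divided by the length of the reference transcript; as I understand it, Microsoft reported a WER on a conversational benchmark that matched professional transcribers. A minimal sketch of the metric itself:)

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with the usual Levenshtein dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("switch the radio to the news", "switch radio to a news"))  # ~0.33
```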

With vision it's impossible to deduce anything about where things are from this accident. Tesla is so far behind that they don't even recognize speed-limit signs, and they don't use the side cameras to show vehicles next to you either.
 
If you do fall asleep, how long would it take for AP to slow down and stop the car?
(I think the car would stay in the same lane, and maybe turn the hazard lights on?)

Tesla driver passes out allegedly drunk in his Model S, tells the police the car was on Autopilot
Allegedly passed out drunk on Bay Bridge, Tesla driver claims car was on 'autopilot'

To my knowledge, there has been no confirmation that the driver had his Model S on Autopilot before it stopped, but presumably this is what would happen if the driver passed out, ignored all the Autopilot engagement warnings and the vehicle was able to maintain its lane until it slowed down and stopped.

How soon the car would slow down and stop varies according to many factors (as others have pointed out), including the type of road you're on (highway vs. city street), your speed and the last time you applied torque to the steering wheel.
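(As a rough, purely illustrative calculation, and not Tesla's documented behavior: if the car holds its lane and bleeds off highway speed at a gentle, assumed deceleration once the warnings escalate, the stop still takes on the order of a quarter of a kilometre.)

```python
# Illustrative only: time and distance to stop from highway speed
# at an assumed gentle deceleration (NOT a documented Autopilot value).
v0_mph = 65
v0 = v0_mph * 0.44704          # convert to m/s (~29 m/s)
decel = 2.0                    # m/s^2, roughly light braking

t_stop = v0 / decel            # seconds to reach a standstill
d_stop = v0**2 / (2 * decel)   # metres covered while stopping

print(f"{t_stop:.0f} s, {d_stop:.0f} m")   # ~15 s, ~211 m
```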
 
I'm thinking this could literally come down to which software version the car was on. I have seen some dramatic improvements in AP with 2018.10.4 over previous versions, and it very much appears that the new mapping is being applied: my car will now clearly follow the road for a while even when the markings are incomplete or incorrect, the ability to manage crests being a case in point.
 
Where are you getting that from?

State-of-the-art speech recognition has either matched or exceeded human capability.

Historic Achievement: Microsoft researchers reach human parity in conversational speech recognition - The AI Blog

Now, sure, it's likely really debatable, especially with the lack of diversity in training datasets. But when it's something well represented in the dataset, the AI does a pretty darn good job.
Thanks for the link, I didn't know that document. However, they reached this under laboratory conditions, so it says little about real-world circumstances. And again, this is an easy job compared to vision. Understanding is much more difficult still, as they also wrote. So I think my thesis that autonomous driving will not be solved by AI is still valid.
 
Another thought:
The crash attenuator is made from ridged sections for strength (I'm not sure whether the painted cover is metal or plastic), and it had also been damaged previously. If the cover is plastic, could the combination of the ridges have confused the radar reflection?
Or could the barrier have been leaning at a small angle from vertical after the previous accident, as the photos appear to show? If so, the flat painted face (if metal) might have deflected the radar reflection upward, away from the approaching Tesla, so the radar basically never saw a return signal.
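A quick way to see why a small tilt would matter: if the painted face acts roughly like a flat mirror for the radar, a tilt of θ from vertical steers the specular return by 2θ. With invented numbers for tilt and range, purely as a sanity check (real returns also depend on material, the ridges, and antenna beamwidth):

```python
import math

def return_offset(tilt_deg: float, range_m: float) -> float:
    """Vertical offset of the specular return at the radar's range,
    treating the barrier face as a flat mirror tilted from vertical.
    A mirror tilted by theta deflects the reflected ray by 2*theta."""
    return range_m * math.tan(math.radians(2 * tilt_deg))

for tilt in (0, 2, 5, 10):
    print(f"tilt {tilt:2d} deg -> return misses a radar 100 m away "
          f"by ~{return_offset(tilt, 100):.0f} m vertically")
```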
 
The other guy was in a lighter car, which would have affected his survival, but by the same token I'd guess that the Tesla Model X has a sturdier cabin and a better safety rating than the Prius, given all the head-on collisions and accidents I've seen a Model X survive. I might be wrong, but I'd be surprised. I'm glad the guy survived his accident, but I also have to wonder whether driving under the influence factored into it.

Glad Dan is on the reporting. I hope Dan investigates how many other cars have crashed here and what the driver's outcome was.

Do we know how the Prius crashed? In other words, how was the kinetic energy of the vehicle (and driver) dissipated during the crash?

Coming to an abrupt stop in a head-on collision (especially when hitting an immobile object like the concrete barrier) puts enormous G-forces on the driver's body. And the heavier the car, the more kinetic energy it will have if it stops abruptly.
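Rough numbers, purely illustrative (the actual speeds and crush distances in either crash are unknown to me): kinetic energy scales with mass and with the square of speed, and the average deceleration the occupant experiences depends on how much distance the vehicle structure or an attenuator provides to stop in.

```python
# Illustrative crash arithmetic -- not data from this accident.
def kinetic_energy_j(mass_kg: float, speed_mps: float) -> float:
    return 0.5 * mass_kg * speed_mps**2

def avg_decel_g(speed_mps: float, stopping_distance_m: float) -> float:
    """Average deceleration (in g) for a stop over the given distance."""
    return speed_mps**2 / (2 * stopping_distance_m) / 9.81

v = 65 * 0.44704                                                        # 65 mph in m/s
print(f"{kinetic_energy_j(2500, v)/1e3:.0f} kJ for a ~2500 kg SUV")     # ~1056 kJ
print(f"{kinetic_energy_j(1400, v)/1e3:.0f} kJ for a ~1400 kg compact") # ~591 kJ
print(f"{avg_decel_g(v, 0.6):.0f} g stopping in 0.6 m of crush")        # ~72 g
print(f"{avg_decel_g(v, 3.0):.0f} g stopping over a 3 m attenuator")    # ~14 g
```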
Good observation, @mongo. The original CHP tweet stating "driving at freeway speeds on the gore point dividing the SR-85 carpool flyover and the carpool lane on US-101 southbound collided with the attenuator barrier and caught fire" seems to indicate that they initially believed the vehicle was driving in the gore point before colliding with the barrier. Whether it was AP or driver error that caused the vehicle to be in that non-lane, we don't know for sure. But based on this information, the severity of the front-end damage, and the trajectory of the wreckage, I strongly believe the car was not swerving or making avoidance maneuvers before impact.

The media tends to embellish things. I would not be surprised at all if the "driver lost control" statement was added by a reporter or news editor. Isn't that what they always say when a car collides with a stationary object?

I mentioned in a previous post that Walter (the driver) was a personal friend of mine. Now that his prior complaints about AP are out in the media, I feel comfortable sharing that I heard the same thing from friends. Apparently he told his wife and at least one other close friend, as recently as the week before the accident, that AP was drifting left at this exact junction on previous commutes at around the same time of day. I learned of this allegation a day before the I-TEAM news story. Frightening.

If he did indeed experience this issue with AP, it's definitely shocking to me that he would continue to rely on it here. I can only assume he was distracted or otherwise not paying close enough attention. This whole thing is very tragic.

First, my condolences to you and Walter's family. I've spent a lot of time thinking and reading about this accident, and I simply can't imagine what his friends and family are going through.

Second, I have a theory about why someone would intentionally continue to use Autopilot if they thought it had a serious safety issue in a particular section of road.

NOTE: I am in no way suggesting that this is what happened during this crash, but I didn't see this presented as a reason why someone would intentionally do something that they know could recreate a serious safety issue, so I wanted to share this in case it's in some way helpful.

First, a couple stories about two times I've repeatedly tried to reproduce (less serious) Autopilot behavior.

#1. When we first got my wife's Model X (AP1) in late March 2016, we nearly got in an accident in the first month assuming that the vehicle would notice a stopped car (at a stoplight) ahead of us in city traffic, specifically on a 45 MPH "expressway" with the car maybe 10-20 car lengths in front of us around a bend in the road. Boy, were we wrong. Fortunately, I hit the brakes hard enough to prevent an accident. From then on, after most software updates, I would re-test this scenario (sometimes with my wife in the passenger seat, usually to her visible discomfort) to see if this scenario had improved. After getting my own Model X (AP2) in February 2017, I continued to test this scenario. I did not recall reporting this specific scenario to my Tesla Service Center (except maybe in passing), but I never asked them to investigate it. I assumed it was a limitation of the way Autopilot worked at that point in the beta. (BTW, as of 10.4, Autopilot on MX AP2 actually does recognize a stopped vehicle at expressway speeds, although it typically recognizes it "later" than I'm comfortable with and the MX brakes moderately hard when stopping, so I don't rely on it to "always" stop in time. I'll either disengage Autopilot or simply start reducing the TACC speed to "hint" that it should start slowing down sooner.)

#2. About 5-6 months ago, I noticed that Autopilot on my MX AP2 would recognize shoulders as lanes (both left and right shoulders in certain sections of specific highways), and that initiating a lane change would actually start the car moving into that "shoulder lane". This concerned me greatly because these "shoulder lanes" frequently narrow, especially when leading up to a bridge abutment. I was so concerned that I reported this issue to Tesla's NA Service email address (and my home service center) a few times, including taking a video with my iPhone showing the lane being detected during the route, the location and the time of day when this happened. (I never tried to change lanes while recording video.) Again, after most software updates, I would re-test the "shoulder lane" detection issue in the places I knew where I previously could reproduce it to see if it had been fixed. Fortunately, testing simply meant driving by that section of road to see if a "shoulder lane" was detected by glancing at the driver's console, nothing more. (Note: I didn't realize others had noticed this specific issue until I read this thread; I just don't have time to keep up with so many threads on the forums.)

--

So hypothetically speaking, let's suppose I found a really serious bug that I thought was a serious safety concern. Further, let's say I told my Tesla Service Center about it, and they tried to reproduce it but couldn't, or they investigated it but found no actions to take.

What would I do?

If I thought it was a serious enough safety concern, and I had seen it multiple times, one thing I would consider doing (prior to this accident) is trying to reproduce the same conditions in the interest of improving the safety for everyone driving a Tesla vehicle, and to prove to the Service Center that there really was an issue that needed to be addressed.

Why?

As a software engineer (I am one), we're either trained or we learn (rather quickly) that the fastest way to fix a software issue is to figure out how to reproduce it, and to capture information about it while trying to reproduce it in case we did reproduce it. Once a software engineer knows how to reproduce a software issue, they can fix it. And if one has evidence of it happening (such as a video), it's much harder to refute than a verbal description of the issue.
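(For non-programmers reading along, the workflow being described looks roughly like this; a generic sketch with made-up file and field names, and nothing to do with Tesla's actual tooling.)

```python
import json
import time

def attempt_repro(run_once, context: dict, log_path: str = "repro_log.jsonl") -> bool:
    """Run one reproduction attempt and append what happened, plus the
    surrounding context (location, time of day, software version, ...),
    to a log file. Evidence captured this way is much harder to dismiss
    than a verbal description of the issue."""
    record = {"timestamp": time.time(), **context}
    try:
        record["result"] = run_once()
        record["reproduced"] = False
    except Exception as exc:            # the bug showed up on this attempt
        record["error"] = repr(exc)
        record["reproduced"] = True
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["reproduced"]
```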

--

Again, I am in no way suggesting that this is what happened during this crash, but I didn't see this presented as a reason why someone would intentionally do something that they know could recreate a serious safety issue, so I wanted to share this in case it's in some way helpful.

Let's wait for the results of the investigation before jumping to conclusions about what actually happened. (I suspect that it might be over one year before we have preliminary results from the NTSB, though.)
 
I find it absolutely crazy that the car can't detect a large metal object in the middle of the lane and plows right into it at full speed. This could have been a stopped car, a piece of large debris, etc.

In my opinion, at this point in development of AP, it should be an absolute priority that AP detect stationary objects and provide a suitable mitigating control. I don’t really see the point to AP, unless this problem is reliably solved. Lane tracking, steering and TACC are useful features, but if Tesla is truly concerned about improving vehicle safety then they need their engineers to solve the problem of driving into stationary objects without mitigation, and without false positives and phantom braking.
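For what it's worth, the trade-off being described is a known one: automotive radar pipelines commonly reject returns whose ground speed is near zero, because stationary clutter (overhead signs, bridges, roadside objects) would otherwise cause constant phantom braking. A crude sketch of that kind of filter, with all names and thresholds invented, just to show why a stopped object straight ahead can get dropped along with the clutter:

```python
from dataclasses import dataclass

@dataclass
class RadarReturn:
    label: str
    range_m: float
    closing_speed_mps: float    # Doppler-measured speed toward us

def moving_targets_only(returns, ego_speed_mps, min_ground_speed=2.0):
    """Keep only targets actually moving relative to the road.
    Classic clutter rejection: anything with ~zero ground speed is dropped,
    which unfortunately includes a stopped obstacle directly ahead."""
    kept = []
    for r in returns:
        ground_speed = ego_speed_mps - r.closing_speed_mps
        if abs(ground_speed) > min_ground_speed:
            kept.append(r)
    return kept

ego = 29.0  # m/s, roughly 65 mph
returns = [
    RadarReturn("slower car ahead",  80.0,  5.0),   # ground speed 24 m/s -> kept
    RadarReturn("stopped obstacle",  60.0, 29.0),   # ground speed  0 m/s -> dropped
    RadarReturn("overhead sign",     40.0, 29.0),   # ground speed  0 m/s -> dropped
]
print([r.label for r in moving_targets_only(returns, ego)])   # ['slower car ahead']
```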
 
I don’t really see the point to AP, unless this problem is reliably solved.

I don't care whether AP spots the problem or I do; with both of us on the lookout, I am much better off than on my own.

My hands are always on the wheel. Moving my hands from, say, my lap to the wheel would take an age in an emergency situation.

On long journeys (on AP) I arrive more refreshed (as in "hugely so") than back in the days of ICE, so I am now more alert on those journeys too.

For me it's Q.E.D.
 
This has been exactly my speculation. I personally have been fully convinced this was an AP accident (which says NOTHING about blame; the data just clearly pointed that way). I was also watching with curiosity as my car started to adapt on my commute to the wide-lane feature; it was sometimes "trying" a bit too hard, assuming the lane was wider when it should not. In particular, driving the other direction on 101 where the two carpool lanes merge into one, it was too quick to try to "get in the middle".

Also, I definitely noticed the faded right-side gore line; that is dangerous for AP, as we all know from construction zones.

Very sad... I have definitely spent time distracted on AP, even for 6 seconds. No more.

I have experience now with the new "wide-lane" feature. I just spent over 6 hours (with one Supercharger break) driving my AP2 Model X with 10.4 software installed on a mix of limited access highways, divided highways, and two-lane roads.

My intent here is not to speculate whether the behavior I'm about to describe could or could not be a factor in this accident (we should wait for the NTSB investigation to conclude and the report issued), but I'm posting this because (a) this is the only thread I've been active in on TMC recently, and (b) folks have brought up the "wide-lane" feature that's apparently new in version 10.4 for AP2 vehicles.

In summary, if you have an AP2 vehicle with Autopilot engaged and Autopilot is detecting wider-than-normal lane markings, be extra alert and be ready to take control. It goes without saying that you should always be in this state when using Autopilot, but pay extra attention to wide-lane scenarios until you know how the vehicle will react.

  1. About midway through the trip, I was driving on a two-lane highway (one lane in either direction) that split into a four-lane divided highway (two lanes in either direction). Initially Autopilot thought I was in a wide-lane scenario and so centered the vehicle between the two lanes with no center lane marking. Suddenly center lane markings appeared between the two lanes, and Autopilot basically picked a lane (pretty sure it was the right lane; I'm writing this hours after the trip) and started immediately correcting to re-center itself in that lane (more aggressively than a highway lane change using a turn signal). At this point, I was unsure what it would do next so I disengaged Autopilot (steered out of it) until I had moved to the right lane on my own. I suspect I was traveling somewhere around 30-40 MPH when this happened, but I'm not completely sure. (I should have noted the location and time to ask the Tesla Service Center to pull logs.) What concerned me the most is that I had no idea why Autopilot picked the lane it did, or how to predict what it would do in similar scenarios in the future.

  2. In a later part of the trip, I was driving on a two-lane highway (one lane in each direction), with a very wide shoulder (for holding plowed snow in winter) and little or no visible right lane paint (possibly due to snow plows scraping it off over time when moving snow off the road). The center paint was mostly visible and Autopilot was able to track it. However, whenever Autopilot lost the right lane paint, it suddenly thought the lane was extra wide and would recenter itself to the point where the right tires were on (or to the right of) the right lane paint. As I had never driven this road before, I was very uncomfortable letting Autopilot continue in this state (not knowing what it would do if it "found" the right lane paint again or when the lane would narrow or if there was much debris in the lane while driving at night), so I always disengaged Autopilot (steered out of it), then moved back into the lane manually, and waited for Autopilot to be available again. I repeated this at least three or four times in this section of highway before I gave up and drove with TACC but not Autopilot.

I'm not entirely sure I prefer the new wide-lane behavior. The AP1 behavior of sticking to a reasonable distance from the "known" or "primary" lane marker seems to make more sense (and mimics what a human would do in most cases). Having said that, I know of at least one residential street (with right lane paint that creates an extremely strange lane geometry) where AP1 will fail, but AP2 now works with 10.4 apparently due to the wide-lane detection. I guess I'd prefer AP2 to "know" that it has extra lane space (for emergency maneuvers), but continue to stay a reasonable distance from the "primary" lane paint instead of constantly re-centering for lanes that are perceived to be very wide.
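To make the comparison at the end concrete, here is a toy model of the two centering strategies, with every number invented: re-centering on the midpoint of whatever lane width is perceived versus holding a fixed offset from the one marker you trust.

```python
def target_center(left_m, right_m, trusted="left", offset_m=1.8):
    """Lateral target position given detected lane boundaries
    (metres from the car's current centerline, left negative).

    recenter    : aim for the midpoint of whatever lane width is perceived
    hold_offset : stay a fixed distance from the trusted ("primary") marker
    """
    recenter = (left_m + right_m) / 2.0
    hold_offset = left_m + offset_m if trusted == "left" else right_m - offset_m
    return recenter, hold_offset

# Normal 3.6 m lane: both strategies give the same target (0.0 m).
print(target_center(-1.8, 1.8))

# Right paint lost; perceived boundary jumps out to a 6 m "lane":
# re-centering lurches the car ~1.2 m to the right, holding the offset does not.
print(target_center(-1.8, 4.2))
```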
 
#2. About 5-6 months ago, I noticed that Autopilot on my MX AP2 would recognize shoulders as lanes (both left and right shoulders in certain sections of specific highways), and that initiating a lane change would actually start the car moving into that "shoulder lane".

If I thought it was a serious enough safety concern, and I had seen it multiple times, one thing I would consider doing (prior to this accident) is trying to reproduce the same conditions in the interest of improving the safety for everyone driving a Tesla vehicle, and to prove to the Service Center that there really was an issue that needed to be addressed.

Why do people think they need to make AP do things it wasn't intended to do? Why would you think it's a good idea to try to make the system change lanes into a shoulder? It's this cavalier attitude that causes people not to pay attention and to run directly into fixed objects that could have been avoided by simply paying attention and using the system as designed. I can't tell you how many threads I have read of people complaining that AP won't handle some crazy windy road without crossing into the other lane.

It's really simple, people. AP is not an autonomous driving system. It's cruise control that can stay within a lane. You could paint lanes in a parking lot and it would stay between them. It's not self-aware. It's not aware of its surroundings. Outside of some basic map data that tells AP the speed limit and whether lane change can be used, it simply stays between the lanes and follows other cars. I know it's sad, but that's all it does. Take your eyes off the road at your own peril. Use it on roads by your house with kids playing and risk a worse outcome than this accident.

It's clear this guy enabled Autopilot in a place he knew it didn't work well and took his eyes off the road and his hands off the wheel. He was rolling the dice with his life and paid the ultimate price. How does someone complain dozens of times to friends and family and even Tesla, and then just decide to take his eyes off the road? I use AP every day, and I know where I can be more relaxed and where I need to pay more attention than I would if I were controlling the car. I too have been distracted for 5 seconds, but never in a place I knew not to trust. Never while merging from freeway to freeway, which is known not to be supported, even though we have all seen videos of idiots testing this out on their own. Please stop trying to be Tesla's regression testing team. You are going to hurt yourselves. You are not smarter than Tesla's engineers, even if you read all about lidar on a forum somewhere.

AP is a fancy cruise control. Nothing more. You wouldn't turn cruise control on and start reading a book. This system is only marginally better than that.

The problem Tesla has is that AP is too good. It lulls even very smart people into a false sense of security. Even I have been lulled at times. It's accidents like this that make me take note of how I'm using AP. I too have tested AP, but never with something it was clearly not intended to do. For me it was turnouts and hill crests, both of which are much better on 10.4 and part of my daily commute. I feel like I'm done with the testing business; my intent was never to report bugs to Tesla, but to know how my car would react before it darted into other places. My car would dive into the same turnouts every day. So did I pick up my cell phone to text a friend as I approached those spots? No. I paid very close attention and was ready to take over. Most times I would just preemptively take over because other cars were around and I didn't want them to freak out as the car darted in and out of the lane.

If you know a Tesla owner with AP, please explain to them that they don't work for Tesla and that they are not responsible for testing AP as a proxy for self-driving. Tesla does not need their help; what they need to do is pay attention to the road!
 
The problem Tesla has is that AP is too good. It lulls even very smart people into a false sense of security. Even I have been lulled at times.

This is definitely true. I bet a lot of us divert our attention while using Autopilot in situations we consider "safe". I think one issue that encourages this is the somewhat contradictory way in which Autopilot is designed to work. The system tells us to keep our hands on the wheel, but if you make too much of a movement, it disengages. This probably encourages you to keep your hands off. It is actually one of my small peeves about the system. Our other car is a Volvo XC90 with Pilot Assist. Pilot Assist is nowhere close to Autopilot in terms of its efficacy. However, one feature I do like is that I can take over the steering for brief periods, during which Pilot Assist automatically deactivates. As soon as the lanes are found again and I'm no longer applying pressure to the steering wheel, Pilot Assist automatically re-engages (I can disengage by pressing a button or braking). It is a small thing, but in a way it does encourage me to keep my hands on the wheel without the overly annoying alerts every 15 seconds (in the Volvo). In the Tesla I almost feel like I'll screw up Autopilot if I hold the steering wheel too tight. Just one thing to consider.
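The "contradictory" feel described above falls out of using a single torque signal for two jobs: too little torque reads as hands-off (nag), too much reads as the driver taking over (disengage). A toy sketch of that dead band, with thresholds that are completely made up and not Tesla's or Volvo's:

```python
def classify_steering_torque(torque_nm: float) -> str:
    """Illustrative only -- the thresholds are invented.
    A torque-based hands-on check has to live inside a narrow band:
    too little torque and the system nags, too much and it disengages."""
    HANDS_ON_MIN = 0.3    # Nm: below this the system can't tell you're holding on
    OVERRIDE_MAX = 2.5    # Nm: above this it assumes you want manual control
    if abs(torque_nm) < HANDS_ON_MIN:
        return "nag: 'hold steering wheel'"
    if abs(torque_nm) > OVERRIDE_MAX:
        return "disengage: driver override"
    return "ok: hands detected, assist stays engaged"

for t in (0.1, 0.8, 3.0):
    print(t, "->", classify_steering_torque(t))
```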
 
Whilst it might feel good to you, and I don't dispute it, I can't imagine any argument that says AP (or whatever such a system is called) automatically re-engaging is a good idea.

It will definitely take some getting used to coming from the current system, but it's not too bad. Having had the Volvo first, I actually appreciate that part of the system and initially had to remind myself to re-engage Autopilot in the Tesla. The one area it really helps is on lane changes (which may have played a role in this accident): I can actively change lanes without relying on the Tesla system, which is still a bit nerve-racking at times (though much better with the recent update).
 
Why do people think they need to make AP do things it wasn't intended to do? Why would you think it's a good idea to try to make the system change lanes into a shoulder? It's this cavalier attitude that causes people not to pay attention and to run directly into fixed objects that could have been avoided by simply paying attention and using the system as designed.

If you know a Tesla owner with AP, please explain to them that they don't work for Tesla and that they are not responsible for testing AP as a proxy for self-driving. Tesla does not need their help; what they need to do is pay attention to the road!

Thumbs up. Perfectly explained. I wish the "Full Self Driving Capable" sales pitch, and the statements that the current generation of cars will Uber for you, or that in 2019 the system will be so safe you could potentially sleep while driving, would get dialed back. Tesla would still have 500k Model 3 reservations even if the "FSD" pitch were stated as a driver-assist system. Overpromise and underdeliver on how many cars they deliver... OK, investors get hurt. Overpromise and underdeliver on something going 70 mph down the highway with a family inside... begging for disaster. "FSD" is many years away.

And by the way, I mentioned how bothered I am about Tesla revealing to the press what a driver was doing in his own car... but what is also crappy is how they keep throwing out statistics on other drivers. It's Trump-ish what-about-ism. Yeah, yeah, our car drove into a barrier... but look at those unsafe drivers over there! Look at them! Just own it.
 
Glad the Wright Brothers didn't listen to you, or we would still be using coal-powered steam engines to traverse the United States.

The Wright Brothers went after their goal very patiently and methodically, with a rich trove of both practical and scientific knowledge to build upon. I much enjoyed reading the book by David McCullough about their signal achievement.
 
I don't understand AP. From all the posts I've read, its frontal detection seems to be dismal. Calling it AP is a scam. It should be called BP (Beta Pilot) or GP (Guinea Pigs). Elon Musk is effectively crowdsourcing the testing of AP. This product should never have been sold.

Why would I want a product that is effectively the equivalent of teaching a teen to drive? Keep hands on the wheel, watch the road, correct errors. That's what a parent does when teaching a teen to drive!

I will never buy the AP product for my M3. And yes, I am an engineer, but the kind that addresses ALL scenarios.
 
Eye tracking without steering-wheel torque sensing could invite a driver to keep their hands away from the steering wheel, which is not ideal because it takes time to move the hands back to the wheel in an emergency.
I agree that there's an added reaction delay if your hands are off the wheel, but it's ultimately safer to be looking out the windshield with your hands off the wheel than it is to be looking somewhere else with them on it, especially since the wheel torque sensor is easily defeated. All the Autopilot fatalities I've seen, and perhaps even all the Autopilot accidents (the Chinese instances, the semi, the fire truck, etc.), could have been avoided if the driver had been looking forward. It doesn't matter what your reaction time is if you don't know you need to react.
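Some quick arithmetic backs up that last point: at 70 mph, the extra fraction of a second needed to get hands back on the wheel is small compared with the seconds lost by not looking at all. The reaction-time values below are illustrative, not measured:

```python
# Distance covered at 70 mph for different "time to start reacting" values.
speed = 70 * 0.44704    # m/s (~31 m/s)

scenarios = {
    "eyes forward, hands on wheel":   1.0,   # s, assumed perception-reaction time
    "eyes forward, hands in lap":     1.5,   # plus time to reach the wheel
    "looking at phone for 5 seconds": 5.0,   # you simply don't see the hazard
}
for name, t in scenarios.items():
    print(f"{name:32s} -> {speed * t:5.0f} m travelled before any response")
```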
 