
Ideas To Help Prevent Auto Pilot Accidents...It's Time To Get Real

Tesla should do a video that goes as follows: when your child gets a ticket (with 3 people in the car in March) for driving 112 mph in a 50 mph zone and is forced to take aggressive driving classes... maybe you should take his car away? Or perhaps put the fear of god in him about driving with other kids? This is like the affluenza teen all over again: parents sell a company for $125 million and can't keep a car out of their kid's hands? What a waste.
 
Tesla should do more:
  • Document in the release notes the status of implementation and testing for AP. If they are aware of areas that may pose a risk, drivers should be made aware of that - better than having drivers guess at what has changed after each release. For example, many people appear to believe lane change detects vehicles to the side and will only engage when the lane is clear - is this really true?
  • By default, AutoSteer should be enabled only for use on limited access highways. Drivers should be required to manually enable use on other roads in the settings.
  • Modify the graphic displayed on the dashboard to add the word "Beta" above or on the AutoSteer graphic, as a constant reminder that the feature is still in beta testing.
  • Use of accessories like the "buddy" should be detected (Tesla should be able to detect constant pressure), and AP should be disabled in those cars until the accessory is removed (a rough detection sketch follows this post).
  • More could be done with visual and audible indicators to provide drivers more information on what AutoSteer is or isn't detecting.
  • The pop-up text reminder about keeping your hands on the steering wheel should be expanded to also remind the driver that AP is beta software and that the driver is responsible for monitoring operation of the software
Relying on every driver to read the car's user manual and the description & warnings on AP is unrealistic - many drivers likely use AP without ever opening the manual. While I'm not recommending Tesla pop up a warning about AP every time it is enabled, Tesla should do more to ensure that all drivers are aware they are using beta software that can't be trusted 100% and that should be treated more like a student driver than "full self driving".
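On the "buddy" item above, a rough sketch of the detection idea: a hanging weight applies a near-constant torque to the wheel, while a real hand produces small, varying corrections. The signal names and thresholds here are my own assumptions, not anything Tesla has documented:

Code:
from statistics import pstdev

# Assumed threshold: a hand wobbles, a fixed weight barely does.
TORQUE_STDDEV_FLOOR_NM = 0.01

def looks_like_defeat_device(torque_samples_nm: list[float]) -> bool:
    """True if steering torque over a sampling window (say 60 s)
    averages nonzero but is suspiciously constant."""
    if len(torque_samples_nm) < 2:
        return False
    mean = sum(torque_samples_nm) / len(torque_samples_nm)
    return abs(mean) > 0.05 and pstdev(torque_samples_nm) < TORQUE_STDDEV_FLOOR_NM

If a check like this fired, AP could warn first, then disable itself until the torque signal starts varying again.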
 
I agree 200% with your first item. We should not be kept guessing about what has changed, what has been fixed, or what still doesn't operate with 100% reliability. I also agree strongly with item 5: give us a visual indication of what the car is seeing or not seeing (nothing displayed means nothing seen). This is important feedback for learning what to trust and what not to trust about the sensors. Keeping us in the dark is asking for more accidents and more negative feedback about Tesla AP. Help us be safer and better drivers when using AP.
 
  • Document in the release notes the status of implementation and testing for AP. If they are aware of areas that may pose a risk, drivers should be made aware of that - better than having drivers guess at what has changed after each release. For example, many people appear to believe lane change detects vehicles to the side and will only engage when the lane is clear - is this really true?
I have never had Nicki try to pull into another lane unless it's completely clear. Which is different from safe: she does NOT see idiots doing 100 MPH a long way back, even though pulling in front of them is a problem.
  • By default, AutoSteer should be enabled only for use on limited access highways. Drivers should be required to manually enable use on other roads in the settings.
AP always requires manual engagement; you have to flip the lever. I think the current setup is best, as it allows more beta testing.
  • Modify the graphic displayed on the dashboard to add the word "Beta" above or on the AutoSteer graphic, as a constant reminder that the feature is still in beta testing.
No problem with this, but I don't think it will change how people use AP. People will get complacent with or without the BETA label.
  • Use of accessories like the "buddy" should be detected (Tesla should be able to detect constant pressure), and AP should be disabled in those cars until the accessory is removed.
Not a bad idea
  • More could be done with visual and audible indicators to provide drivers more information on what AutoSteer is or isn't detecting.
IMHO this may be too distracting; you are driving, not troubleshooting AP.
  • The pop-up text reminder about keeping your hands on the steering wheel should be expanded to also remind the driver that AP is beta software and that the driver is responsible for monitoring operation of the software
Yes, a picture of Musk with caption "Thanks for beta testing Auto Pilot"
 
  • More could be done with visual and audible indicators to provide drivers more information on what AutoSteer is or isn't detecting.
@ The Duke: IMHO this may be too distracting; you are driving, not troubleshooting AP.
I disagree. It would be helpful to know whether the sensors are seeing the cars around you or not - whether the car sees the motorcyclist, bicyclist, or pedestrian. I shouldn't have to guess when these sensors are engaged and working, and it should be obvious when they either haven't been made to work or are not working as they should.

Can the camera see the car in the other lane coming at 100 mph? Can it compute speed based on the rate of change of image size? If it can't, we should be able to tell at a glance. If it shows the image but no speed, we know it sees the car but hasn't been able to judge its approach speed. If it shows the image and a speed, we have the information to know whether it is safe to change lanes. If there's no image where there should be one, that's a sensor failure; if nothing is supposed to show, we need to do our own checking. The same goes for cars alongside, in front, and oblique. To assist a driver, the system must provide information to the driver.
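For what it's worth, the "speed from image size change" idea is computable with a plain pinhole-camera model if you assume a typical width for the target vehicle. A back-of-envelope sketch - the focal length and car width below are made-up illustrative values, not anything from Tesla:

Code:
FOCAL_PX = 1000.0   # focal length in pixels (assumed)
CAR_WIDTH_M = 1.8   # typical car width in metres (assumed)

def distance_m(bbox_width_px: float) -> float:
    """Pinhole model: Z = f * W / w."""
    return FOCAL_PX * CAR_WIDTH_M / bbox_width_px

def closing_speed_mps(w_px_t0: float, w_px_t1: float, dt_s: float) -> float:
    """Positive result means the target is getting closer."""
    return (distance_m(w_px_t0) - distance_m(w_px_t1)) / dt_s

# Example: a car behind grows from 40 px to 44 px wide in 0.5 s
speed = closing_speed_mps(40.0, 44.0, 0.5)
print(f"closing at ~{speed:.1f} m/s ({speed * 2.237:.0f} mph)")

The guessed vehicle width is the weak link, which is presumably one reason radar is still leaned on for closing speed.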
 
  • Can it compute speed based on the rate of change of image size? If it can't, we should be able to tell at a glance. If it shows the image but no speed, we know it sees the car but hasn't been able to judge its approach speed. If it shows the image and a speed, we have the information to know whether it is safe to change lanes.
Why? I have side and rear view mirrors, and they work. I'd rather glance at a mirror than take the time to analyze an image on the IC to see if it's safe to change lanes.

Once all cameras are in use, hopefully even that becomes moot. But I'm not holding out much hope for that at this point.

EDIT: Should rephrase: Once all cameras and FSD are enabled...
 
Have the Tesla browser link to a 5-minute training video that explains what Autopilot does, what it doesn't do, and examples of crashes where people misused or misunderstood it. Only once the video has been watched can Autopilot be enabled.

In a perfect world, I would like a heads-up display that shows a line where the car plans to drive, so I have advance notice that I need to take over instead of the wheel jerking where I don't want it to go.
 
Have the Tesla browser link to a 5-minute training video that explains what Autopilot does, what it doesn't do, and examples of crashes where people misused or misunderstood it. Only once the video has been watched can Autopilot be enabled.

In a perfect world, I would like a heads-up display that shows a line where the car plans to drive, so I have advance notice that I need to take over instead of the wheel jerking where I don't want it to go.
That sounds like a great idea, but it's probably overkill. You're simply trading watching the road 100% of the time for watching the HUD (and road) 100% of the time. Can you imagine driving 10 hours without taking your eyes off the HUD on the off chance AP is going to do something screwy? It's probably far easier and more cost-effective for Tesla to simply put their resources into improving AP.
 
I keep thinking: you have 8 friggin cameras, so how many more years before you start using more than one? When will we see the first use of actual image recognition processing? The camera on my iPhone X can recognize my face, but the cameras on my Tesla can't recognize a fire truck or another vehicle, much less road signs, which have all been assigned distinctive shapes. Maybe the programmers need driver's ed. You know: octagon = stop sign, vertical rectangle = information, upside-down triangle = yield, and so on.
 
I keep thinking: you have 8 friggin cameras, so how many more years before you start using more than one?
No point in even using one camera if the computer cannot see with it. More cameras will be integrated when the images can be meaningfully tied together.

When will we see the first use of actual image recognition processing? The camera on my iPhone X can recognize my face, but the cameras on my Tesla can't recognize a fire truck or another vehicle, much less road signs, which have all been assigned distinctive shapes. Maybe the programmers need driver's ed. You know: octagon = stop sign, vertical rectangle = information, upside-down triangle = yield, and so on.
Recognizing shapes is done; AP1 had it. False positives, however, are still around. People are much better at figuring out that a stop sign is for the side street, not the freeway, even when it can be seen clearly from both (a toy shape-classifier sketch follows at the end of this post).
Recognizing a fire truck is not the problem; recognizing the speed at which you are approaching it, and whether it actually blocks your path, is. Imagine a fire truck just off the side of the road on a curve. Humans see the hoses hooked up and know the truck will not move, but they slow down anyway because there are probably firefighters out of view. Programming the billions of possibilities like that is not possible at this time; it would take something close to a full human equivalent.
Should the car hit the brakes every time there might be a problem? The car would be very slow, if it moved at all. AP is improving, but we have a LONG way to go, and until then drivers need to recognize and handle the "edge cases". AP works well at the mundane tasks with no surprises; surprises need the driver.

We will be done with AP beta testing in maybe 3 billion miles, 6 billion definitely.
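On the shape-recognition point (the sketch promised above): here's roughly how a toy classifier could tell those sign shapes apart from a silhouette using off-the-shelf OpenCV. This illustrates the idea only; it is not AP1's or AP2's actual pipeline:

Code:
import cv2
import numpy as np

# Toy mapping from approximate polygon vertex count to sign shape.
# A diamond and a rectangle both have 4 corners; telling them apart
# would additionally need orientation.
SHAPE_BY_VERTICES = {3: "yield (triangle)", 4: "rectangle or diamond",
                     8: "stop (octagon)"}

def classify_sign_shape(binary_mask: np.ndarray) -> str:
    """Guess a sign's shape from a binary (0/255) silhouette mask."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "no contour found"
    contour = max(contours, key=cv2.contourArea)
    # Approximate with a coarser polygon; epsilon (2% of perimeter)
    # controls how aggressively near-collinear corners get merged.
    perimeter = cv2.arcLength(contour, closed=True)
    approx = cv2.approxPolyDP(contour, 0.02 * perimeter, closed=True)
    return SHAPE_BY_VERTICES.get(len(approx), f"{len(approx)}-gon (unknown)")

As the post above says, the hard part isn't this step; it's deciding which road the recognized sign actually applies to.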
 
I have worked in the computer field for over 50 years, so I understand the problems. It is called contextualizing: to properly interpret words or images requires understanding context and probabilities. I worked on AI problems back in the late 60s and early 70s in grad school.

In your example, the computer may not understand the context fully, but it should slow down nonetheless until it does. We as humans do this all the time. We aren't sure whether what we see ahead is a danger, so we let off the gas; we change lanes to see if we are clearing the danger, and we only proceed at speed once we are certain it is not a threat to ourselves or anyone else. Fortunately, this doesn't happen frequently enough to prevent us from making headway. It is not that difficult to build in the knowledge that a stopped fire truck is a danger whether on the side of the road or in the middle of it, and that where there is a fire truck there are probably firefighters. Same with police vehicles. Even with ordinary vehicles on the side of the road, it isn't a stretch to assume there are probably people near or in them; we as humans know to give them a wide berth, and this is not a difficult concept to anticipate.

You really need to watch the latest Mobileye conference presentations to see how they handle much more difficult situations, such as merging when two lanes are reduced to one and traffic is bumper to bumper. How does the car get the other driver to let it in? That is hard for humans, and many struggle with it. Few of us struggle with how to handle the fire truck.

By the way, it would be a rare stop sign that would be visible to human or computer on a limited-access highway (on and off ramps). They would be visible on some four-lane roads with cross traffic, but then the car would know that based on the context of its location and the type of road, and it could be programmed to know where there should be stop signs, improving the probability of making the right interpretation (guess) where a sign is no longer perpendicular to the street it serves. We as humans do basically the same thing. Some links to watch:


This one on safety is excellent!
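To make the "slow down until you understand the context" behaviour concrete, here's a minimal sketch. The confidence signal and thresholds are hypothetical, just to show the shape of the policy:

Code:
def target_speed_mps(cruise_set_mps: float, scene_confidence: float) -> float:
    """scene_confidence in [0, 1]: 1.0 means a fully understood scene."""
    if scene_confidence >= 0.9:       # confident: hold the set speed
        return cruise_set_mps
    if scene_confidence >= 0.5:       # unsure: ease off, like lifting the throttle
        return cruise_set_mps * scene_confidence
    return min(cruise_set_mps, 5.0)   # badly confused: crawl and reassess

# A half-understood scene at a 30 m/s set speed -> ease off to 15 m/s
print(target_speed_mps(30.0, 0.5))

This is exactly the human "let off the gas until certain" behaviour described above, with the hard part (producing an honest scene_confidence) left out.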
 
Autopilot is clearly not ready for prime time. It's a beta product with serious flaws. The best way to avoid an accident is by turning it OFF. The Autopilot-related accidents that have occurred would NOT have occurred had AP not been engaged. There is no data available to show that AP saves any lives at all or makes anything safer, but we have several examples of people losing lives because of something AP did. As it stands, AP is not saving lives - it's causing deaths that would not have happened otherwise.
 
There is no data available to show that AP saves any lives at all or makes anything safer,

There is an inherent flaw in this statement. The only data that would show AP saves lives is an accident that never occurred in the first place. The closest proxy is the data showing how many miles AP has driven without a documented accident. There is actually a lot of strong data there, but even it cannot fully show that AP "saved lives". What data would suffice for you to demonstrate that AP is capable of saving lives? (See the sketch at the end of this post for how such rate data might be compared.)

The Autopilot-related accidents that have occurred would NOT have occurred had AP not been engaged.

Here I have to use your own argument against you. What evidence or data do you have showing that these accidents wouldn't have occurred with AP disengaged? As @Joe F mentions, ordinary distracted driving could have yielded the same results in some of the more publicly discussed Tesla accidents, with or without AP.
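To illustrate what the miles-per-accident data can and cannot show: accident counts over miles driven behave like Poisson rates, and you can put exact confidence intervals on them. A sketch with made-up numbers (NOT real Tesla or NHTSA figures); the takeaway is how wide the AP interval stays while the event count is small:

Code:
from scipy.stats import chi2

def poisson_rate_ci(events: int, exposure_miles: float, conf: float = 0.95):
    """Exact (Garwood) confidence interval for a Poisson rate per mile."""
    alpha = 1.0 - conf
    lower = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return lower / exposure_miles, upper / exposure_miles

# Hypothetical counts, purely illustrative:
ap_lo, ap_hi = poisson_rate_ci(events=3, exposure_miles=3e8)
man_lo, man_hi = poisson_rate_ci(events=400, exposure_miles=3e10)
print(f"AP rate/mile:     [{ap_lo:.2e}, {ap_hi:.2e}]")
print(f"manual rate/mile: [{man_lo:.2e}, {man_hi:.2e}]")
# Overlapping intervals mean the data can't show AP is safer OR less safe.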
 
If you watched the video on safety above, you would understand that there is no statistical argument that can prove AP is safer, and probably none that can prove the converse either, unless the rate of accidents keeps increasing. Until AP can do two things very well, it cannot approach full Level 2, much less Level 3, autonomous driving:
  • Recognize the safe driving path through visual means: lane markings, edges of roads without lane markings, edges of roads with curbs, parking boundary lines, the path dictated by driving rules, and lane dividers.
  • Recognize fixed and moving objects: autos, pedestrians, motorcycles, bicycles, trucks, buses, trailers, stop signs (octagon), yield signs (inverted triangle), warning signs (diamond), information signs (vertical rectangle), and traffic lights.

I understand the desire by Mobileye and others to add high-resolution mapping to accurately locate the car in space, and maybe this is a crutch that autonomous systems will need to match human capabilities. But until it can localize itself relative to other fixed and moving objects as we humans do, and have done for years without any maps, it will not achieve Level 5 autonomous driving ability.

Moving around or stopping for objects is something we do reflexively, without thinking about it. For example, we see cars parked alongside the road and are subconsciously aware that, even though they are parked and we have the right of way, one of them might suddenly pull out into traffic. We watch for clues such as a person in the driver's seat, or a brake or backup light coming on. We are also aware that someone might open a door into our path, requiring us to maneuver around it if possible (no oncoming car in the other lane to hinder us?) or stop if not. We do this all by sight, with no detailed map and without knowledge of our exact position on such a map - just our vision and experience (knowledge).

It has been over 2 years, and as far as we can tell, AP has only semi-mastered reading the driving path's lane markings. It does not yet appear able to use logic to know we intend to stay on the same path (i.e. road or highway) and use the GPS maps for that purpose when we come to a crossroad or exit and the lane boundary line disappears. It becomes confused and often seeks a different path than the desired one, because it makes no assumption that we intend to continue on the same roadway, whether the roadway goes straight ahead or veers to the right or left. Until we add route planning into the equation, that would be the logical assumption.