
Autopilot, garbage on the road

In my case it doesn't actually disengage when it sounds the alarms to take over immediately. It tells you to take over, and it slows down, but it still keeps steering as best it can until you take over. Which is what it should do.

My experience exactly.
I dunno - our AP2 X nearly came to a halt, and I just used the accelerator to continue on. It looked like a large metallized Cheetos-type bag getting blown in front of us. Searching YouTube I found something similar. So YMMV. We occasionally get the same phenomenon from cars traveling in the adjacent right-hand lane next to us as we come up on them.

We found that same phenomenon was more likely to happen when it was a truck in the right-hand lane. Our X would slow down to the speed of the truck until we just accelerated through. I wonder if this is the result of too much spread in the radar path. I believe the current radar reaches farther than the previous radar, which may make it more difficult to coordinate with lane markings at greater distances. Also noted that the truck icon appeared to be on the lane line in the IC, but visually the truck would be a foot, give or take, to the right of the actual lane line.
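Just to put numbers on the "spread" guess - a back-of-the-envelope Python sketch with made-up figures, nothing from Tesla - a small azimuth error in the radar grows into a real lateral offset at range, which would explain an icon sitting on the lane line while the truck is actually a foot or more to the right:

import math

def lateral_offset(range_m, azimuth_err_deg):
    """Lateral position error (m) produced by an angular error at a given range."""
    return range_m * math.sin(math.radians(azimuth_err_deg))

# A 1-degree azimuth error at typical highway ranges:
for r in (30, 60, 120):
    print(f"{r:>4} m range -> {lateral_offset(r, 1.0):.2f} m off to the side")
# 30 m -> 0.52 m, 60 m -> 1.05 m, 120 m -> 2.09 m

So the farther out the radar is looking, the sloppier the lane assignment gets for the same angular accuracy.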
 
My wife tests software for a living... she is skeptical that full autonomous driving will ever be safe in our lifetime. Driver assist, sure, but full autonomy is another matter. Too many outside-the-box cases to test the software for. I’m not as pessimistic as she is, but I think the term “autopilot” is interpreted as full autonomous driving. In an airplane, autopilot can hold altitude and course, in many cases make an approach to landing, and avoid other airplanes in the air (to a degree... it alerts the pilot of traffic on a collision course). But as far as I know humans are still taxiing airliners to the gate... where the most unexpected chaos exists.
 
My wife tests software for a living... she is skeptical that full autonomous driving will ever be safe in our lifetime.
I agree, there are way too many variables in play and the algorithms are currently not up to this complex task. The first baby step would be driverless cars in a closed-course setting, maybe a huge office park or airport-type setting. It will be a long time before Elon's summoning your car from across the country will be possible.
 
My wife tests software for a living... she is skeptical that full autonomous driving will ever be safe in our lifetime.

I think that with traditional software programming (and testing) methods she is correct. It would be totally impossible to program for every possible situation and scenario that may come up. However, this problem is being approached with AI and machine learning, which I think has a much better chance of success. I think it’s kind of hard for people who have been steeped in more traditional methods to get their heads wrapped around this... I know it is for me, at least.

Also, keep in mind that companies like Google have been operating self-driving cars for years now... the technology is getting close. I believe that the safety record of autonomous vehicles is much better than that of the general population right now.
 
I think that with traditional software programming (and testing) methods she is correct. It would be totally impossible to program for every possible situation and scenario that may come up.
You are right, and until the day that a car can be programmed for every possible situation and scenario, there will not and should not be any self-driving cars on the roads.
 
You are right, and until the day that a car can be programmed for every possible situation and scenario, there will not and should not be any self-driving cars on the roads.

I think you’re missing my point. I don’t believe that it’s possible to discretely program for every possible situation. That’s why we need a paradigm shift to AI and machine learning. The AI in the cars will teach itself to drive and will be able to handle situations and scenarios that we wouldn’t have even thought to program in.
 
I'm thinking like a computer here. Eliminating all human drivers would help. Then, only environmental factors and pedestrians would be an issue. Every car would know what every other car was "thinking". A second thought, eliminating pedestrians would make it even better. In fact, if we allow the machines to do everything, it'll be just like all the great movies that predict what will happen when machines take over, oh wait...
 
Downed power lines are going to be a tricky one (especially versus those traffic-counting strips).

If the system can detect {generalized object} and map its trajectory, then it can work in a fail-stop environment.
I do think that there will be some really tough keep-driving cases, such as the roundabout at State and Ellsworth in Ann Arbor. I kid you not, 10% of the time I drive through, someone does an illegal maneuver which requires braking and/or creating a new lane. As long as the system allows time/distance to panic stop, it would avoid most accidents, but the ability to decide to drive on the shoulder to avoid a collision is an interesting programming case...
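To put rough numbers on the panic-stop point, here's a quick Python sketch; the 0.5 s system reaction time and 0.8 g deceleration are my assumptions, not measured figures:

G = 9.81  # m/s^2

def stopping_distance(speed_mps, reaction_s=0.5, decel_g=0.8):
    """Distance covered during reaction time plus a constant-deceleration stop."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_g * G)

for mph in (25, 45, 70):
    v = mph * 0.44704  # mph -> m/s
    print(f"{mph} mph -> about {stopping_distance(v):.0f} m to panic stop")
# 25 mph -> ~14 m, 45 mph -> ~36 m, 70 mph -> ~78 m

At roundabout speeds the margin is only a couple of car lengths, which is why the "create a new lane" option is so interesting.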
 
My wife tests software for a living... she is skeptical that full autonomous driving will ever be safe in our lifetime. Driver assist, sure, but full autonomy is another matter. Too many outside-the-box cases to test the software for. I’m not as pessimistic as she is, but I think the term “autopilot” is interpreted as full autonomous driving. In an airplane, autopilot can hold altitude and course, in many cases make an approach to landing, and avoid other airplanes in the air (to a degree... it alerts the pilot of traffic on a collision course). But as far as I know humans are still taxiing airliners to the gate... where the most unexpected chaos exists.
I agree with what @BrettS said above. The comparison is between apples and oranges. Traditional software testing approaches cannot and should not be applied to AI systems. A new testing system would need to be developed.

As an example, an AI system could be tested 100 times with a particular scenario and do the right thing each time. Then on the 101st time, it inexplicably makes a poor decision. One cannot be 100% certain that the AI will always make the right decision, so we need to introduce statistics and probability to describe how well the AI performs.

The problem is that most people treat AI systems like traditionally programmed applications, which causes all sorts of issues due to assumptions and implications that do not apply.
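That "100 passes, then a surprise" point is exactly why a statistical bound beats a pass/fail count. A quick Python sketch of the standard zero-failure confidence bound (my choice of method, nothing vendor-specific): even 100 clean runs of a scenario only rule out a per-run failure rate above about 3%.

def failure_rate_upper_bound(trials, confidence=0.95):
    """Upper bound on per-run failure probability consistent with seeing
    zero failures in `trials` independent runs (exact binomial)."""
    return 1.0 - (1.0 - confidence) ** (1.0 / trials)

print(f"{failure_rate_upper_bound(100):.2%}")     # ~2.95% -- 100 clean runs prove little
print(f"{failure_rate_upper_bound(10_000):.4%}")  # ~0.0300%

You need orders of magnitude more trials than intuition suggests before you can claim the AI is reliably safe in a scenario.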
 
If FSD cannot be programmed for anything and everything, then it cannot be allowed to be used on public roadways.

The point of machine learning is that it doesn't have to be. Quoting from an article about the AI that beat a world champ at Dota: Did Elon Musk’s AI champ destroy humans at video games? It’s complicated

As OpenAI’s Dota bot shows, he says, we don’t have to teach computers complexity: they can learn it themselves.

I'm sure all of us have encountered things on the road that we were never taught about or prepared for, but by drawing on past experience we can usually (hopefully) prevent that unexpected event from causing an accident. I agree not every eventuality can be pre-programmed. The computer needs to understand and evaluate a situation on the fly. Machine learning still has a ways to go before I'd be comfortable with it driving, but that game match was a pretty good first step on that path.
 
The problem is that most people treat AI systems like traditionally programmed applications, which causes all sorts of issues due to assumptions and implications that do not apply.

Yeah, the NN/AI takes the space of all possible inputs over some span of time and whittles it down to an acceleration command and a steering command. Without the ability to see/understand/validate how it is doing that, it is difficult to know how sensible its logic is.

Starman: "I watched you very carefully. Red light stop, green light go, yellow light go very fast."

My take is that the driving could be broken into discrete things: object tracking, lane following, defensive driving, and those things can be tested individually. Show me the lane, show the object, show the unknowns. The Tesla test drive video showed the bounding box and lane lines, so you could see at some level how it was doing. With the large parallel computing systems, the algorithms can be run against a large data set quickly for verification, but knowing the how goes a long way toward confidence.
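One way to picture that decomposition (a toy Python sketch; the interfaces and thresholds are invented, not Tesla's): each stage produces a small, checkable output, so you can test object tracking and lane following separately before wiring them into a planner.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrackedObject:          # output of the object-tracking stage
    x: float                  # meters ahead of ego car
    y: float                  # meters from lane center
    closing_speed: float      # m/s, >0 means we're gaining on it

@dataclass
class LaneEstimate:           # output of the lane-following stage
    center_offset: float      # ego offset from lane center (m)

def plan(objects: List[TrackedObject], lane: LaneEstimate) -> Tuple[float, float]:
    """Toy planner: steer toward lane center, brake for anything in-lane
    and closing. Each upstream stage can be unit-tested on its own."""
    steer = -0.1 * lane.center_offset
    accel = 1.0  # cruise
    for obj in objects:
        if abs(obj.y) < 1.5 and obj.closing_speed > 0 and obj.x < 50:
            accel = -3.0  # brake
    return steer, accel

A monolithic end-to-end net gives you none of those intermediate outputs to inspect, which is the "knowing the how" problem.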
 
If FSD cannot be programmed for anything and everything, then it cannot be allowed to be used on public roadways. There are just too many things that could go wrong, and the price for errors could be too high (fatal).

I don’t think that will ever be possible. Heck, even humans aren’t ‘programmed’ for every situation... especially right after someone passes their driving test. There are accidents caused by new drivers that could have been avoided by a more experienced driver. And there are accidents caused by experienced drivers that could have been avoided with different experience.

There will be situations where the AI makes the wrong decision and causes an accident. And there will be situations where the AI makes a wrong decision and kills someone. But you have to compare that to the humans that are driving the cars now. When AI causes fewer accidents and fewer fatalities than humans, then it is ready. We can’t wait for it to be perfect (or even assume that it ever will be).

There are just too many things that could potentially happen. Instead of being prepared to handle every possible situation, the AI needs to be programmed to recognize when it is in a situation where it is no longer in control and programmed to safely bring the car to a stop at that point until the issue can be resolved.
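Something like this little state machine is what I have in mind (Python, purely illustrative; the 0.5 confidence threshold and 8-second window are made up):

from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    TAKEOVER_REQUEST = auto()   # alarms on, still steering (what AP does today)
    MINIMAL_RISK_STOP = auto()  # no driver response: slow to a controlled stop

def supervise(confidence, driver_responded, seconds_since_alert, mode):
    """Toy supervisor: degrade gracefully when the system knows it's lost."""
    if mode is Mode.NORMAL and confidence < 0.5:
        return Mode.TAKEOVER_REQUEST
    if mode is Mode.TAKEOVER_REQUEST:
        if driver_responded:
            return Mode.NORMAL
        if seconds_since_alert > 8.0:  # give the human a window first
            return Mode.MINIMAL_RISK_STOP
    return mode

The hard part isn't the state machine, it's getting an honest confidence signal out of the perception stack in the first place.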
 
If FSD cannot be programmed for anything and everything, then it cannot be allowed to be used on public roadways. There are just too many things that could go wrong, and the price for errors could be too high (fatal).
That’s illogical. What you’re suggesting is that, unless you can achieve zero fatalities, you would not accept a 50% or 90% reduction in fatalities.

I think where we will struggle in the shift is when AI does much better than humans in certain scenarios, but worse in other areas compared to humans. The whole “two steps forward, one step back” is unlikely to be something society will accept.
 
That’s illogical. What you’re suggesting is that, unless you can achieve zero fatalities, you would not accept a 50% or 90% reduction in fatalities.

I think where we will struggle in the shift is when AI does much better than humans in certain scenarios, but worse in other areas compared to humans. The whole “two steps forward, one step back” is unlikely to be something society will accept.
People will accept human error far more easily than they'll accept a technical error.
 
Driving an AP1 MX, it routinely slows or stops for cars. I'm not sure the posted video is proof the car wouldn't stop, because the car has incredible braking, but no one should be stupid enough to find out. Brake when comfortable. I become uncomfortable with the braking feature if the following distance is set too closely for my personal comfort level. Too short a following distance for personal comfort doesn't necessarily mean the car couldn't or wouldn't stop; it may only mean that it hadn't started braking within your personal tolerance distance.

Definitely have my doubts about Level 5 anywhere in the near future. Our governments are unwilling to invest in roads that are friendly to automated driving. If we got serious about creating and maintaining roads that were well marked and predictable, then full auto could be done quickly. Realistically, governments are cash strapped and this type of road design and maintenance is likely too expensive to ever happen.
 
If the roads and infrastructure (signaling, warning, and traffic signs) are redesigned to accommodate autonomous driving and the cars talk to one another, then the algorithms could be simplified as well.
Definitely have my doubts about Level 5 anywhere in the near future. Our governments are unwilling to invest in roads that are friendly to automated driving. If we got serious about creating and maintaining roads that were well marked and predictable, then full auto could be done quickly. Realistically, governments are cash strapped and this type of road design and maintenance is likely too expensive to ever happen.
This has been part of my observation as well. That and the fact that cars, traffic signs and lights should be communicating with one another. We have a long way to go.
 
TACC seems to be fairly sophisticated for a cruise control (in a good way). It seems to calculate the acceleration of vehicles ahead instead of simply maintaining a set distance. For example: if on the highway with TACC set to 7 car lengths, someone passes me and cuts me off just in front of my bumper but is accelerating, TACC does not reduce speed at all. It’s amazing like that.
Exactly... TACC is way better in a Tesla than in the majority of other vehicles, including MB and Audi, which will brake hard when anybody cuts in front.
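My guess at why it behaves that way (a toy Python sketch; the gains and limits are invented, not Tesla's): if the controller keys off closing speed and time gap rather than raw distance, a close cut-in that's accelerating away produces almost no braking.

def follow_accel(gap_m, ego_speed, lead_speed, time_gap_s=1.5):
    """Toy follower: react to closing speed, not just raw gap, so a close
    cut-in that is pulling away draws little or no braking."""
    closing = ego_speed - lead_speed              # >0 means we're gaining
    gap_error = gap_m - ego_speed * time_gap_s    # shortfall vs desired gap
    accel = 0.05 * gap_error - 0.5 * closing      # simple PD-style law
    return max(-6.0, min(1.5, accel))             # comfort/ability limits

# Cut-in only 10 m ahead at highway speed, but pulling away at +3 m/s:
print(follow_accel(gap_m=10, ego_speed=30, lead_speed=33))  # ~ -0.25 m/s^2, a feather

A pure distance-keeping controller would see that same 10 m gap and slam the brakes, which matches the hard-braking behavior described in other cars.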