Autopilot problems and solutions

Autopilot is still a work in progress, and Elon keeps telling us it will be perfected six months from now. So far they have done a remarkable job of following the car in front, reading speed limit and stop signs, and warning about or preventing a turn into the adjacent lane if there is a car there. They have not solved the problem of detecting stationary objects in the road ahead. Teslas have run at full speed into a parked fire truck, a lamp post, semi-trailers, and a cement road divider, with no attempt to slow down at all before the collision. Tesla has emphasized that the driver must remain alert and able to take over in an instant. I think the best way to ensure the driver is alert is to not let him use Autopilot.

Tesla uses an array of cameras, sonar and front radar to detect objects around it. Based on the above accidents, it appears there is heavy reliance on the radar and little or no reliance on the cameras to determine distance to objects ahead. Radar works best when its signal can be reflected back by a metal surface. It does not work well on humans, wood, plastic, etc. It also does not work well on a steel street pole if the pole is round, because almost all of the reflected signal goes off to the side instead of straight back. The fire truck was not detected because it was parked at an angle, so the radar signal was deflected to the sides. Tesla's three forward-looking cameras are grouped in the top centre of the windshield: long range, medium and wide angle. They are not arranged or used for binocular vision.

Binocular vision is what we use to determine distance to objects. Try this little experiment. Using only one eye, reach out with your arm so your hand is about a foot to the side of your monitor. Point one finger at your monitor, and move your hand in to touch the top corner of your monitor. Most people will miss on their first try; afterwards you learn how far to reach, so the second or third attempt is successful, but this is not due to improving vision, just remembering muscle position. Doing this with both eyes open is easy.

Edge detection software can use triangulation with two cameras looking at the same scene to determine distance. By mapping edges, similar shapes can be identified, and the displacement of these same shapes from one camera to the other is used to triangulate and calculate distance. This can be very fast and accurate. With only one camera, an object in front has to grow larger, taking up more pixels in the image, and only after a short time can the system use the rate of size increase to estimate distance. It is this time that is unacceptable: in an emergency situation you cannot wait a second or two before deciding action needs to be taken, but this is exactly what Tesla does. Their system is not accurate, and they actually delay making a determination about what is observed to avoid false positives, which would otherwise cause needless and disconcerting braking.
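To make the triangulation step concrete, here is a minimal Python sketch of the standard rectified-stereo depth formula. The focal length and baseline are assumed numbers picked for illustration; none of this is Tesla's actual code:

```python
# Minimal sketch of depth from stereo disparity (illustrative only).
# Assumes a rectified pair: identical cameras, horizontal baseline.

FOCAL_PX = 1000.0   # assumed focal length, in pixels
BASELINE_M = 0.30   # assumed distance between the two cameras, in metres

def depth_from_disparity(x_left_px: float, x_right_px: float) -> float:
    """Triangulate the distance to an edge matched in both images.

    x_left_px / x_right_px: horizontal pixel position of the same edge
    in the left and right images. A larger disparity means a closer object.
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("a valid match must shift toward the left camera")
    return FOCAL_PX * BASELINE_M / disparity

# An edge at pixel 640 in the left image and 628 in the right image
# has a 12 px disparity: 1000 * 0.30 / 12 = 25 m ahead.
print(depth_from_disparity(640, 628))  # 25.0
```

Note that this is a single-frame computation: there is no waiting for an object to grow in the image, which is exactly the delay complained about above.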

I believe doubling up on binocular vision would be best, with two pairs of cameras in the top corners of the windshield. It is easier to match edges between two cameras placed close together, but the result is less accurate than with cameras placed far apart. Placing each pair diagonally lets it detect both vertical and horizontal edges more easily. With an approximate distance known from each pair, it is easier to match edges with the opposite pair for increased accuracy. I believe this system would be much better than LIDAR because cameras have much higher resolution and a higher scanning rate.
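The close-together-versus-far-apart trade-off can be put into numbers. From the depth formula Z = f·B/d, a fixed matching error of a fraction of a pixel turns into a depth error that shrinks as the baseline B grows. A quick illustrative calculation, all values assumed:

```python
# Sketch: stereo depth error vs. camera baseline (values assumed).
# From Z = f*B/d, a matching error of sigma_d pixels produces a depth
# error of roughly sigma_d * Z**2 / (f * B).

FOCAL_PX = 1000.0   # assumed focal length, in pixels
SIGMA_D_PX = 0.5    # assumed sub-pixel edge-matching error

def depth_error_m(distance_m: float, baseline_m: float) -> float:
    return SIGMA_D_PX * distance_m ** 2 / (FOCAL_PX * baseline_m)

for baseline_m in (0.10, 0.30, 1.20):  # narrow pair vs. windshield-wide pair
    err = depth_error_m(50.0, baseline_m)
    print(f"baseline {baseline_m:.2f} m -> +/- {err:.1f} m error at 50 m")
# baseline 0.10 m -> +/- 12.5 m error at 50 m
# baseline 0.30 m -> +/- 4.2 m error at 50 m
# baseline 1.20 m -> +/- 1.0 m error at 50 m
```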

In my opinion, Autopilot should not be used until the car can reliably determine that there are objects in front and take appropriate action. Binocular vision is the obvious choice; Tesla engineers must have rejected it. Why? Cost? I think they need to re-evaluate.
 
...In my opinion, Autopilot should not be used until the car can reliably determine that there are objects in front and take appropriate action...

There have been two camps:

Waymo does not trust humans' driving skills, so it will not release its products to the public until its automation can replace the human driver.

On the other hand, Tesla feels that since humans have earned a driver's license, they can be trusted to operate an unfinished, beta product called Autopilot.

Tesla trusts human skills to the point that it sold AP1 hardware when it was not working at all; Autopilot was not activated until a year later.

Same story with AP2: it didn't work at all at first because it wasn't activated, until a slow incremental rollout later.

With any technology, there are risks and limitations. Gasoline cars burn up every day, but we don't wait until the technology is perfected to make sure no one will die from a gasoline car fire.

Same here with Autopilot: if you don't accept the risks and limitations, then don't buy it!
 
...Binocular vision is the obvious choice; Tesla engineers must have rejected it. Why? Cost? I think they need to re-evaluate.

Tesla promises Tesla Vision, so it has not rejected binocular vision. It currently has 3 cameras on the windshield and 5 more surrounding the car.

We need someone there 24/7, sleeping under their desk (the way Elon Musk does), to activate the rest of the cameras and get Tesla Vision going.

The issue is limited human time and resources to speed up the program.
 
Elon has already indicated that they have at least two solution paths. I really don't read much of the current solution into future solutions. They are barely using any cameras today and have an "enough to keep people somewhat happy" solution.

If you read a number of articles from the past few months, the problem really isn't detecting stationary objects; the system already seems able to do that pretty well. The issue is what you do with stationary objects.

Let's say that you are on a busy interstate going 70 mph and a stationary object just appears. What do you do? You shouldn't really slam on the brakes: there are 5 cars within 100 ft behind you, and there are 3 cars on each side of you.

Stationary objects are easy. What to do with them is hard.
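For what it's worth, even the "what to do" question can be framed as a simple decision policy. A toy Python sketch, with entirely hypothetical thresholds and inputs, nothing from any real autonomous-driving stack:

```python
# Toy decision policy for a detected stationary obstacle (hypothetical;
# real planners weigh traffic behind and beside far more carefully).

def plan_for_obstacle(dist_m: float, speed_mps: float,
                      left_lane_free: bool, right_lane_free: bool,
                      max_brake_mps2: float = 8.0) -> str:
    # Distance needed to stop: v^2 / (2a), ignoring reaction time.
    stopping_dist = speed_mps ** 2 / (2 * max_brake_mps2)
    if dist_m > 2 * stopping_dist:
        return "gentle brake"   # plenty of room, slow smoothly
    if left_lane_free or right_lane_free:
        return "lane change"    # steer around if a gap exists
    if dist_m > stopping_dist:
        return "hard brake"     # can still stop in time
    return "full brake"         # cannot stop, but shed as much speed as possible

# 70 mph is about 31.3 m/s; stopping distance at 8 m/s^2 is about 61 m.
print(plan_for_obstacle(dist_m=80.0, speed_mps=31.3,
                        left_lane_free=False, right_lane_free=False))
# -> hard brake
```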
 
Let's say that you are on a busy interstate going 70 mph and a stationary object just appears. What do you do? You shouldn't really slam on the brakes: there are 5 cars within 100 ft behind you, and there are 3 cars on each side of you.

Stationary objects are easy. What to do with them is hard.

So you think the best option is to keep the power on and drive straight into it???

Slamming on the brakes is the simplest option; you might argue a lane change (deftly, into available space) would be better, but that is not a good enough reason to do nothing.
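Some back-of-envelope numbers to support that (my own assumed figures, not from any report): even when a full stop is impossible, braking sharply cuts the impact speed, which is always better than doing nothing:

```python
# Back-of-envelope: braking from 70 mph vs. not braking (values assumed).
MPH_TO_MPS = 0.44704
v = 70 * MPH_TO_MPS    # about 31.3 m/s
a = 8.0                # assumed hard-braking deceleration, m/s^2

stop_dist = v ** 2 / (2 * a)   # about 61 m for a full stop
# If the obstacle is only 40 m away, full braking still sheds speed:
d = 40.0
v_impact = max(0.0, v ** 2 - 2 * a * d) ** 0.5
print(f"stop distance {stop_dist:.0f} m, impact at {v_impact / MPH_TO_MPS:.0f} mph")
# -> stop distance 61 m, impact at 41 mph (vs. 70 mph with no braking)
```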
 
Indeed!

Uber's system detected the lady walking her bicycle across the road, but it was programmed to ignore her to promote a smooth ride by avoiding sudden system braking.

The Uber automatic braking system had been deliberately turned off because they were testing some other new feature. The driver would have known that and should have been extra cautious. Instead of doing his job, he was looking down at his phone.
 
Tam, I respectfully disagree. I do not believe that Tesla would deliberately drive, without any slowdown whatsoever, into a cement barrier if they knew that was what was ahead. They also have rear sensors and could have known that there was no immediately following vehicle, so in this case stopping would be the obvious course of action.
 
"So well that Uber has to, as you say, "deliberately turned off" its automatic braking system."

I just realized I missed this important part of your statement. I said it was turned off; I didn't say anything about "has to", and I do not understand how you conclude that the car would normally detect but ignore objects.
 
One more point. You agree that the software delays its decision to brake because it must filter out "false positives". I believe the fact that the sensors generate false positives in the first place proves they are not good enough.
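For context on what that delay usually is: perception stacks commonly require the same detection across several consecutive frames before acting, trading false positives for reaction time. A minimal sketch with assumed numbers (the frame rate and threshold are mine, not Tesla's):

```python
# Minimal sketch of a consecutive-frame confirmation filter and the
# latency it adds (frame rate and threshold are assumed values).

FRAME_HZ = 36          # assumed sensor update rate
CONFIRM_FRAMES = 12    # assumed: require 12 consecutive detections

def confirmed(detections: list) -> bool:
    """True once the last CONFIRM_FRAMES frames all saw the object."""
    return (len(detections) >= CONFIRM_FRAMES
            and all(detections[-CONFIRM_FRAMES:]))

print(confirmed([True] * 12))   # True: object confirmed, act on it
added_delay_s = CONFIRM_FRAMES / FRAME_HZ
print(f"confirmation adds {added_delay_s:.2f} s")  # 0.33 s
# At 70 mph (~31 m/s) that 0.33 s of filtering costs roughly 10 m of
# distance before braking even begins.
```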
 
...into a cement barrier...

I am no expert, but this is my understanding: currently, Tesla Vision is not activated, so the system relies primarily on radar to brake for an obstacle.

Radar sends out a signal, many objects and obstacles bounce it back, and the radar receives those reflected signals very well.

Because so many signals come back so well, programmers need to decide which signals are non-threatening and which ones are deadly.

So the trick is to focus on radar signals from moving objects and ignore signals from stationary objects such as poles, signage, bridges, overpasses...

That trick works very well as long as stationary objects do not appear right in front of the car.
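In code terms, the trick being described is roughly a Doppler filter: a return whose closing speed equals your own ground speed is standing still, so it gets dropped. A simplified sketch (illustrative only; production radar trackers are far more sophisticated):

```python
# Simplified sketch of the stationary-target filter described above
# (illustrative only, not any manufacturer's actual algorithm).

def is_stationary(range_rate_mps: float, ego_speed_mps: float,
                  tol_mps: float = 1.0) -> bool:
    """A radar return closing at exactly our own speed is standing still.

    range_rate_mps: Doppler range rate of the return (negative means
    approaching). A stationary object closes at -ego_speed_mps.
    """
    return abs(range_rate_mps + ego_speed_mps) < tol_mps

ego = 31.0  # about 70 mph
# A bridge abutment closes at our full speed -> filtered out (ignored):
print(is_stationary(range_rate_mps=-31.0, ego_speed_mps=ego))  # True
# A car ahead doing 25 m/s closes at only 6 m/s -> kept and tracked:
print(is_stationary(range_rate_mps=-6.0, ego_speed_mps=ego))   # False
```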

Well, what do you do when that deadly cement barrier suddenly appears right in front of the car?

Currently, the algorithm is to ignore it, which of course is deadly.

It's deadly, but that is all car engineers can figure out right now.

Maybe in the future, someone can write a better program for radar.

And remember Tesla Vision? Someday it will be able to help out that flawed and deadly radar method by using vision to supplement the radar.

The camera can detect the cement barrier right now (that's how it appears in a picture), but someone needs to develop a program that labels it as a very bad obstacle to be avoided; a sketch of what that could look like follows below.

But again, that is still in the future, when there are enough engineers and enough time to do it.
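As a sketch of what "develop a program and label it" could mean at its very simplest, here is a hypothetical fusion rule that lets a vision label overrule the radar stationary-object filter. Every name and label here is invented for illustration:

```python
# Hypothetical vision/radar fusion rule (all names invented for
# illustration; not any real system's logic).
from typing import Optional

DEADLY_LABELS = {"concrete_barrier", "fire_truck", "semi_trailer"}

def should_brake(radar_target_moving: bool,
                 vision_label: Optional[str]) -> bool:
    if radar_target_moving:
        return True          # moving radar targets are already braked for
    # Stationary radar return: let vision decide whether it is deadly.
    return vision_label in DEADLY_LABELS

# Vision labels the stationary return as a barrier -> brake:
print(should_brake(radar_target_moving=False,
                   vision_label="concrete_barrier"))  # True
# No vision confirmation -> the radar filter stands (ignored):
print(should_brake(radar_target_moving=False, vision_label=None))  # False
```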
 
How many engineers do they need? Maybe they could get a few more from GM, Volvo, BMW and Ford? They seem to have a handle on it, hardware AND software. Tesla has had enough time.

Other competing systems have demonstrated good detection of objects because they use LIDAR, not because they have better engineers or programmers. Tesla does not use LIDAR because of its high price. Cameras have better resolution than LIDAR, and in turn, LIDAR has much higher resolution than radar.
 
...good detection of objects...

The issue is what to do once obstacles are detected.

Radar was able to detect the deadly tractor-trailer in Williston, Florida, and the deadly concrete barrier in Mountain View, CA, too!

The current radar industry algorithm is to avoid braking for detected stationary obstacles.

In the fatal Uber autonomous case, the NTSB found that the pedestrian's bicycle was clearly detected.

The pedestrian died because the issue is what the software does once the sensors have done their job of detecting an obstacle:

[Figure 2 from the NTSB preliminary report HWY18MH010]


Uber addresses the software's problem by hiring a safety officer as a driver, so that when the software fails, the safety officer takes over.

However, in so doing, the software code still does not get fixed.

I suspect it is much cheaper to hire a driver than to fix the software with numerous software engineers.
 