Welcome to Tesla Motors Club

Autopilot saving us (and them) against soft targets.

Sorry to make a new thread - but I think my request has been buried in the old one...

Oh Deer.

As per the video below... I narrowly missed a deer this week. AP wasn't engaged, but automatic emergency braking was enabled.

I managed to slam on my brakes and slowed enough to avoid impact (by inches, I suspect)... but going forward with autonomy, Tesla needs to be able to identify these threats and activate braking automatically, doesn't it? The car didn't respond to the deer at all - am I right in assuming it wouldn't respond to a human either? Why is that? Simply because the radar is only used to find vehicles? Can the neural net be trained to react to these scenarios? Does it bring back the whole can of worms re LIDAR?

Anyway - how do you think Tesla is getting info on these cases to develop the AP system? There wouldn't be any automatic upload in this case, for sure. The voice-activated bug report does not work in my car (UK), and I'm unclear on whether a bug report would necessarily be reviewed back at HQ anyway.

So.. yeah... are Tesla working on this? How do they gather info? Is there a way to pass on data about these incidents that is helpful or am I expecting too much?

 
...The car didn't respond to the deer at all...

It's one of the limitations or issues being worked on.

...Why is that?...

If you watch the NOVA program titled "Look Who's Driving", it explains that after the 2016 Autopilot death, the system was found to be designed for traffic going in the same direction. The semi-trailer truck was turning left, across the traffic in front of the Tesla, so there's no Automatic Emergency Braking for that deadly scenario just yet.

That deer should learn from that scenario and run in the same direction as your car, not across the traffic.

...Simply because radar is only used to find vehicles?...

RADAR is imperfect. In World War 2, it couldn't distinguish tiny, harmless aluminum chaff from big, dangerous bombers. That lack of differentiation continues: in the most recent case, Iran accidentally shot down a big passenger airplane because its radar couldn't tell that it was not a smaller, dangerous military missile.

...Does it bring back the whole can of worms re LIDAR?...

Yes.

LIDAR people say they can do it well, but what's the consumer choice right now?

Consumers might have to sell their houses, move to a roughly 50-square-mile area of Chandler, Arizona, and sign up for rideshare to enjoy the LIDAR from Waymo.

...Anyway - how do you think Tesla is getting info on these case to develop the AP system? There wouldn't be any automatic upload in this case for sure...

Tesla cars run in shadow mode whether you use Autopilot or not. In the background, the system constantly compares its own theoretical driving decisions with the driver's actual driving.
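To picture what that comparison might look like, here's a minimal sketch. All the names (`Action`, `plan_action`, `shadow_compare`) and thresholds are made-up assumptions for illustration - this is not Tesla's actual implementation, just the general idea of logging frames where the planner and the human disagree:

```python
# Hypothetical "shadow mode" comparison loop (illustrative only).
from dataclasses import dataclass

@dataclass
class Action:
    steering: float  # degrees, + = right
    braking: float   # 0.0 (none) .. 1.0 (full)

def plan_action(frame) -> Action:
    """Stand-in for the vision/planning stack's proposed action."""
    return Action(steering=0.0, braking=0.0)

def shadow_compare(frames, driver_actions, steer_tol=5.0, brake_tol=0.3):
    """Collect frames where planner and human disagree sharply.

    Disagreements are candidate training data: e.g. the human
    brakes hard for a deer that the planner ignored."""
    disagreements = []
    for i, (frame, human) in enumerate(zip(frames, driver_actions)):
        planned = plan_action(frame)
        if (abs(planned.steering - human.steering) > steer_tol
                or abs(planned.braking - human.braking) > brake_tol):
            disagreements.append((i, planned, human))
    return disagreements
```

In this sketch, a frame where the driver slams the brakes while the planner proposed nothing would be flagged and could be uploaded for review - which would explain how incidents like the deer near-miss reach Tesla without a manual bug report.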

...Bug report does not work on voice activation in my car (UK) and I am unclear on whether bug report would necessarily be reviewed back at HQ anyway...

You can log into your web account and report it under the "contact" form.

I think, in general, your reports are ignored because these are known issues that they are already working on.

It's just like an unfinished house that's all dark because there's no electrical wiring yet: that's a known problem, because it's an unfinished product.

...or am I expecting too much?...

Yes. It's an unfinished product called "beta". Consumers should expect there are limitations.
 
Autopilot, like human drivers, is designed primarily around visual identification - hence the 8 cameras and only one radar. The radar, as far as I know, is mostly used for adaptive cruise control (like on many other cars on the market over the last decade). The cameras give the car 360-degree vision at all times, and the 3 forward-facing cameras are all different angles (wide to narrow), covering from 70 meters to 250 meters, if I recall right. As a programmer, I am VERY impressed by the AI's algorithm.

Now, the biggest problem with using AI neural nets to learn how to drive is that there is a crap-load of tail cases to solve for - things like a deer jumping across the road, or a plastic trash bag that the wind is pulling towards you. As humans, we already know what the deer is, and that if we hit it, both we and the poor animal will suffer. We also know that the plastic bag, while scary and fast-moving, is harmless, and it's better to hit that than risk swerving on a freeway with busy traffic.
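The deer-vs-plastic-bag trade-off can be thought of as a cost comparison. Here's a toy sketch of that reasoning - the classes, costs, and thresholds are entirely made up for illustration, not anything Tesla actually uses:

```python
# Toy "tail case" decision: brake only when the expected cost of
# impact outweighs the risk of an emergency maneuver.
# All classes, costs, and thresholds are invented for illustration.

IMPACT_COST = {          # rough "cost of hitting it", arbitrary units
    "vehicle": 100,
    "pedestrian": 1000,
    "deer": 80,
    "plastic_bag": 0,    # harmless -- better to hit than to swerve
}

SWERVE_COST = 20         # risk of hard braking/swerving in traffic

def should_brake(obj_class: str, confidence: float) -> bool:
    """Weight the impact cost by detection confidence and compare
    it against the cost of an emergency maneuver."""
    expected_impact = IMPACT_COST.get(obj_class, 50) * confidence
    return expected_impact > SWERVE_COST
```

With these made-up numbers, a deer detected at 90% confidence triggers braking, while a plastic bag at any confidence does not - mirroring the human judgment described above. The hard part, of course, is the vision system producing a reliable `obj_class` and `confidence` in the first place.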

The big difference is that humans screw up both those scenarios every day, causing crashes, whereas one Autopilot crash caused by either scenario would be international news with huge fallout for Tesla. So they seem to be designing FSD in vertical slices, tackling one problem at a time (cars first, then road lines, then cones, then stop signs, then trash cans, etc.). As far as I know, deer, dogs, and trash bags are not reacted to by the AP system; whether it notices them in shadow mode is a different question. This video gives you a glimpse into how AP works, and it's pretty cool. As you can see, it drives like a human, primarily based on visual input:

 
Dumb question: How do people get videos such as those in posts #3 and #4 above, with all the overlays of what the computer is interpreting?
Some Tesla owners have managed to access the debug and developer views on their cars. I assume this is frowned upon by Tesla, and warranty-voiding modifications to the ICE computer may be required. This channel is a great source of AP clips with the debug view on: greentheonly
 