
AP can't recognize semis?

A Tesla with Autopilot sensors should be able to see semis (or any other vehicle) in an adjacent lane regardless of what the visualization shows and regardless of whether Autopilot is engaged.

Here's how to test this safely, even if you haven't engaged Autopilot:

Pull up alongside a semi (or any other vehicle) on your right and turn on your right directional signal (obviously, don't actually pull into the right lane).

If your car sees the vehicle, it will sound an audio alarm warning you not to turn right into it, and the visualization will probably also provide some warnings, e.g., red lane markings and red-colored vehicles.

If the car doesn't sound an alarm, then it's true that it can't see adjacent vehicles. If it does sound an alarm, it can see adjacent vehicles regardless of what the visualization shows and regardless of whether Autopilot is engaged.

Larry
 
I consider reliable detection of massive objects right next to the car mandatory to even think about using any of the autonomy functions. Because I value my life. And others' lives.

Right now Tesla’s system can’t do this. How the UI informs me is secondary.
 

Lol.
It sees cars and trucks beside you.
The display may show the vehicles wonky, but it knows they’re there.
 

Yes, it can see it almost all the time. But not 100%, and I'm sorry, for autonomy, 100% detection of a massive object is mandatory. Not 99.9999999%. I'm not talking about a bizarre self-driving corner case but about a fundamental function.

And no, your Model 3 and mine cannot detect a semi in the next lane with 100% accuracy. And it's not just the display. When a semi on the display teleports from one lane to the next, that's not a rendering problem but a fundamental problem with the vision AI, even if you don't like it.
 

It’s a display issue only. The car sees it. :)
 
Given that that many 9s would be better than human drivers manage, no, 100% is not mandatory.

Nothing in life is 100%

No. When I look at a semi, I see a semi. 100%. If you don't, then you should return your driver's license.

And frankly, if the 100% requirement doesn't apply to basic object detection, then FSD is never going to happen. The less-than-100% reliability is about corner cases, NOT basic object detection.
 

And if you had 360-degree vision, that comparison would be worth discussing, but...


Anyway, back in reality, FSD doesn't need to be (and won't be, nothing is) 100% perfect.

It just needs to be statistically significantly better than human drivers. Which the 99.9999999% you cited would be, by quite a margin.

Humans, as you might not be aware, frequently fail to notice things in other lanes and get into serious accidents in large numbers on a daily basis.



In case you're still fuzzy on the math here:

A vehicle that is 99.999% “safe” crashes once every 100k miles on average. Each nine we add after the decimal increases safety by an order of magnitude.

NHTSA says humans crash a car roughly every 500,000 miles. So even 99.9999% would be twice as "safe" as a human driver, since it'd be 1 crash every 1 million miles. Indeed, that (twice as safe as a human driver) is one of the benchmarks Elon originally mentioned as a target for self-driving cars.


The 99.9999999% you cited would be 1 crash every 1 billion miles on average. 2000 times safer than a human driver.
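For anyone who wants to check that arithmetic, here's a quick sketch (purely illustrative; it assumes, as above, that each "nine" of per-mile reliability multiplies the average miles between crashes by 10, and uses the 500,000-mile NHTSA figure cited above):

```python
# Sanity check on the nines math above.
# "n nines" of per-mile reliability means a crash probability of
# 10**-n per mile, i.e. one crash every 10**n miles on average.

HUMAN_MILES_PER_CRASH = 500_000  # NHTSA figure cited above

for nines in (5, 6, 9):          # 99.999%, 99.9999%, 99.9999999%
    miles_per_crash = 10 ** nines
    vs_human = miles_per_crash / HUMAN_MILES_PER_CRASH
    print(f"{nines} nines -> 1 crash per {miles_per_crash:,} miles "
          f"({vs_human:g}x the human rate)")

# 5 nines -> 1 crash per 100,000 miles (0.2x the human rate)
# 6 nines -> 1 crash per 1,000,000 miles (2x the human rate)
# 9 nines -> 1 crash per 1,000,000,000 miles (2000x the human rate)
```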
 
I think you are talking about something entirely different.

To make it simple: if the NN is presented with a picture of a semi, it needs to detect it with 100% certainty.

You are talking about 360 degree vision, decision making, human error in decision making or forgetting to look etc.

I’m talking about a *FUNDAMENTAL* function that my neurons can do with 100% accuracy. I’m surprised yours can’t?!
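For what it's worth, the gap between "detects it in almost every frame" and "never loses it" can be put in numbers. A tiny hypothetical sketch (the per-frame detection rate and frame count are made-up values, just to illustrate how per-frame misses compound; it also assumes independent frames, which a real tracker wouldn't):

```python
# Illustrative only: if a hypothetical detector spots the semi with
# probability p in each frame, and frames were independent (a big
# simplification), the chance of missing it n frames in a row is:

def miss_streak_probability(p: float, n: int) -> float:
    """Probability of n consecutive missed detections."""
    return (1 - p) ** n

# e.g. a 99% per-frame detector essentially never misses 10 straight frames:
print(miss_streak_probability(0.99, 10))   # ~1e-20
```

Whether that kind of statistical argument counts as the "100%" being demanded here is, of course, exactly what this thread is arguing about.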
 