Does Tesla have a big math problem?
Why are people insisting on "ridiculous detail"?

Last time I checked, none of us drive around with the visual acuity of owls and yet we manage to drive safely at high speed.

Human eyes have incredible detail where they are focused. A human will look to where the road is in the distance and see high detail of what is approaching from that exact position.

A single side camera has a wide view of the world, so all the pixels are spread over the entire scene. There is no way to focus in on just where the distant road is. Keep in mind that the road in the distance will appear in different parts of the image depending on the curvature of the road and elevation changes; e.g. it might appear high in the frame if the road is coming down a hill.

This is why you need incredible detail from the sensor: you first need to determine where the road goes in the distance, and then focus in on that particular area for processing.

Using low-resolution images to try to identify objects at distance is subject to significant corner cases. For instance, a car that is a similar colour to the road may appear to be just road when sampled at low resolution.
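The "find the road, then look closely there" idea can be sketched in a few lines of NumPy. This is purely illustrative: the crop size, the pooling factor, and the notion that FSD does anything like this are my assumptions, not Tesla's actual pipeline.

```python
import numpy as np

def downsample(img, factor):
    """Mean-pool an H x W image by an integer factor."""
    h, w = img.shape[0] // factor, img.shape[1] // factor
    return img[:h * factor, :w * factor].reshape(h, factor, w, factor).mean(axis=(1, 3))

def foveate(frame, center, crop=256, factor=4):
    """Cheap wide view plus a full-resolution crop where the distant road is.

    frame  : full-resolution camera image (grayscale here for simplicity)
    center : (row, col) of the estimated distant road position (hypothetical input)
    """
    wide = downsample(frame, factor)            # coarse context: every pixel averaged
    r, c = center
    half = crop // 2
    r0, c0 = max(r - half, 0), max(c - half, 0)
    fovea = frame[r0:r0 + crop, c0:c0 + crop]   # native-resolution detail patch
    return wide, fovea

frame = np.zeros((960, 1280))                   # stand-in for a camera frame
wide, fovea = foveate(frame, center=(300, 900))
print(wide.shape, fovea.shape)                  # (240, 320) (256, 256)
```

The point of the sketch is that the detail patch is only useful once something (a map, a coarse pass, geometry) has told you where to place it, which is the chicken-and-egg problem described above.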
 
1 - I don't think I've ever seen a road with a 75 MPH speed limit that has people entering the road via a stop-sign. A car going 75 on a road with stop signs is probably going 15-20 MPH over the speed limit.

It's very common to have a 70 mph speed limit (so many cars travelling >75 mph) and uncontrolled crossings or merges here in the UK. I hope the FSD hardware has not been spec'd to work only on perfect US roads.

A common set of intersections on UK roads. A27 (70mph limit) Arundel to Fontwell in West Sussex.

[Attachment: Screenshot (149).png]
 
It's very common to have a 70 mph speed limit (so many cars travelling >75 mph) and uncontrolled crossings or merges here in the UK. I hope the FSD hardware has not been spec'd to work only on perfect US roads.
I take it you've never been here? Haha. Our roads are pretty far from perfect, and uncontrolled crossings on high-speed roads are very common.
 
In the UK we have a written part to the driving test. People are asked to highlight potential hazards in images like this one.

The resolution of that image is 960×640.

You don't need massive resolution to drive; you do need intelligence and the ability to predict what each vehicle is about to do.

Ask advanced/police drivers how they are taught to survey the road: they aren't reacting to the road situation right now, they are planning ahead. That is the real challenge for Tesla and FSD.

[Attachment: s960_hazard-perception-test.jpg]
 
You don't need massive resolution to drive; you do need intelligence and the ability to predict what each vehicle is about to do.

You don't need massive resolution, but I think there are cases where sufficiently high resolution is important, for example detecting small hazards; without enough resolution you might miss them. You just need high enough resolution to detect and track all relevant objects. After that, I agree that planning and prediction are critical. Some objects, like pedestrians or other cars, can make sudden or erratic movements, so predicting their paths can be very challenging.
 
I think resolution will be the Achilles' heel of the system. At roundabouts and crossroads it's going to struggle to see traffic coming from a reasonable distance.

If Tesla can really get their AI top notch, I can't see it being an issue. In the picture I posted, look at the lower centre-right, next to the white car: can you 'see' the white van coming down the road? It must be less than 10 pixels in size, but any human can tell straight away what it is and which direction it's likely coming from, even in a still image.

Resolution is like LIDAR: just because you can gather more sensory data doesn't make the system better at driving. What the system has to understand is how to react. Nearly all the disengagements we see with FSD beta have nothing to do with lack of vision, and everything to do with the car not knowing which lane it should be in, how to drive around a roundabout, why it shouldn't try to drive into a train at a level crossing, etc.
 
What evidence do we have that FSD runs at the camera output spec?

In other applications, it is pretty common for images to be downsampled before they are used as input. So you could be running a hardware suite of 4K HDR cams running at 120fps (aka "marketing specs") but only be feeding the NN 640x480x8 at 30fps.

For FSD, I would expect a similar kind of abstraction so that the camera specs can be changed without having to rebuild the NN.
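As a hedged sketch of what such an abstraction layer might look like (the resolutions, pooling factors, and frame-skip below are invented for illustration; nothing here reflects Tesla's actual pipeline):

```python
import numpy as np

def to_nn_input(frame, out_hw=(480, 640)):
    """Mean-pool a high-resolution frame down to the NN's fixed input size.

    Assumes the sensor resolution is an integer multiple of the target,
    e.g. a hypothetical 2160x3840 ("4K") frame pooled to 480x640.
    """
    h, w = out_hw
    fh = frame.shape[0] // h   # 2160 / 480 = 4
    fw = frame.shape[1] // w   # 3840 / 640 = 6
    pooled = frame[:h * fh, :w * fw].reshape(h, fh, w, fw).mean(axis=(1, 3))
    return pooled.astype(np.uint8)

# A second stand-in for decoupling frame rate: keep every 4th frame, 120 fps -> 30 fps.
stream = (np.zeros((2160, 3840), dtype=np.uint8) for _ in range(120))
nn_frames = [to_nn_input(f) for i, f in enumerate(stream) if i % 4 == 0]
print(len(nn_frames), nn_frames[0].shape)   # 30 (480, 640)
```

With this kind of shim in front of the network, swapping in a different camera only means changing the pooling factors, not retraining or rebuilding the NN input layer.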
 
Recall the front-facing camera cluster includes a telephoto that covers 35°. With 1280 horizontal pixels, that's about 37 pixels per degree, or 0.027 degrees per pixel. If I remember my trigonometry, at the rated 250 meters that's a resolution of 12 cm per pixel.

To put things in context, 20/20 vision means reading a letter that subtends 5/60 of a degree (5 arcminutes). That would be only about 3 pixels for this camera, so it might not qualify as 20/20, but you don't need 20/20 to drive: in California the requirement is 20/40 in at least one eye.
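The arithmetic checks out; here it is as a small script. (The 1280-pixel horizontal resolution is an assumption about the HW2.x telephoto, not a confirmed spec.)

```python
import math

fov_deg, h_pixels = 35.0, 1280          # telephoto field of view; assumed horizontal resolution
ppd = h_pixels / fov_deg                # pixels per degree
deg_per_px = 1 / ppd                    # degrees per pixel
size_at_250m = 250 * math.tan(math.radians(deg_per_px))  # metres spanned by one pixel at 250 m

print(f"{ppd:.1f} px/deg, {deg_per_px:.4f} deg/px, {size_at_250m * 100:.1f} cm/px at 250 m")
# roughly 36.6 px/deg, 0.0273 deg/px, ~12 cm per pixel

# A 20/20 eye-chart letter subtends 5 arcminutes = 5/60 of a degree:
letter_px = (5 / 60) / deg_per_px
print(f"20/20 letter spans about {letter_px:.1f} px")  # ~3 px
```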
 
There's definitely a math problem as far as the 80m is concerned. It's simply not far enough to avoid a crash at typical highway speeds when turning 90 degrees from a side road.

I posted the maths in another thread here:

Take the slowest FSD car, the Model X 60D, which can accelerate 0-100 km/h in 6.2 seconds. That equates to an acceleration of about 4.5 m/s².

I'm going to be kind and assume that the Model X can turn 90 degrees in 0 seconds, which is impossible...

But anyway...

A car on the road is travelling at 100 km/h, which is 27.77 m/s.

Assume the car is 80.1 m away and can't be detected.

Solving for when the cars crash is pretty easy:

27.77t − 80.1 = ½(4.5)t²

That gives the quadratic 2.25t² − 27.77t + 80.1 = 0.

The cars crash 4.595 seconds after the Model X 60D decides it's all clear and turns.

That's assuming instant decision making and an instant 90-degree turn, so in reality the cars crash even earlier. Also note that this is 100 km/h (62 mph); if you're looking at 75 mph it's even worse...
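The quadratic above can be checked numerically; this just reproduces the same arithmetic under the same assumptions (instant turn, constant 4.5 m/s² acceleration):

```python
import math

v = 27.77    # approaching car's speed, m/s (100 km/h)
d0 = 80.1    # starting gap, m
a = 4.5      # Model X 60D acceleration, m/s^2 (0-100 km/h in 6.2 s)

# Gap closes when v*t - d0 = (a/2)*t^2, i.e. (a/2)t^2 - v*t + d0 = 0
A, B, C = a / 2, -v, d0
disc = B * B - 4 * A * C
t_hit = (-B - math.sqrt(disc)) / (2 * A)   # smaller root: first time positions coincide
print(f"collision after {t_hit:.2f} s")    # about 4.6 s
# The larger root (~7.75 s) is where the two position curves would cross again.
```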

So the side cameras have to either see much further than 80m, or FSD is never going to work on these roads (and will need to be limited to specific intersections and roads with lower limits). I also have no idea how they are going to deal with sun glare at sunrise/sunset with a single side camera. Maybe limit the hours the car can drive itself too...

I agree that the hardest case is probably motorcycles due to the small size and limited number of pixels to work with. Also potentially the worst outcome if the Tesla pulls out in front of one.


Wouldn’t the car you are pulling out on slow down?
 
Wouldn’t the car you are pulling out on slow down?

Probably. But most people wouldn't pull out in a situation that required the other car to take action in order to avoid a crash. They'd wait for a bigger gap.

The math and analysis are so wrong it's not worth bothering with.

What's wrong with the math? It's basic first principles for two objects in motion. Perhaps the 80m range quoted by Tesla was incorrect and they can see further to the sides with the latest FSD. Time will tell as always.
 
Probably. But most people wouldn't pull out in a situation that required the other car to take action in order to avoid a crash. They'd wait for a bigger gap.
Uh, no? You should read the NTSB report about the Tesla that was hit by an 18-wheeler. The driver of the 18-wheeler indicated that he expected the Tesla to slow down, but because the system did not see the truck and the driver wasn't paying attention, there was a crash that caused a fatality.
 
Uh, no? You should read the NTSB report about the Tesla that was hit by an 18-wheeler. The driver of the 18-wheeler indicated that he expected the Tesla to slow down, but because the system did not see the truck and the driver wasn't paying attention, there was a crash that caused a fatality.
Isn't this an argument that you shouldn't pull out and expect other vehicles to yield?
 
Isn't this an argument that you shouldn't pull out and expect other vehicles to yield?

Sure. But we aren't talking about what you should do in a perfect world; we are talking about real-life driver expectations. Drivers expect you will slow down. FSD will still be better than a human driver even if its fail rate on this particular task is the same as a human's.
 
What's wrong with the math? It's basic first principles for two objects in motion. Perhaps the 80m range quoted by Tesla was incorrect and they can see further to the sides with the latest FSD. Time will tell as always.

First, you use the wrong root of the equation: the correct collision time is 7.8 seconds, unless you are assuming the Tesla turns the wrong way and accelerates directly TOWARD the car. Second, you assume the car is incapable of corrective action once it starts the turn (it can, and in fact has done so several times during the beta). Third, you assume the first car does not slow down. Finally, you assume there is no merge lane on the road, which is highly unlikely for an uncontrolled turn onto a road where the speed limit is 60+ mph.

I also find it unlikely that a human driver would have either the patience or the judgement to figure out whether there was an 8-second gap in the traffic. In reality they would look for a "big enough" gap (whatever that means) and hope the approaching car slowed down enough.
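For what it's worth, the same kinematic assumptions used earlier in the thread (4.5 m/s² acceleration, 27.77 m/s traffic, instant turn) also give the minimum starting gap for which the approaching car never has to brake at all: the gap d0 + ½at² − vt is smallest at t = v/a, so d0 must exceed v²/2a. This is a sketch under those assumptions, not a claim about real intersections.

```python
v = 27.77   # approaching traffic, m/s (100 km/h)
a = 4.5     # turning car's acceleration, m/s^2 (Model X 60D estimate from the thread)

t_min = v / a              # moment of closest approach, if there is no collision
d_min = v ** 2 / (2 * a)   # minimum starting gap with zero margin (no car lengths, no braking)
print(f"closest approach at t = {t_min:.1f} s; need a gap over {d_min:.1f} m")
# t ≈ 6.2 s; gap just over ~85.7 m, i.e. more than the quoted 80 m camera range
```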
 