
Does Tesla have a big math problem?

First, you used the wrong root of the equation. The correct collision time is 4.595 seconds, not 7.8 (and it would be even sooner, 2.413 seconds, if you assumed the Tesla turns the wrong way and accelerates directly TOWARD the car; see below).

7.8 seconds is an impossibility anyway. After 6.2 seconds the Tesla will have reached 100 km/h. If the cars haven't crashed by 6.2 seconds, they never will, as both will be doing 100 km/h.

To verify that the crash occurs at 4.595 seconds, you can calculate the distance traveled by both vehicles in that time.

Tesla (from stop):
d = ½ × a × t²
d = 0.5 × 4.5 × 4.595²
d = 47.5 m

Car on road (constant 100 km/h):
d = v × t
d = 27.77 × 4.595
d = 127.6 m

So in 4.595 seconds the Tesla makes it 47.5 m down the road. In the same time, the car on the highway travels 127.6 m. The car eats up the initial 80.1 m gap and crashes into the Tesla.

I should note that I've left out lots of variables to simplify the example; in reality the cars would crash even earlier. For instance, the Model X is 5 m long, so the car on the highway really only has to travel 75.1 m before hitting the rear of the car (the crash would occur at 4.0 seconds). There are other things to consider, like the time taken to turn 90 degrees (traction/g-force thresholds), slower acceleration at low battery levels, and other cars driving above the speed limit. All of these mean the Tesla doesn't make it quite as far down the road and the crash occurs earlier.

For completeness, if the Tesla drove directly toward the other car, they'd have a head-on crash in 2.413 seconds. You can substitute 2.413 into the above equations to see the Tesla would travel 13.1 m and the car 67 m (13.1 + 67 = 80.1 m, the full gap, for a head-on crash).
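If anyone wants to check the arithmetic, here is a minimal Python sketch of the same kinematics (the function names are mine; it assumes, as above, a constant 4.5 m/s² of acceleration up to 100 km/h):

Code:
import math

A = 4.5        # assumed Tesla acceleration, m/s^2 (held constant until 100 km/h)
V = 100 / 3.6  # highway car speed: 100 km/h = 27.78 m/s
GAP = 80.1     # initial gap between the cars, m

def crash_time_turning_away(gap, a=A, v=V):
    # First moment a constant-speed car catches a car accelerating from rest
    # in the same direction: solve v*t - 0.5*a*t^2 = gap for t.
    disc = v * v - 2 * a * gap
    if disc < 0:
        return None                    # the gap never closes
    t = (v - math.sqrt(disc)) / a      # smaller root = first contact
    return t if t <= v / a else None   # only valid before speeds match at t = v/a

def crash_time_head_on(gap, a=A, v=V):
    # Closing head-on: solve v*t + 0.5*a*t^2 = gap (positive root).
    return (math.sqrt(v * v + 2 * a * gap) - v) / a

print(crash_time_turning_away(GAP))        # ~4.595 s
print(crash_time_turning_away(GAP - 5.0))  # ~4.0 s, crediting the 5 m Model X length
print(crash_time_head_on(GAP))             # ~2.413 s

The larger root of the turning-away quadratic, (v + √disc)/a ≈ 7.8 s, is the spurious one: it lands after the 6.2-second mark where both cars are already doing 100 km/h.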

At the end of the day, all this maths is just to show that the initial figures provided by Tesla have a math problem when it comes to the side cameras' 80 m range. That's not to say they can't work around it, or install higher-resolution cameras or other hardware if it proves to be an issue. Or they can just limit FSD on the slower Teslas to certain areas and intersections that are within spec. There are many ways this can be addressed.
 
I think that for a self-driving car to have any hope of being safer than the average human, it has to be able to handle any situation that can be solved by math. Computers are very good at math problems, and you wouldn't want any of your accident budget wasted there.
There's also the issue that failing to yield to traffic on a highway is illegal, and self-driving cars are required to obey the law in every jurisdiction I'm aware of where they are legal.
Maybe this is the reason they're coming out with the Plaid Model S? :p
Anyway, no idea what the actual reliable range of the cameras is, but @rowdy's math looks correct.
 
The facts the OP pointed out are some of the many reasons I know the current hardware will never do Level 5 as Elon promised.

It will be capable of something, but it will always need a driver to take over, and the fault will always be the driver's even when FSD screws up. You can't have true autonomy until the software, and hence the company, is liable for any at-fault injuries or fatalities.
 
There are many intersections with blind turns where you can't see fast oncoming traffic until you are in the road. In this situation it doesn't matter what resolution or frame rate the camera has, because even a human can't see oncoming traffic until the vehicle moves forward into a lane. Humans navigate these not just by evaluating the initial conditions but by responding appropriately as conditions change.

If an oncoming car is detected late, the key feature will be avoiding the potential collision by accelerating, changing lanes, or potentially stopping if there is still time. Oncoming traffic should also be aware of intersections on high speed motorways and be prepared to slow down. Especially as more vehicles are equipped with AEB, this scenario will become less risky for both human and autonomous drivers.
 
Finally, you assume there is no merge lane on the road, which is highly unlikely for an uncontrolled turn onto a road where the speed limit is 60+ mph.

It really isn't that uncommon.

Here is an example that the Tesla nav system likes to try to get me to take: Google Maps (Eastbound on MN-50 trying to go south on US-52. Rather than staying on MN-50 and taking a ramp to US-52, it suggests a gravel road [222nd St] that leads to a stop sign and a right turn onto US-52.)
 
I don't think it is clear what exactly those ranges mean. Take a look at the (compressed) front dashcam footage, from a camera with a quoted 150 m range. You can easily identify vehicles at 450 meters, and with multiple frames, moving vehicles seem detectable at 1000 m or more.

If we triple the 80 m for the side camera, the oncoming car is roughly 8.6 seconds away (240 m at 27.8 m/s). Plenty of time, right?

I just confirmed the same thing with the repeater cameras. Unfortunately we can't check the side cameras until they are recorded to TeslaCam; it will be trivial to test then.

Looking at a recent random repeater-cam video, a car at 150 m (measured on Google Maps between intersections) occupies about a 10×10 pixel box and is easily recognizable within 10 frames of the video as it moves 20 pixels to the left.
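As a sanity check on that 10×10 box, here's a rough pinhole-camera estimate; the ~90° field of view and the 1.8 m car width are my assumptions, not published specs:

Code:
import math

def pixels_on_target(width_m, distance_m, hfov_deg=90.0, image_width_px=1280):
    # Approximate horizontal pixel count a target subtends, assuming a simple
    # pinhole model with pixels spread evenly across the field of view.
    subtended_deg = math.degrees(2 * math.atan(width_m / (2 * distance_m)))
    return subtended_deg / hfov_deg * image_width_px

# A ~1.8 m wide car at 150 m, with an assumed ~90 degree repeater FOV:
print(round(pixels_on_target(1.8, 150)))  # ~10 px wide, consistent with the 10x10 box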
 
Found a side camera video in a very similar situation from greentheonly. He got it from a salvage wreck, so the video is from a crash caused by human driving; to be clear, it has nothing to do with Autopilot. The oncoming car is clearly visible beyond 110 m. Note this video has been compressed once by Tesla, once by Twitter, plus whatever green did, so the original source video that the computer processes would be higher quality.

Watch the 5-second video here:
https://twitter.com/greentheonly/status/1258599345731129346

At roughly 1 second the white car appears across the highway, to the right of the side rail. It's much easier to see in the video:
https://i.imgur.com/fOxZXkr.png
https://i.imgur.com/olIzkd5.png

This lines up with this intersection on Google Maps. Measured at 110 m, being very conservative:
https://i.imgur.com/K2ekZz3.png
https://i.imgur.com/2jmLtsB.png

edit:
You can also barely make out an oncoming car in the other direction from the left pillar footage https://twitter.com/greentheonly/status/1258712314360070146

Using the same technique, I guesstimate the dark oncoming car is visible at around 190 m, using the upcoming intersection signs as landmarks and comparing to Google Street View.

After watching these videos, I think far-off oncoming vehicles are actually relatively easy to see and detect as long as you can look at the video rather than still images. They stand out very clearly against the background, and this seems like something a technique focused on labelling moving objects would have little trouble with.
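For what it's worth, even a crude frame-differencing pass picks out this kind of motion. Here's a minimal OpenCV sketch (the clip path and thresholds are placeholders of mine; a real perception stack would be far more sophisticated):

Code:
import cv2

cap = cv2.VideoCapture("clip.mp4")  # placeholder path for a saved camera clip
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)  # moving objects show up as bright pixels
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= 16:  # ignore single-pixel noise; a distant car is ~10x10 px
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)
    prev = gray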
 
I’ve been thinking. I wonder if Tesla really does have the correct hardware to do FSD.

Does 1280×960 at 36 fps have enough resolution and speed to operate at 75 mph? That's 110 feet per second, or about 3 feet per frame.

For safety reasons, the car has to take action in less than one second, so it has a limited number of frames in which to act.

Fewer frames mean more processing power and bandwidth are required to get through all the neural-net iterations in time.

So, while a big frames-per-second number looks good, is it really good enough?

Second question: is 1280×960 enough resolution? People see at significantly higher resolution, and we can differentiate objects at greater distances than a camera of this resolution can. That means Tesla has fewer frames in which it even has a chance to identify a car coming at you at 60-70 mph at a T intersection.

At the advertised 80 m side camera range (not a long way), a car coming at 70 mph will reach you in about 2.6 seconds. That means the car has to analyze the situation, react, and make it past the perpendicular opposing lanes within about 93 camera frames. Think of all the times you had to gun it so you weren't stuck at a T intersection for minutes.
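To put numbers on that frame budget, a quick back-of-envelope sketch (36 fps is the camera spec quoted above; everything else follows from it):

Code:
def frame_budget(range_m, speed_mph, fps=36):
    # Seconds and frames between first possible detection at the edge of
    # camera range and the arrival of a car approaching at constant speed.
    speed_ms = speed_mph * 0.44704  # mph -> m/s
    seconds = range_m / speed_ms
    return seconds, seconds * fps

print(frame_budget(80, 70))   # ~2.6 s and ~92 frames (the ~93 above is 2.6 s x 36)
print(frame_budget(150, 70))  # ~4.8 s and ~173 frames at the quoted 150 m front range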

HW3 sounds like a big step along the way, but I don't get how the current camera setup can handle high-speed-differential situations in a variety of conditions when there are no stereo side cameras to assist in image processing.

Can someone tell me how I'm wrong?

I think it comes down to the depth of the deep learning and the process of object detection and classification. Humans go outside-in and couldn't care less about many of the visual features these deep-learning systems spend time on. Humans can see better but need far less data beyond silhouettes, for example.
 
(quoting the greentheonly side camera video post above)
Impossible! No camera could possibly see what my eagle-eye vision can see. Autopilot will never be safe enough!!

So many amateurs think they have disproven Tesla's entire team of engineers with a single thought experiment. The default position should be
1) I am probably wrong
not
2) hundreds of experts are wrong, billions have been wasted, and there is rampant fraud
 
Oncoming-car speed detection, acceleration requirements, collision avoidance via vehicle-placement decisions (how far to pull out and how to properly use protected middle zones where present), motorcycles, visual obstructions, etc.

Then read this thread. I doubt those commenters read my thread here, and yet they arrive at the same conclusions I have.
 
