FSD with current hardware? Not possible.

Right. I was looking beyond 60 meters. Between 60 and 80 meters, beyond the range of the wide camera, there is a very small gap.

It's just a schematic, not an engineering document. That gap is probably there just to make clear where the camera's view angle ends; if they didn't put a gap in, the schematic would be harder to understand. In any case, cars are tracked from one camera to the next. The computer has no trouble understanding a car moving from one camera to another; they are not separate views to the NN. The way the neural net learns, it's effectively one big 360-degree camera.
 
The car can pull forward to get a better view.
If HW3 doubles the camera resolution, does that mean side camera view distance goes from 80 meters to 160 meters?
Unclear. Nobody knows what the 80 m figure is based on. It's probably just a rough estimate, since they weren't even using the side cameras when that illustration was posted. It remains to be seen what their algorithms can actually extract. Vision-based distance and velocity estimation is notoriously inaccurate at longer distances (which is why everyone is using at least radar in the front).
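
To put some rough intuition behind the resolution question: under a simple pinhole-camera model (my own back-of-the-envelope numbers below, not Tesla specs), the distance at which an object spans a fixed number of pixels scales linearly with resolution, so doubling resolution roughly doubles that distance. Detection range and accurate velocity estimation are different questions, though.

```python
# Back-of-the-envelope pinhole model (assumed numbers, not Tesla specs):
# an object of height H at distance d spans h_px = f_px * H / d pixels,
# so the range at which it spans a fixed pixel count scales with f_px.

def detection_range(f_px, obj_height_m, min_pixels):
    """Distance at which an object of given height still spans min_pixels."""
    return f_px * obj_height_m / min_pixels

f_px = 1000        # assumed focal length in pixels at current resolution
car_height = 1.5   # m, rough height of a car
min_px = 20        # assumed pixel height needed for a reliable detection

print(detection_range(f_px, car_height, min_px))      # 75.0 m at current res
print(detection_range(2 * f_px, car_height, min_px))  # 150.0 m at doubled res
```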
 
I simplified it just to show how inadequate the sensors are for situations like this.

You didn't "simplify" the math, you butchered it. Of course 3 seconds after the car has pulled out into fast moving traffic, it's not still going to be at the intersection traveling 0 mph. The way human drivers deal with this type of scenario, particularly if there are limited sight lines, is by accelerating harder. I fully expect FSD cars to do the same thing. If you're going to try to make a case that the hardware is not up to the task, you have to model the task accurately.

How do you think a truck with a human driver does it when sight lines are limited? A Tesla will have no problem. There are no slow Teslas, and Teslas do not get slower and more gutless with age like fossil cars and trucks do.

All this amounts to FUD (fear, uncertainty and doubt). Plant the seeds and they will grow, regardless of the truth of the matter. Humans are funny that way, which is why they tend to be terrible drivers. Ironically, most drivers will tell you they are God's gift to expert driving, LOL!
 
I agree that you will be accelerating and moving forward, but the first part of your acceleration only moves you across the oncoming lanes, toward the lanes going in the direction you need to complete your left turn.

Until then, the car has little or no motion in the direction that actually carries it away from the oncoming traffic. If that takes 2 seconds from a standstill, you have spent those 2 seconds in the lane of oncoming traffic, with a collision potentially only fractions of a second away.

Also, I don't think you can design an automatic driving system that relies on the cars in the oncoming traffic, some driven by humans, to all brake successfully every time. With enough occurrences, many collisions will result.
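
As a rough sanity check on that 2-second figure (my own assumed numbers: constant 4.5 m/s² launch acceleration and roughly 10 m of oncoming lanes plus median to cross):

```python
import math

# Rough sanity check with assumed numbers: time spent crossing the
# oncoming lanes from a standstill, using s = 1/2 * a * t^2.
a = 4.5         # m/s^2, assumed launch acceleration of a quick EV
d_cross = 10.0  # m, assumed width of oncoming lanes plus median

t = math.sqrt(2 * d_cross / a)
print(f"exposed in the oncoming lanes for {t:.1f} s")  # ~2.1 s
```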
That's covered in what I wrote - we need to know 3 constants to figure out whether it's achievable or not.
 
As I have said before, I highly doubt that Tesla engineers strategically placed 8 cameras around the car, and have advertised for years now that the hardware can support full self-driving, but somehow forgot to check something as basic as whether the side cameras can see objects coming from, say, the 10 o'clock position. Maybe the hardware is not good enough for full self-driving, but the cameras not seeing a car from a specific angle won't be one of the reasons. Heck, the whole reason Tesla is not using LIDAR is because they made sure the car had 360-degree camera coverage instead.
 
You didn't "simplify" the math, you butchered it. Of course 3 seconds after the car has pulled out into fast moving traffic, it's not still going to be at the intersection traveling 0 mph. The way human drivers deal with this type of scenario, particularly if there are limited sight lines, is by accelerating harder. I fully expect FSD cars to do the same thing. If you're going to try to make a case that the hardware is not up to the task, you have to model the task accurately.

How do you think a truck with a human driver does it when sight lines are limited? A Tesla will have no problem. There are no slow Teslas, and Teslas do not get slower and more gutless with age like fossil cars and trucks do.

All this amounts to FUD (fear, uncertainty and doubt). Plant the seeds and they will grow, regardless of the truth of the matter. Humans are funny that way, which is why they tend to be terrible drivers. Ironically, most drivers will tell you they are God's gift to expert driving, LOL!

Accusations and dismissals as FUD are not helpful. The problem is that 3 seconds into the turn you may be moving, but you may still have little velocity in the direction you need to be traveling to avoid the collision. Often you need to cross half the intersection and clear a median before you can even start your left turn. That alone can take up most of the 2.8 seconds.

Humans have trouble with this too. That is why there are very few (none around me) unprotected left turns onto highways with 60 mph speed limits. An FSD car will presumably be much better at left turns than humans.

The answer here must lie in what has been said about relying on camera specs from a marketing diagram. We don't know what the true usable range of the cameras actually is.

Also, FSD may have to simply avoid high-speed unprotected left turns until the sensor suite can catch up. I have not seen any demos from Tesla of cars making unprotected left turns.
 
Also, FSD may have to simply avoid high-speed unprotected left turns until the sensor suite can catch up. I have not seen any demos from Tesla of cars making unprotected left turns.

And the lack of current demos, my friend, is not evidence that the hardware is not capable of making safe, unprotected left turns. In fact, no evidence has been presented that the hardware is not sufficient.

That's not an accusation or a dismissal, it's a fact.
 
And the lack of current demos, my friend, is not evidence that the hardware is not capable of making safe, unprotected left turns. In fact, no evidence has been presented that the hardware is not sufficient.

That's not an accusation or a dismissal, it's a fact.

What facts are you referring to that add to this discussion? It seems you are just dismissing any concerns because you hope it is so.

Your dismissal of the concern is that it is simply FUD.

Your accusation was that the math was butchered. Show how the math was butchered.

EDIT: I understand diplomat33's point that he believes Tesla would have figured it out. But I am legitimately trying to understand if there are other factual points that have been missed (assuming the camera specs are correct).
 
What facts are you referring to that add to this discussion? It seems you are just dismissing any concerns because you hope it is so.

Your dismissal of the concern is that it is simply FUD.

Your accusation was that the math was butchered. Show how the math was butchered.

EDIT: I understand diplomat33's point that he believes Tesla would have figured it out. But I am legitimately trying to understand if there are other factual points that have been missed (assuming the camera specs are correct).

The fact I was referring to was that no evidence has been presented that the hardware is incapable of making safe left turns into traffic. In fact, the hardware includes multiple cameras that provide a 360-degree view around the car at all times.

The math was butchered because it assumed the cross traffic would continue moving while the FSD car would remain at the intersection. The math failed to account for the fact that the FSD car is a moving target. That is not a "simplification", it's a complete disregard for the reality of the situation.

It is also important to note that the FSD car is not blindly entering moving traffic. It can see the oncoming traffic, just as a human does, as it prepares to merge, and it can adjust its rate of acceleration for a smooth merge. The difference is that the computer is not terrified.
 
Your accusation was that the math was butchered. Show how was the math butchered?
He can't because it's not.

Take the slowest FSD car: a Model X 60D, which can accelerate 0-100 km/h in 6.2 seconds. That equates to an acceleration of 4.5 m/s².

I'm going to be kind and assume that the Model X can turn 90 degrees in 0 seconds. Something that is impossible...

But anyway...

A car on the road is travelling at 100 km/h, which is 27.77 m/s.

Assume the oncoming car is 80.1 m away, just beyond detection range.

Solving for when the cars crash is pretty easy.

27.77t - 80.1 = (1/2)(4.5)t²

That gives the quadratic function 2.25t² - 27.77t + 80.1 = 0.

The cars crash 4.595 seconds after the Model X 60D decides it's all clear and turns.
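
For anyone who wants to check the numbers, a quick script using the same model and constants as above:

```python
import math

# Collision time from the post above: the turning car's position
# (1/2 * a * t^2) equals the oncoming car's position (v * t - gap),
# i.e. 2.25t^2 - 27.77t + 80.1 = 0.
a, v, gap = 4.5, 27.77, 80.1  # m/s^2, m/s, m

A, B, C = a / 2, -v, gap
disc = B * B - 4 * A * C
if disc < 0:
    print("no collision: the turning car stays ahead")
else:
    t = (-B - math.sqrt(disc)) / (2 * A)  # earliest crossing
    print(f"collision {t:.3f} s after pulling out")  # 4.595 s
```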

All my previous comments stand. This is the best-case scenario for the slowest FSD car. In the real world you need to factor in speeding drivers, traction (tire wear or rain), lower states of charge (reduced acceleration), etc. An 80 m range is not enough. Maybe the real range of the side camera is greater than Tesla has said, but I'd still question how accurate velocity detection is with a single camera at such distances.

Until then, maybe it's only enabled on P100Ds with >80% charge in sunny weather ;)
 
If you study the cameras and sensors on current Teslas, it would appear that FSD is not possible. Think about the car stopping and then having to pull out onto a highway where the speed is in excess of 55 mph; the car cannot see far enough to the left and right to make the decision and time the maneuver. Something else needs to come into play for that one task to occur. Watch the camera recordings from all cameras and you will understand. The radar and side sensors cannot detect that far.

So does the GPS just guide the car around these more difficult maneuvers? Find a way to turn right onto the highway and then do a U-turn down the road? That might be an option.
 
He can't because it's not.

Take the slowest FSD car: a Model X 60D, which can accelerate 0-100 km/h in 6.2 seconds. That equates to an acceleration of 4.5 m/s².

I'm going to be kind and assume that the Model X can turn 90 degrees in 0 seconds. Something that is impossible...

But anyway...

A car on the road is travelling at 100 km/h, which is 27.77 m/s.

Assume the oncoming car is 80.1 m away, just beyond detection range.

Solving for when the cars crash is pretty easy.

27.77t - 80.1 = (1/2)(4.5)t²

That gives the quadratic function 2.25t² - 27.77t + 80.1 = 0.

The cars crash 4.595 seconds after the Model X 60D decides it's all clear and turns.

All my previous comments stand. This is the best-case scenario for the slowest FSD car. In the real world you need to factor in speeding drivers, traction (tire wear or rain), lower states of charge (reduced acceleration), etc. An 80 m range is not enough. Maybe the real range of the side camera is greater than Tesla has said, but I'd still question how accurate velocity detection is with a single camera at such distances.

Until then, maybe it's only enabled on P100Ds with >80% charge in sunny weather ;)

All this also assumes an FSD car would do something no Autopilot car has done yet: operate at full throttle. In reality they go much slower.

A side-facing radar would be such an easy and obvious solution; too bad they left out the corner radars.
 
Not sure if the 80 m figure is still correct once Tesla switches from quarter resolution to full resolution, and once the network has been trained on unsupervised signals (i.e. not only on what human labellers can see in images, but on what actually happened in the future). My guess is that this should greatly improve object detection at longer distances. Also, in this paper SOTA (state-of-the-art) object detection went from 22% to 74% in just one algorithm improvement.
Relevant new papers
 
Also, in this paper SOTA (state-of-the-art) object detection went from 22% to 74% in just one algorithm improvement.
So the current SOTA has reached 74% at 30 m. Tesla needs to be at 99% at 100 m+.

I agree it's hard to know what the real range is if they can bump up the resolution. I think it may be beyond the current camera, and potentially impossible for a single camera at the speeds and ranges needed to avoid crashes at highway T intersections.

Add in a second camera and a radar and I'd think differently. The best part of the investor presentation is how they are using the radar to feed back into training the NN. I see great promise in a second camera facing the side, assuming there is enough horizontal separation to get effective stereo vision. Then a side radar for redundancy (or further training).
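
As a toy sketch of that radar-feedback idea (my own illustration of the concept with made-up numbers, not Tesla's actual pipeline): radar ranges act as free labels for calibrating a vision-only distance estimate, with no human labelling needed.

```python
import numpy as np

# Toy illustration of radar-supervised vision (not Tesla's pipeline):
# under a pinhole model, distance ~ k / pixel_height; fit k using noisy
# radar range measurements as free regression targets.
rng = np.random.default_rng(0)

true_k = 1500.0                                # assumed focal length * height
dist = rng.uniform(10, 80, size=500)           # true distances, m
px_height = true_k / dist                      # what the camera measures
radar = dist + rng.normal(0.0, 0.5, size=500)  # radar labels, ~0.5 m noise

# Least-squares fit of distance = k * (1 / px_height) against radar labels.
x = 1.0 / px_height
k_fit = float(x @ radar / (x @ x))

print(f"fitted k = {k_fit:.1f} (true {true_k})")
print(f"calibrated vision estimate at 30 px: {k_fit / 30:.1f} m")  # ~50 m
```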

A side-facing radar would be such an easy and obvious solution, too bad they left out the corner radars.
For now. HW4 seems likely. Tesla are going to realise that they are throwing away $30,000 per car each year they can't get Robotaxi working. Adding a few more sensors to fast-track Robotaxi and get complete (and redundant) coverage will become obvious, given the small additional cost of a few more radars and cameras.

I expect they'll figure this out this year. It's kind of hard to believe they haven't done the basic physics for two objects in motion to work out the true camera ranges they need. Now that Elon is there in the Autopilot team I have more hope. He's a first-principles kind of guy.

Shame about all the existing Model 3 owners with HW3 who thought they had an appreciating asset... They'll get a decent NoA for on-ramp to off-ramp, though.
 
Add in a second camera and a radar and I'd think differently. The best part of the investor presentation is how they are using the radar to feed back into training the NN. I see great promise with a second camera facing the side assuming there is enough horizontal separation to get effective stereo vision.
Stereoscopy is nearly useless at the kind of long distances discussed here (for comparison, there is evidence that the human vision system uses stereoscopy for depth perception only up to about 10-20 m distance). An additional camera with a longer focal length might be useful to detect distant objects (just like what they do with the front camera cluster), but it would also have a narrower view.
 
Stereoscopy is nearly useless at the kind of long distances discussed here (for comparison, there is evidence that the human vision system uses stereoscopy for depth perception only up to about 10-20 m distance). An additional camera with a longer focal length might be useful to detect distant objects (just like what they do with the front camera cluster), but it would also have a narrower view.
I was thinking it would make sense to place it in a position to cover the biggest blind spot in the current arrangement. I've noticed that if you are reverse-parked (for charging) and two vans or big SUVs park on either side, there is no way to see left/right until you've pulled out onto the road. You really need cameras up front, close to the bumper.

If you put a camera in the side of each headlight housing, it would take care of this. It would also give you about 2 m of horizontal separation from the B-pillar camera, which would be amazing for depth perception at range.
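
To put rough numbers on the baseline point, using the standard stereo error approximation ΔZ ≈ Z²·Δd/(f·B), with my own assumed focal length and disparity error:

```python
# Stereo depth-error sketch (assumed numbers, standard approximation):
# error grows with distance squared and shrinks with baseline B.
def depth_error(z_m, baseline_m, f_px=1000.0, disp_err_px=0.5):
    """Approximate depth uncertainty: dZ = Z^2 * disp_err / (f * B)."""
    return z_m ** 2 * disp_err_px / (f_px * baseline_m)

for baseline in (0.1, 2.0):  # roughly eye-scale vs headlight-to-B-pillar
    print(f"B = {baseline} m: +/- {depth_error(80.0, baseline):.1f} m at 80 m")
# ~32 m of uncertainty with a 0.1 m baseline, ~1.6 m with a 2 m baseline
```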
 
I was thinking it would make sense to place it in a position to cover the biggest blind spot in the current arrangement. I've noticed that if you are reverse-parked (for charging) and two vans or big SUVs park on either side, there is no way to see left/right until you've pulled out onto the road. You really need cameras up front, close to the bumper.
Perhaps they placed the side viewing cameras in the B pillar because that's close to the perspective of a human driver sitting in the car. Of course, the human eyes are mounted on a highly flexible gimbal that can lean forward if necessary. ;)
 
Perhaps they placed the side viewing cameras in the B pillar because that's close to the perspective of a human sitting in the car. Of course, the human eyes are mounted on a highly flexible gimbal that can lean forward if necessary. ;)
Indeed :) I should also note that when I've found myself in that situation, I wound down the window and listened for approaching cars while creeping out slowly. Maybe they should add an external microphone or two to the sensor package as well; dirt cheap and easy to process the signal. It might come in handy for emergency vehicles/sirens, and worst case it would be nice to have with the dashcam.
 
If you do the math then I agree FSD can never work with current cameras in all situations.

Consider a T intersection where you are on the side road waiting to join a straight highway.

The car would have to rely on the side cameras looking left and right, which have 80 m of range. If traffic on the highway is doing 100 km/h, that translates to cars travelling at 28 m/s.

This means that your Tesla has to pull out onto the highway and accelerate to 100 km/h (so as not to cause a crash) in under 2.85 seconds (80/28).

Teslas are fast but not that fast.
I don't believe there are too many T intersections with a 100 km/h speed limit. There are none in Norway AFAIK; the max speed limit for junctions like that is 80 km/h. If any other country has them, I believe they will disappear soon, as they're obviously not safe for humans or computers.

Still, supposing you do have an intersection like that, you still have 2 seconds traveling in the same direction to accelerate. An attentive driver on the highway would also reduce speed, so accidents should still not happen too often.
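
Plugging 80 km/h into the same collision model from earlier in the thread (same assumed 80 m detection range and 4.5 m/s² acceleration), the quadratic has no real roots, i.e. even the slowest FSD car stays ahead of the oncoming car:

```python
import math

# Same collision model as earlier in the thread, with 80 km/h traffic:
# 1/2 * a * t^2 = v * t - gap  ->  2.25t^2 - 22.22t + 80 = 0
a, v, gap = 4.5, 80 / 3.6, 80.0  # m/s^2, m/s, m

disc = v * v - 2 * a * gap  # discriminant of (a/2)t^2 - vt + gap = 0
if disc < 0:
    print("no collision: the turning car is never caught")  # this prints
else:
    print(f"collision at {(v - math.sqrt(disc)) / a:.2f} s")
```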