
How is FSD ever going to work with current camera layout?

I’ve been using FSD beta, which is both impressive and problematic. But the real question is: if FSD is going to use cameras only (radar and ultrasonics now removed), there’s a huge blind spot between the front windshield camera and the side fender cameras. At least half of my left and right turns require checking for cross traffic that neither camera can possibly capture. At a T intersection at a perfect 90-degree angle, the cameras can’t see cross traffic unless the car creeps out and angles itself significantly. That can never be a real solution; it’s too slow and dangerous. What do you guys think the solution will be?
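The creep-out geometry is easy to put rough numbers on. Below is a back-of-envelope Python sketch, with assumed (not measured) values for camera setback, corner obstruction, and required sight distance, of how far the nose must poke past a corner before a windshield-mounted camera can see down the cross street:

```python
# Back-of-envelope sight-line geometry for a 90-degree T intersection.
# Every number here is an assumption for illustration, not a Tesla spec.
# Model: the car travels north (+y); a wall on the left ends at the
# corner (-w, 0); cross traffic approaches in a lane centered L meters
# north of the wall face, D meters to the west.

def required_creep(camera_setback_m, w=3.0, L=2.5, D=50.0):
    """Meters the front bumper protrudes past the wall face before the
    camera clears the corner and can see a car D meters away.

    camera_setback_m: distance from the front bumper back to the camera
    w: lateral offset from the camera's path to the corner of the wall
    L: lane-center offset of cross traffic north of the wall face
    D: sight distance needed down the cross street
    """
    # The sight line from the camera at (0, y_cam) to the target at
    # (-D, L) just grazes the corner (-w, 0) when y_cam = -w*L / (D - w).
    y_cam = -w * L / (D - w)
    bumper = y_cam + camera_setback_m  # bumper position vs. wall face
    return max(bumper, 0.0)

# Assumed setbacks: ~2.0 m for a windshield camera, ~1.5 m for the
# driver's eyes, ~0.3 m for a hypothetical bumper-mounted camera.
for name, setback in [("windshield cam", 2.0), ("driver's eyes", 1.5),
                      ("bumper cam", 0.3)]:
    print(f"{name}: nose protrudes {required_creep(setback):.2f} m")
```

Under these assumptions the windshield camera needs the nose almost 2 m into the cross street before it can see 50 m down it, while a bumper-level camera would need a fraction of that, which is exactly the blind spot being described.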
Because there are more cameras than you appear to be aware of, and not all of them can be viewed through Dashcam, etc. As others have noted, there are B-pillar cameras, and there are actually three front-facing cameras, including a very wide-angle one.
 
Do you know how to solve it? If not, how do you know how long it will take?
Describing the problem != solving the problem.
City driving is super complex because of the exponentially higher number of unknowns compared to highway driving. For example, the density of cars is higher, and we humans use subtle signals to navigate, not always in accordance with the legal traffic rules. I am not sure how the AI will be able to do that. If all cars were autonomous it would be much easier, especially if they communicated with each other (see the sketch after this post).
Don’t forget, autonomous means fully independent: the car cannot simply “give up” and let the human take over. In fact, the only fully autonomous vehicles I can think of operate in substantially simplified environments.
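To make the “communicate with each other” idea concrete, here is a hypothetical Python sketch of a V2V intent broadcast. The message fields and wire format are invented for illustration; real deployments use standards such as SAE J2735 Basic Safety Messages over DSRC or C-V2X:

```python
# Hypothetical V2V intent broadcast. The IntentMsg fields, the port,
# and the JSON-over-UDP wire format are made up for illustration; this
# is not any real V2V standard.
import json
import socket
import time
from dataclasses import dataclass, asdict

@dataclass
class IntentMsg:
    vehicle_id: str
    lat: float
    lon: float
    speed_mps: float
    heading_deg: float
    intent: str        # e.g. "left_turn", "yield", "proceed"
    timestamp: float

def broadcast(msg: IntentMsg, port: int = 37020) -> None:
    """Broadcast the intent message on the local subnet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(json.dumps(asdict(msg)).encode(), ("255.255.255.255", port))
    sock.close()

broadcast(IntentMsg("car-42", 37.7749, -122.4194, 8.3, 90.0,
                    "left_turn", time.time()))
```

A car announcing "left_turn" before it moves would replace the subtle human signals mentioned above with an explicit one, which is why the all-autonomous case is so much easier.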
 
This is an NN Frogger that is being implemented. That is why it is taking a bit longer.
 
Planes can land/take off in zero visibility (wind, rain, and snow are bigger issues). However, they use radars, barometric altimeters, gyroscopes, compasses, and ground-based systems (e.g. ILS) to guide the plane. It is surprising how they don’t get confused by so much “sensor noise”.
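The reason all those noisy sensors don’t confuse the avionics is fusion: each input is weighted by its noise, so the combined estimate beats any single sensor. A minimal 1-D Kalman filter sketch, with made-up noise figures, shows the idea:

```python
# Minimal 1-D Kalman filter: why adding noisy sensors helps rather than
# confuses. Smooths a noisy barometric altitude feed; the noise figures
# (q, r) and the 5 m sensor sigma are made up for illustration.
import random

def kalman_altitude(readings, q=0.5, r=25.0):
    """q: process noise variance, r: measurement noise variance."""
    x, p = readings[0], r      # initial estimate and its variance
    estimates = []
    for z in readings:
        p += q                 # predict: uncertainty grows over time
        k = p / (p + r)        # Kalman gain: how much to trust the sensor
        x += k * (z - x)       # update the estimate toward the reading
        p *= (1.0 - k)         # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

true_alt = 300.0
noisy = [true_alt + random.gauss(0, 5) for _ in range(100)]
fused = kalman_altitude(noisy)
print(f"last raw reading: {noisy[-1]:.1f} m, fused: {fused[-1]:.1f} m")
```

The same principle applies to a multi-sensor car: the point is not that each sensor is clean, but that their errors are independent.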
 
I don’t believe it. If so, show me a zero-visibility day where the planes landed and took off without any ATC help, because that is what is expected of these cars. Oh… and these cars don’t have ILS to help. So what is going to lead and guide them?
 
My point exactly. Single-sensor reliance is questionable, at best. Also, landing an airplane in zero visibility is a much simpler task than driving on city streets.
 
Take a foggy day scenario on a winding mountain road.

A car with radar will be able to tell if there are any objects in the fog. But does that mean the car will be able to drive through the turns on that road with 100% confidence? Or would it just fall into the valley on the next turn?

In other words, if the car were equipped only with lidar and radar, would it work? Or does it still need vision to make the final determination on whether it is safe to proceed?
 
To be clear, I have concerns with two things: a single-sensor array, and Tesla’s confidence that it can solve autonomous city driving (regardless of the sensor array).

A winding mountain road is a much simpler scenario than driving in the city. Given the fog (which disables the cameras), radar (essentially, a single-pixel camera) will be superior at detecting obstacles. Usually, winding mountain roads have guardrails, i.e. obstacles that determine the road boundaries. Either multiple radars or (I hope) object persistence will be able to form a 3D picture of the road (see the sketch after this post; phased radar arrays are out of reach ATM, although there is some interesting research in that space). So, without a reliable feed from the cameras, relying on the radar(s) only, the car should be able to more or less “figure out” the road and avoid obstacles. There may be a way to detect the lane divider (more reflective than asphalt at lower frequencies?) but I have not heard of one. An IR sensor can definitely do that, and other manufacturers use them. So, a combination of radars and IR should be able to drive you in complete fog.

Going back to the multi-sensor array: it may be more expensive, but its capabilities are far superior to a single-sensor array’s.
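The “object persistence” idea above can be sketched in a few lines: integrate sparse radar returns over successive frames into a world-fixed occupancy grid, so even a low-resolution sensor traces out the guardrail line over time. The grid size, decay rate, and simulated returns are all assumptions:

```python
# Sketch of radar "object persistence": accumulate sparse (range,
# bearing) returns over many frames into a 2-D occupancy grid keyed by
# world coordinates. Cell size, decay, and the fake guardrail returns
# are assumptions for illustration.
import math

CELL = 0.5    # meters per grid cell
DECAY = 0.95  # old evidence fades so moving objects do not smear

def integrate(grid, returns, ego_x, ego_y, ego_heading):
    """Fold one frame of (range, bearing) returns into the world grid."""
    for key in list(grid):          # decay existing evidence
        grid[key] *= DECAY
        if grid[key] < 0.05:
            del grid[key]
    for rng, bearing in returns:    # sensor frame -> world frame
        wx = ego_x + rng * math.cos(ego_heading + bearing)
        wy = ego_y + rng * math.sin(ego_heading + bearing)
        cell = (int(wx // CELL), int(wy // CELL))
        grid[cell] = grid.get(cell, 0.0) + 1.0   # evidence accumulates

grid = {}
# Fake guardrail 3 m to the left, seen at a few ranges per frame while
# the car moves forward 1 m per frame for 20 frames.
for frame in range(20):
    returns = [(math.hypot(3.0, d), math.atan2(3.0, d))
               for d in (5.0, 10.0, 15.0)]
    integrate(grid, returns, ego_x=float(frame), ego_y=0.0, ego_heading=0.0)

strong = [cell for cell, v in grid.items() if v > 1.0]
print(f"{len(strong)} persistent cells trace the guardrail line")
```

Each individual return says only “something at this range and bearing,” but persistence turns that into a boundary a planner could follow.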
 
Reliance on guardrails is not the way to go. That said, what would the radar tell you? That there is something ahead. So what action will be determined? Turn left, turn right, or back up?
 
I am blown away by people’s denial here. I use it EVERY DAY. It does work in downtown, crazy-busy, high-pedestrian-volume areas. It works in Chinatown SF amazingly well. It works on winding mountain roads going 60 mph extremely well. It might slow a little more than I’d like around curves, down to around 45, but it successfully does it! (Where the speed limit is 50, so not a big deal.)

How can people still think vision doesn’t work?

What are you people not seeing? I’m so confused.
 