
FSD - roundabouts and T-junctions

I wonder if the reason FSD can't currently do roundabouts or T-junctions autonomously is because it doesn't have any sensors pointing in the right direction.

Imagine a T-junction: that's going to need cameras towards the nose of the car looking almost at right angles to the car (or, if the car turns slightly to the left, even further back). Likewise at a roundabout, it'll need to be looking to the right.

In both instances not just a few feet from the car, but a long long way out.
 
There are a few posts around that show the other camera views.

A quick look found this. The fisheye and pillar cameras have a very good view.

[Attached image: screenshot of the other camera views]
 
Entering a UK roundabout and relying on a computer to deal with the traffic, and not get rear-ended after an emergency stop, wouldn't be my choice.
On main roads and so on, the Tesla AP literally thrashes any competitor that I've tried (Audi, Kia, Lexus etc.) - it's utterly brilliant in those situations, and so relaxing it's untrue - but dealing with a busy roundabout would be a major step up.
 
There's one area of concern for me, and that's where you pull over into a central right-hand turn box as found on ordinary A roads.
I guess FSD will cope.
In ordinary AP, mine has a habit at one such box of wanting to dive in there even though I'm going straight on.
That, I think, is solely down to worn-out lane markings and the car trying to centre itself to the far line.
 
Hi both eight and Dilly... it's not so much the AI I'm worried about... I just don't think the hardware/cameras are there looking in the right directions. I may be wrong (And as said above, there are views we just don't get to see), but looking at the cameras I can physically see, I wouldn't rule it out.
 
I just don't think the hardware/cameras are there looking in the right directions.
It's been said many times, but currently human driving relies on two cameras mounted inside the car...!

Elon has been talking up the new "bleeding edge" alpha build he is currently testing, which should get a limited US roll-out next month... it's going to be very interesting to see it in action... Elon says it works... can't wait...!
 
It's been said many times, but currently human driving relies on two cameras mounted inside the car...!

I get that, but unlike the cameras in our car, ours are movable... So when I come up to this roundabout, what's looking to the right to see if there's fast-approaching traffic (the road is a 70 mph dual carriageway)?

Google Maps

Or more sedately, what's looking right, when you're turning left here:
Google Maps

Because I can move my body, head and eyes, I can see clearly around 270 degrees. The car doesn't have movable cameras, and I can't see how it has a decent view in either of those situations.

I'm not one for sales waffle, or promises. As I'm sure many do, I work with computers. AI is great, but if it doesn't have any input, it's not going to work.
 
Because I can move my body, head and eyes, I can see clearly around 270 degrees. The car doesn't have movable cameras, and I can't see how it has a decent view in either of those situations.

The car has a better field of view than your eyes, though. When the Autopilot rewrite comes it should be able to see and track objects over 360 degrees at the same time. Yes, there are some very near-field blind spots, but for most scenarios the missing vision is mostly covered by the ultrasonics.

My concern is not with the cameras' field of view, it's the cameras' ability to see things when conditions are poor, such as when misted up.

[Image: Tesla second-generation Autopilot sensor suite diagram]
 
Human driving relies on a stereoscopic vision system with excellent depth perception that can freely move within its glass container, so any obstructions relative to the vehicle can be reduced, whether that's mud on the glass or a vehicle or bollard next to the car that otherwise obscures the view. Imagine pulling up to a junction and finding a post blocking the vision in a Tesla at the key point? We've all had to lean forward to see around obstacles when driving, or shield our eyes from sunlight, or wind down a side window because of early morning dew, or or or...

The basic features are still a way off too. There's an interesting read about the NHTSA list of requirements and an assessment of where Tesla are against them; quite a few make you stop and think.

Tesla FSD and Feature Complete
 
My concern is not with the cameras' field of view, it's the cameras' ability to see things when conditions are poor, such as when misted up.

Thanks for the diagram, that certainly looks a lot better. As you say though, the weather isn't always ideal.

The basic features are still a way off too. There's an interesting read about the NHTSA list of requirements and an assessment of where Tesla are against them; quite a few make you stop and think.

Very interesting.

I think what drives this home is, as stated above, that this doesn't just have to work for one journey for one person. It has to work for all journeys for all people. That's a big, big ask... but still, the current AP lane following is a definite step up from TACC... so who knows what comes next. Still, I doubt this will manage Level 5 or robotaxis in its current form. It just takes a bit of mist settled on the cameras and it'll not be working.
 
Human driving relies on a stereoscopic vision system with excellent depth perception that can freely move within its glass container, so any obstructions relative to the vehicle can be reduced, whether that's mud on the glass or a vehicle or bollard next to the car that otherwise obscures the view. Imagine pulling up to a junction and finding a post blocking the vision in a Tesla at the key point?

The Autonomy Day last year discussed this, and the various techniques FSD will use so that the cameras can plot a 3D scene from a single camera feed.

The new system will stitch together 8 cameras from various viewpoints in real time, so it should be more reliable than human vision.
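
To make that a bit more concrete, here is a minimal sketch in Python (emphatically not Tesla's actual code) of the underlying geometry: once you have a depth estimate for a pixel, a detection from any fixed camera with a known mounting position can be back-projected into one vehicle-centred 3D frame, which is what lets the separate views be stitched together. The intrinsics, camera poses and depths below are all invented numbers.

import numpy as np

def pixel_to_vehicle(u, v, depth, K, cam_to_vehicle):
    """Back-project pixel (u, v) with an estimated depth (metres) into the vehicle frame."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray through the pixel, camera frame
    point_cam = np.append(ray * depth, 1.0)          # homogeneous 3D point, camera frame
    return (cam_to_vehicle @ point_cam)[:3]          # same point in the vehicle frame

# Shared, made-up intrinsics (fx = fy = 1000 px, principal point at image centre)
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

def pose(R, t):
    """Build a 4x4 camera-to-vehicle transform from a rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Vehicle frame: x forward, y left, z up.  Camera frame: x right, y down, z forward.
# Both mounting positions are purely illustrative.
forward_cam = pose(np.array([[0, 0, 1], [-1, 0, 0], [0, -1, 0]]), [2.0, 0.0, 1.3])
right_pillar = pose(np.array([[-1, 0, 0], [0, 0, -1], [0, -1, 0]]), [1.0, -0.9, 1.1])

# Two hypothetical detections, one per camera, fused into the same vehicle frame
print(pixel_to_vehicle(640, 360, 30.0, K, forward_cam))   # roughly 30 m dead ahead
print(pixel_to_vehicle(640, 360, 50.0, K, right_pillar))  # roughly 50 m out to the right

The hard part, presumably, is getting a reliable per-pixel depth estimate from a single camera in the first place, which is what the neural nets are being trained to do.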

My concern is not with the cameras' field of view, it's the cameras' ability to see things when conditions are poor, such as when misted up.

Yes, I can see a future where autonomous traffic systems could slow down or stop in severe weather.

It will be the biggest challenge.
 
It's a leap of faith to say stitching images together will make it better. I look at it this way, no pun intended: when you are stationary at a junction and trying to work out whether you can pull out, anything obscuring your view is obscuring your view. The cameras don't overlap sideways, so you're going to have blind spots. Tesla seem to be banking on a moving car building a map of what's around it, but that's only valid until something moves.

I've seen people question whether the sensor suite is a constraint. It was fixed 4 years ago; since then the processing hardware has been shown to be inadequate twice, there's a rumour they're working on HW4, and the software has been junked and rewritten more than once, yet we still hang on to the idea that the one thing they got right was the sensors?
 
I've seen people question whether the sensor suite is a constraint. It was fixed 4 years ago; since then the processing hardware has been shown to be inadequate twice, there's a rumour they're working on HW4, and the software has been junked and rewritten more than once, yet we still hang on to the idea that the one thing they got right was the sensors?

I think there's a fair chance that while the sensors may do a good job (perhaps more so on Elon's route to work), it's likely they'll need another overhaul or three before we get to autonomous all the time every time. Not least to work around obstructions and weather.
 
The cameras don't overlap sideways,

See VanillaAir_UK's posts #3 and #11; there does appear to be overlap across all cameras.

The other thing to remember is that with the rewrite it will plot movements not only in 3D but also in time, so it will know the direction, acceleration and velocity of an object; as such, the system should be able to anticipate where an object is and where it will be. A huge difference to today's Autopilot/FSD.

And it will do this for all objects continuously all around the vehicle; it should be interesting to see.
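
As a rough illustration of what tracking in time adds (again invented numbers, not Tesla's code): with position, velocity and acceleration in hand, even a simple constant-acceleration extrapolation lets the system anticipate where an object will be a moment from now.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    # Bird's-eye state in metres and seconds; x = ahead, y = left (illustrative only)
    x: float
    y: float
    vx: float
    vy: float
    ax: float
    ay: float

def predict(obj: TrackedObject, dt: float) -> TrackedObject:
    """Extrapolate the state dt seconds ahead assuming constant acceleration."""
    return TrackedObject(
        x=obj.x + obj.vx * dt + 0.5 * obj.ax * dt ** 2,
        y=obj.y + obj.vy * dt + 0.5 * obj.ay * dt ** 2,
        vx=obj.vx + obj.ax * dt,
        vy=obj.vy + obj.ay * dt,
        ax=obj.ax,
        ay=obj.ay,
    )

# Example: a car 40 m off to the right, closing at a steady 20 m/s (~45 mph)
approaching = TrackedObject(x=0.0, y=-40.0, vx=0.0, vy=20.0, ax=0.0, ay=0.0)
print(predict(approaching, dt=1.5))   # only about 10 m away 1.5 s later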
 
See VanillaAir_UK's posts #3 and #11; there does appear to be overlap across all cameras.

The other thing to remember is that with the rewrite it will plot movements not only in 3D but also in time, so it will know the direction, acceleration and velocity of an object; as such, the system should be able to anticipate where an object is and where it will be. A huge difference to today's Autopilot/FSD.

And it will do this for all objects continuously all around the vehicle; it should be interesting to see.

I'm looking, and sideways there's no overlap, which is what you need to turn out (feel free to point out what's duplicating the forward-looking side cameras, as I can't see it and might be mistaken). The only overlap is from about 10 till 2 o'clock - 12 being dead ahead - and directly behind (up to 50 m). Nothing between 2 and 4, or 8 and 10 o'clock.

And if you read what I said again... when stationary at a junction any blind spots can't be filled in. You may build a model of the road and static objects as you approach, but moving traffic comes and goes, and if you can't see it when you are stationary, and it either wasn't there as you approached the junction or it moved while in your blind spot, how are you going to know what's happened? I predict the headline "Tesla pulls out on cyclist" now.
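
Putting those clock positions into degrees makes the claimed gaps explicit - a quick sketch, where the sector boundaries are my reading of the diagram rather than measured camera fields of view:

def clock_to_deg(hour):
    """12 o'clock = 0 degrees (dead ahead), increasing clockwise."""
    return (hour % 12) * 30.0

# Claimed overlap: roughly 10 till 2 o'clock ahead, plus directly behind (to ~50 m).
# Claimed gaps in sideways overlap, as degree ranges:
gaps = [(clock_to_deg(2), clock_to_deg(4)),    # 60-120 degrees: the right-hand side
        (clock_to_deg(8), clock_to_deg(10))]   # 240-300 degrees: the left-hand side
print(gaps)   # [(60.0, 120.0), (240.0, 300.0)]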
 