Why is lane shifting so hard to figure out?

Today, mere meters after a sign informing me that the right lane ends ahead, the car decided to change lanes to the RIGHT to follow the route. I cancelled the change, of course. So, yeah, if only it could read more than just speed limit signs...
 
I can't imagine that Tesla is using only what it sees out of the cameras to navigate and make these decisions. They must be using a combination of the model created from what the cameras see and the model from maps of where it is trying to go. The next step I would take, if I were Tesla, would be to improve local maps and lane-usage definitions by having each car send that info back to the mother ship for encoding into which lane to use.

My biggest problem is that in my area, a lane will frequently go from a thru lane to a turn-only lane. The Tesla gets into one of these lanes and then panics when it has to turn, because it's in a right-turn-only lane yet needs to go straight. It seems like the system could better understand lane definitions if cars fed "wrong" map info back to Tesla for incorporation into its map model, so the car knows it has to get out of a lane to continue its assigned plan. It would be even better if the car asked the mother ship for "lane updates" as it was driving, but I bet that would be a big change.
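To make the idea concrete, here's a minimal sketch of the kind of lane-correction report a car could upload. Every name and field here is invented for illustration; this is not Tesla's actual telemetry:

```python
# Hypothetical sketch of a lane-correction report a car could upload when the
# map's lane model disagrees with what the cameras actually saw. All names and
# fields are made up for illustration; this is not Tesla's telemetry format.
from dataclasses import dataclass, asdict
import json

@dataclass
class LaneCorrection:
    road_segment_id: str   # map segment where the mismatch occurred
    lane_index: int        # 0 = leftmost lane
    map_says: str          # lane use according to the onboard map, e.g. "thru"
    camera_saw: str        # lane use read from markings/signs, e.g. "right_turn_only"
    gps_lat: float
    gps_lon: float
    heading_deg: float

report = LaneCorrection(
    road_segment_id="seg_12345",
    lane_index=2,
    map_says="thru",
    camera_saw="right_turn_only",
    gps_lat=44.9778,
    gps_lon=-93.2650,
    heading_deg=90.0,
)

# The car would queue this for upload; the mother ship would only update the
# shared map after enough independent cars report the same mismatch.
print(json.dumps(asdict(report), indent=2))
```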

When I drive, I know these lane configs from experience (not by "reading the road" every time) and make lane changes early.

One final issue: I live in a rural area where roads often turn 90 degrees left and then another 90 right because of property boundaries. The Tesla sometimes takes these turns comfortably, but most frequently it comes in far too fast. It should know that a sharp turn is coming up and slow down appropriately, and I would think it would have to know from a map that a sharp turn is ahead.
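The physics here is simple, which is what makes the behavior puzzling. A minimal sketch, assuming the map provides the road centerline as points in meters (the functions and the 0.3 g comfort limit are my assumptions, not anything from Tesla's planner):

```python
# Minimal sketch: estimate a safe speed for an upcoming curve from map geometry.
# Assumes the map provides the road centerline as (x, y) points in meters.
# This is textbook physics (v = sqrt(a_lat * R)), not Tesla's actual planner.
import math

def turn_radius(p1, p2, p3):
    """Radius of the circle through three consecutive centerline points."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Twice the triangle's area, via the cross product.
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    if abs(cross) < 1e-9:
        return float("inf")  # points are collinear: straight road
    return (a * b * c) / (2 * abs(cross))

def safe_speed_mph(radius_m, max_lateral_g=0.3):
    """Comfortable speed for a curve of the given radius."""
    if math.isinf(radius_m):
        return float("inf")
    v_ms = math.sqrt(max_lateral_g * 9.81 * radius_m)
    return v_ms * 2.237  # m/s -> mph

# A 90-degree rural corner with roughly a 20 m radius:
r = turn_radius((0, 0), (14.1, 5.9), (20, 20))
print(f"radius ~{r:.0f} m, comfortable entry ~{safe_speed_mph(r):.0f} mph")
```

The point is that a sharp corner's comfortable entry speed (~17 mph above) is computable hundreds of meters in advance from map geometry alone, long before the cameras can see the apex.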

I won't start on roundabouts....
 
The major missing piece is quality map data. I believe Tesla doesn't buy the expensive but high-quality, detailed maps that others use, and they haven't yet programmed their fleet to collect the data to make such maps internally. They could do so, but it would be a major, difficult, long-term software project. They do have the unique advantage (though they will face competition) of many miles of supervised, human-driven operation, which they could process and use as a baseline policy goal in well-populated areas.

You are correct that humans drive with internal maps and experience, while the current driving software has nearly zero memory, like the main character in Memento. It seems to be recomputing and re-estimating everything every second, and that results in uncertain, fluctuating driving policies. Its 'think ahead' horizon feels like 5 seconds or less.

I just activated FSD on my car, and I'm surprised how bad the performance is. But, at least judging from the display, the problem doesn't seem to be perception, so all the wailing about missing radar may be beside the point (though we really don't know whether subtle perception errors or perception noise are causing the policy problems). And from their AI Day presentations, I'm pretty impressed with the recent technology they've adopted for vision, which is as close to state-of-the-art research as one can expect in a practical, commercializable system. It's using full 3-D image processing (neural radiance fields and occupancy networks), techniques invented in academia only within the last few years.
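For anyone unfamiliar with the term, an occupancy network is conceptually just a learned function from a 3-D point to the probability that the point contains something solid. This toy stands in for the neural network with a hard-coded box, purely to show the interface; it is not Tesla's model:

```python
# Conceptual interface of an occupancy network: 3-D points in the car's frame
# map to occupancy probabilities. A hard-coded box replaces the learned model
# here purely for illustration.
import numpy as np

def occupancy(points):
    """points: (N, 3) array of (x, y, z) in meters -> (N,) occupancy in [0, 1]."""
    # Pretend a parked car occupies a 2 m x 4 m x 1.5 m box ahead and to the right.
    lo = np.array([1.0, 8.0, 0.0])   # (x = lateral, y = forward, z = up)
    hi = np.array([3.0, 12.0, 1.5])
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    return inside.astype(float)

# Query a coarse voxel grid of the kind a planner could steer around:
xs, ys, zs = np.meshgrid(np.arange(-5, 5, 0.5), np.arange(0, 20, 0.5), [0.75])
grid = np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1)
occ = occupancy(grid)
print(f"{int(occ.sum())} of {len(grid)} voxels occupied")
```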

But the driving policy needs tons of work. This is where Waymo and others that used expensive hardware for direct physical perception have an advantage: they've been able to work on driving policy for a decade, and it shows. Unlike visual perception, driving policy has no wide base of free academic work and example code to draw from.

I also think that Tesla will eventually discover what everyone else already assumes: a robotaxi service will need to lean heavily on curated, semantically detailed, validated maps, far beyond OpenStreetMap. And I think the hardware used by autonomous driving systems will converge on the same solution of mostly vision plus imaging radar, with less and less use of lidar.
 
But people drive in areas where they have no prior familiarity and without consulting a map in real time. How do they change lanes correctly?

1) They don't. Why can people spot tourist driving so easily?
2) They watch what other cars do. Humans can also see clearly farther ahead than current low-resolution cameras (1280x960); 4K-8K cameras are a minimum to equal human vision (see the rough arithmetic after this list).
3) They have an intuitive sense of how road engineers typically mark lanes and signs. This is local knowledge: what works in Los Angeles doesn't work in Italy, much less Delhi.
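Rough arithmetic behind point 2, assuming roughly a 50-degree horizontal field of view for the main camera (actual FOVs vary by camera, so treat this as an order-of-magnitude check, not a spec):

```python
# Back-of-envelope comparison of camera vs. human angular resolution.
# Assumes ~50 degrees horizontal FOV; the exact figure varies by camera.
fov_deg = 50
human_arcmin = 1.0  # ~20/20 vision resolves about 1 arcminute of detail

for width_px in (1280, 3840, 7680):  # current, 4K, 8K
    arcmin_per_px = fov_deg * 60 / width_px
    print(f"{width_px:>5} px wide: {arcmin_per_px:.2f} arcmin/pixel "
          f"(human fovea ~{human_arcmin} arcmin)")

# 1280 px -> ~2.3 arcmin/pixel, well short of human foveal acuity.
# 4K roughly reaches pixel parity; 8K leaves margin, which matters because
# Nyquist sampling wants ~2 pixels per resolved detail.
```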

Conceivably Tesla could gain a great advantage by instrumenting their human-driven cars: track the routes those cars take and find the average path from 'here' to 'there' (say, a quarter mile ahead), where the 'there' matches the route needed by the navigation. You learn where, on average, people change lanes and what their typical speed is. These crowdsourced route suggestions would be inputs to the driving planner, a good goal to follow unless clear safety issues prevent it.

This could conceivably give better-than-human performance, because it would use advance knowledge of what other humans do in locations the car has never seen, as if we had a hive mind.
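A toy version of that aggregation might look like the sketch below; the trace format, segment keys, and numbers are invented for illustration:

```python
# Toy sketch of crowdsourcing lane-change behavior from human-driven traces.
# A "trace" is a list of (distance_along_segment_m, lane_index) samples for
# one car approaching a known turn. The format is invented, not anything real.
from statistics import median

def lane_change_points(trace):
    """Distances at which this driver changed lanes."""
    return [d for (d, lane), (_, prev) in zip(trace[1:], trace) if lane != prev]

def crowd_lane_change_distance(traces):
    """Median distance into the segment at which humans start the lane change."""
    points = [p for t in traces for p in lane_change_points(t)]
    return median(points) if points else None

# Three hypothetical drivers approaching the same right turn, sampled every
# ~100 m; lane 0 = left lane, lane 1 = right (turn) lane.
traces = [
    [(0, 0), (100, 0), (200, 1), (300, 1), (400, 1)],
    [(0, 0), (100, 1), (200, 1), (300, 1), (400, 1)],
    [(0, 0), (100, 0), (200, 0), (300, 1), (400, 1)],
]
print(f"typical lane-change point: {crowd_lane_change_distance(traces)} m into segment")
```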
 
There are definitely issues with the planner that they are working on. The reason I think the maps may not be the whole problem is that the navigation route info already shows which lane you need to be in for the upcoming turn. How often are those lane selections in the route info wrong? Then there's the planner, which should be moving the car into the correct lane based on that route info. There's a disconnect between the two.
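The missing glue logic is almost statable as pseudocode. This sketch is pure speculation about the structure, not Tesla's code:

```python
# Speculative sketch of the glue between navigation lane hints and the planner:
# if the nav layer already knows the target lane, any mismatch should become a
# lane-change goal well before the turn. Structure guessed for illustration.

def lane_goal(current_lane, nav_target_lanes, distance_to_turn_m):
    """Return (lane to work toward, urgency in [0, 1]); (None, 0) if already OK."""
    if current_lane in nav_target_lanes:
        return None, 0.0  # the nav display already shows we're in a good lane
    target = min(nav_target_lanes, key=lambda lane: abs(lane - current_lane))
    # Humans change lanes early: ramp urgency up as the turn approaches
    # (the 800 m lead-in is an arbitrary, tunable choice).
    urgency = max(0.0, min(1.0, 1.0 - distance_to_turn_m / 800.0))
    return target, urgency

# Car in lane 0, upcoming turn requires lane 2 or 3, turn 250 m away:
print(lane_goal(0, {2, 3}, 250.0))  # -> (2, 0.6875)
```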
 
I agree that there's lots of improvement needed. I'm surprised how far behind the planner is.

Still, though, the on-screen maps/planner are fundamentally wrong near my house, leading to a more circuitous route than necessary, and I have no idea how to get that fixed. I even looked at OpenStreetMap but didn't see any errors. Neither Google Maps nor Apple Maps has this error.

But the maps I'm thinking about would carry soft information in addition: not just hard lane lines, but data about what people actually do on a given route and how to make the driving smooth and human-like. They would include typical speeds depending on traffic and weather, and typical lane-change locations, information that can't be gathered by aerial cameras or municipal authorities.
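As a sketch of what one entry in such a "soft map" might hold (every field here is invented to make the idea concrete; no real map schema):

```python
# Illustrative "soft map" entry layered on top of hard lane geometry.
# All fields are invented to make the idea concrete, not a real schema.
from dataclasses import dataclass, field

@dataclass
class SoftSegmentInfo:
    segment_id: str
    # Observed typical speeds (mph), keyed by (traffic, weather) condition.
    typical_speed_mph: dict = field(default_factory=dict)
    # Distances into the segment where drivers usually change lanes for the turn.
    lane_change_distances_m: list = field(default_factory=list)

entry = SoftSegmentInfo(
    segment_id="seg_67890",
    typical_speed_mph={
        ("free_flow", "dry"): 44,
        ("free_flow", "rain"): 38,
        ("heavy", "dry"): 23,
    },
    lane_change_distances_m=[180, 210, 250],  # crowd-observed, not painted rules
)

# A planner could look up the matching condition and blend it with live sensing:
print(entry.typical_speed_mph[("free_flow", "rain")], "mph typical in rain")
```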