Yes!!! Enhanced summon is on its way.....

Enhanced Summon should be interesting. How useful it will be is yet to be determined. I haven’t even “discovered” the firmware version that it is in.

The next upgrade is always oversold, overhyped and under-appreciated. This one might be a serious step forward or not. I will have to see for myself how I can use the technology, but that will be in a month or two depending on rollout.

An improvement to autoparking in a home garage is what I’m looking forward to. Summoning it out of the garage into the street would be startling; we’ll soon see.
 
I think one reason is that some of the issues we see run much deeper and will not be fixed by HW 3.0. Autopilot does not have a memory (at least in many cases). Tesla can use its fleet data to train the cars, but that is different from remembering specific intersections and road configurations. Many human decisions are based on clearly remembering the details of a given road. I can understand there might be reasons to do it this way, but it obviously limits how ‘smart’ Autopilot is.

Autopilot can become good enough to figure things out correctly on the very first try, but some specific decisions humans can make will be off limits.
 
I think one reason is that some of the issues we see run much deeper and will not be fixed by HW 3.0. Autopilot does not have a memory (at least in many cases). Tesla can use its fleet data to train the cars, but that is different from remembering specific intersections and road configurations. Many human decisions are based on clearly remembering the details of a given road. I can understand there might be reasons to do it this way, but it obviously limits how ‘smart’ Autopilot is.

Agree, but isn’t the point of the current approach that the fleet develops a memory based on the experience of not one but many cars? Aren’t these the new types of neural nets that HW3 will enable?
 
Enhanced Summon should be interesting. How useful it will be is yet to be determined. I haven’t even “discovered” the firmware version that it is in.

The next upgrade is always oversold, overhyped and under-appreciated. This one might be a serious step forward or not. I will have to see for myself how I can use the technology, but that will be in a month or two depending on rollout.

An improvement to autoparking in a home garage is what I’m looking forward to. Summoning it out of the garage into the street would be startling; we’ll soon see.


I want autoparking in my home garage and auto-plugging of the charger :)
 
Agree, but isn’t the point of the current approach that the fleet develops a memory based on the experience of not one but many cars?

I believe that this is not exactly true. Tesla can use road data to train a car to drive through specific intersections, but the car may not know it is at exactly the same intersection. So the knowledge is more generalized, and not tied to the very specific decisions a human can make for THAT SPECIFIC intersection.

And Tesla could obviously change that; they just decided not to.
 
I believe that this is not exactly true. Tesla can use road data to train a car to drive through specific intersections, but the car may not know it is at exactly the same intersection.

I see. I wondered if it were possible for them to train AP based on watching the human driver in shadow mode. Once you see how 1,000 humans behave at this intersection, you train the fleet to do the same. Maybe I’m expecting too much?
 
I see. I wondered if it were possible for them to train AP based on watching the human driver in shadow mode. Once you see how 1,000 humans behave at this intersection, you train the fleet to do the same. Maybe I’m expecting too much?

I think they decided it is not worth the effort at this stage. It would require specific data to be downloaded/stored per intersection or road segment. Right now the neural nets are just trained for all roads, and there is no need to download anything.
 
What I’m saying may not be 100% correct, but the general idea should be this: Autopilot is trained to drive on all roads at any time without needing to download extra data. Humans train themselves according to their individual preferences for specific road segments. Now, can Tesla remember how you prefer to drive certain roads? Yes, they can. But my point is that they decided not to go there, at least not yet. This is not a feature on their list, I think.
 
I see. I wondered if it were possible for them to train AP based on watching the human driver in shadow mode. Once you see how 1,000 humans behave at this intersection, you train the fleet to do the same. Maybe I’m expecting too much?
This has nothing to do with "shadow mode" (which probably doesn't exist in the form that many imagine). But what Tesla can do is record and analyze route segment data that the cars upload (if the respective privacy option is enabled). It basically records the path of all participating vehicles. This is used to update the real-time traffic data that you can see on the map (if cars move slowly in a certain location, there is congestion), but it can also be used to develop mapping data (including the lanes that cars take).

You could also run analytics on this data, e.g. to find out what lane a driver typically takes when they are using a specific interchange to go in a specific direction, and based on that add hints to the mapping data that the cars periodically download. That is a form of fleet learning in the cloud, based on data uploaded by the cars.
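As a very rough sketch of what that kind of cloud-side aggregation could look like (the field names, values, and the whole pipeline here are invented for illustration, not anything Tesla has confirmed):

```python
from collections import Counter, defaultdict

# Hypothetical shape of an uploaded route segment: which interchange the car
# passed through, which lane it used, and where it was ultimately headed.
segments = [
    {"interchange": "I-80_exit_21", "lane": 2, "destination": "CA-13_south"},
    {"interchange": "I-80_exit_21", "lane": 2, "destination": "CA-13_south"},
    {"interchange": "I-80_exit_21", "lane": 1, "destination": "CA-13_south"},
    # ... millions more segments uploaded by the fleet
]

# Count which lane drivers actually use for each (interchange, destination) pair.
lane_counts = defaultdict(Counter)
for seg in segments:
    lane_counts[(seg["interchange"], seg["destination"])][seg["lane"]] += 1

# Turn the most common choice into a map "hint" the cars could download later.
map_hints = {
    key: counts.most_common(1)[0][0]  # the lane most drivers picked
    for key, counts in lane_counts.items()
}

print(map_hints)  # {('I-80_exit_21', 'CA-13_south'): 2}
```

The point is just that this kind of learning would live in the cloud as aggregated hints, rather than inside an individual car's neural net.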
 
I see. So only route/location data is recorded, as opposed to behavior. I agree some people are bad drivers, but looking at the data in aggregate, if 80% of drivers do X then maybe it can be considered normal or safe.
For example, what do the majority of drivers do when gaining on a truck in an adjacent lane that is slightly encroaching on your lane? They [presumably] check their other side view and veer to the outside of their own lane as they pass...
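Just to make the "80% of drivers" idea concrete, here is a toy version of that kind of aggregate check (the observation format and thresholds are made up):

```python
# Toy fleet observations: did the driver shift toward the outside of the lane
# while passing a truck that was encroaching from the adjacent lane?
observations = [
    {"truck_encroaching": True, "lateral_shift_m": 0.35},
    {"truck_encroaching": True, "lateral_shift_m": 0.40},
    {"truck_encroaching": True, "lateral_shift_m": 0.05},
    {"truck_encroaching": True, "lateral_shift_m": 0.30},
    {"truck_encroaching": True, "lateral_shift_m": 0.45},
]

SHIFT_THRESHOLD_M = 0.2  # how far over counts as a deliberate shift
NORMAL_FRACTION = 0.8    # the "80% of drivers" cutoff from the post above

shifted = [o for o in observations if o["lateral_shift_m"] >= SHIFT_THRESHOLD_M]
fraction = len(shifted) / len(observations)

if fraction >= NORMAL_FRACTION:
    print(f"{fraction:.0%} of drivers shift outward -- looks like the norm")
else:
    print(f"only {fraction:.0%} shift outward -- not clearly the norm")
```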
 
Also, to add to the above: maybe it is not such a good idea to mimic our behavior too much. Many people are bad drivers ))) Tesla is trying to design a universal and safe driver. You may not like some of its decisions )))

This philosophy would work pretty well if every car on the road were autonomous. Then all of them could drive very safely and coordinate.

But if the autonomous vehicle is forced to drive and share the road with "bad" human drivers, then there is a certain minimum driving style that is necessary to maximize safety. If everyone else on the freeway is going 80 MPH, it's not safe for your car to go 65 MPH, even though that might be the actual speed limit. The autonomous car is going to have to be programmed with some leeway to allow for co-mingling with less-than-stellar human drivers.

Human drivers will be following too close, not signalling lane changes or exits, speeding, turning from the wrong lane, etc. Good human drivers can see such situations developing, prepare for them, and expect the inevitable results. The autonomous car will have to do the same to achieve the safety level that the technologies are promising.

If I'm driving along a neighborhood street and up ahead I see kids playing soccer in a yard, I instinctively slow down because I'm preparing for the soccer ball to end up in the street in front of the car. I'm doubly-prepared to brake if that happens, even if it's just the ball and not one of the kids that runs into the street. What kind of analysis does the autonomous car have to do in order to prepare for that? Do we have to have a neural network running on the wide-angle camera looking for soccer balls?

In my opinion, making the autonomous car drive is one thing, but making it drive well is much, much harder.
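To put the soccer-ball scenario in concrete terms, the anticipation described above could amount to lowering the car's speed cap whenever certain object classes show up near the road edge. A purely illustrative sketch (the classes, distances, and numbers are all assumptions):

```python
def anticipatory_speed_limit(detections, posted_limit_mph):
    """Reduce the target speed when objects that often precede a road
    incursion (a ball, children playing) are seen near the roadway.
    Illustrative only -- the classes and margins are invented."""
    CAUTION_CLASSES = {"ball", "child", "dog"}
    target = posted_limit_mph
    for obj in detections:
        if obj["cls"] in CAUTION_CLASSES and obj["dist_from_road_edge_m"] < 5.0:
            # Cap speed low enough to stop if something rolls into the street.
            target = min(target, 15)
    return target

# A ball spotted 2 m from the road edge: slow down before anything happens.
detections = [{"cls": "ball", "dist_from_road_edge_m": 2.0}]
print(anticipatory_speed_limit(detections, posted_limit_mph=25))  # -> 15
```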
 
I see. So only route/location data is recorded, as opposed to behavior. I agree some people are bad drivers, but looking at the data in aggregate, if 80% of drivers do X then maybe it can be considered normal or safe.
For example, what do the majority of drivers do when gaining on a truck in an adjacent lane that is slightly encroaching on your lane? They [presumably] check their other side view and veer to the outside of their own lane as they pass...

Tesla should be able to teach AP to do that. It is not the same “memory” issue. I was thinking about that too. I have a suspicion that they decided keeping the car centered in its current lane is better behavior in general. AP can quickly change lanes to avoid a collision, and staying centered also avoids forcing cars in the adjacent lanes to move to the outside of their own lanes. This can be an example of human behavior which should NOT be imitated.
 
For example, what do the majority of drivers do when gaining on a truck in an adjacent lane that is slightly encroaching on your lane? They [presumably] check their other side view and veer to the outside of their own lane as they pass...
This is something that they could theoretically train a neural network to do using a technique called reinforcement learning. But a safer technique might be not to just blindly veer over if there is a truck in the neighboring lane, but react to a distance measurement by the ultrasonic sensors or perhaps the camera (this is one area where HW3 could potentially help by making the vision-based object localization more accurate).
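A simple non-learned version of that reactive behavior might look something like this (the inputs, thresholds, and function are hypothetical, just to show the idea of biasing lane position based on measured clearance):

```python
def lane_offset_from_side_clearance(left_clearance_m, right_clearance_m,
                                    min_clearance_m=1.0, max_offset_m=0.3):
    """Return a lateral offset from lane center (positive = move right).
    If the measured clearance on one side drops below min_clearance_m,
    shift away from that side, capped at max_offset_m. Hypothetical sketch."""
    offset = 0.0
    if left_clearance_m < min_clearance_m:
        offset += min_clearance_m - left_clearance_m   # truck close on the left -> move right
    if right_clearance_m < min_clearance_m:
        offset -= min_clearance_m - right_clearance_m  # truck close on the right -> move left
    return max(-max_offset_m, min(max_offset_m, offset))

# Truck encroaching from the left, measured 0.6 m away: bias ~0.3 m to the right.
print(lane_offset_from_side_clearance(left_clearance_m=0.6, right_clearance_m=2.5))
```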
 
We should understand that FSD will be a different type of driver (from humans), but still safe and predictable. It will likely even affect how humans drive. For instance, I will be less afraid to be in the blind spot of an FSD car driving automatically (not that I should be doing it, but still). I believe people will eventually start to appreciate this.
 
This philosophy would work pretty well if every car on the road were autonomous. Then all of them could drive very safely and coordinate.

But if the autonomous vehicle is forced to drive and share the road with "bad" human drivers, then there is a certain minimum driving style that is necessary to maximize safety. If everyone else on the freeway is going 80 MPH, it's not safe for your car to go 65 MPH, even though that might be the actual speed limit. The autonomous car is going to have to be programmed with some leeway to allow for co-mingling with less-than-stellar human drivers.

Human drivers will be following too close, not signalling lane changes or exits, speeding, turning from the wrong lane, etc. Good human drivers can see such situations developing, prepare for them, and expect the inevitable results. The autonomous car will have to do the same to achieve the safety level that the technologies are promising.

If I'm driving along a neighborhood street and up ahead I see kids playing soccer in a yard, I instinctively slow down because I'm preparing for the soccer ball to end up in the street in front of the car. I'm doubly-prepared to brake if that happens, even if it's just the ball and not one of the kids that runs into the street. What kind of analysis does the autonomous car have to do in order to prepare for that? Do we have to have a neural network running on the wide-angle camera looking for soccer balls?

In my opinion, making the autonomous car drive is one thing, but making it drive well is much, much harder.

A lot of the power of anticipation is in reducing response time. The computer could make up for the anticipation disadvantage through the ability to respond faster than a reactive human.
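Rough numbers make that point: at freeway speed, shaving a second off the response time is worth tens of meters of stopping distance (the latencies below are assumptions for the arithmetic):

```python
# Distance traveled before any response begins, at 65 mph.
speed_mps = 65 * 0.44704        # 65 mph is about 29.1 m/s

human_reaction_s = 1.5          # a typical human perception-reaction time (assumed)
computer_reaction_s = 0.1       # assumed end-to-end system latency

print(round(speed_mps * human_reaction_s, 1))     # ~43.6 m before a human reacts
print(round(speed_mps * computer_reaction_s, 1))  # ~2.9 m for the computer
```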