I thought this thread was about enhanced summon....?
Enhanced Summon and FSD are going to merge one day.
It might as well be now - in this conversation.
I think one of the reasons we see some of these issues is that they run much deeper and will not be fixed with HW 3.0. Autopilot does not have a memory (at least in many cases). Tesla can use their fleet data to train cars, but that is different from remembering specific intersections and road configs. Many human driving decisions are based on clearly remembering the details of a given road. I can understand there might be reasons to do it this way, but it obviously limits how ‘smart’ Autopilot is.
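To make the distinction concrete, here's a toy sketch (my own illustration; every name and value in it is made up, and it's obviously nothing like Tesla's actual architecture) of a fleet-trained general policy versus one that also keeps a per-location memory:

```python
# Toy sketch (not Tesla's architecture): a generalizing policy vs. one
# with a per-location memory. Everything here is hypothetical.

intersection_memory = {
    # location -> quirk remembered from past visits to this exact spot
    (37.7749, -122.4194): "faded lane lines; hug the left edge",
}

def general_policy(scene_features):
    """A generalizing model: same features -> same decision, anywhere."""
    return "proceed slowly" if "occluded view" in scene_features else "proceed"

def memory_policy(scene_features, location):
    """The same model, plus an adjustment keyed to the specific place."""
    decision = general_policy(scene_features)
    quirk = intersection_memory.get(location)
    if quirk:
        decision += f" ({quirk})"
    return decision

print(general_policy(["occluded view"]))                       # proceed slowly
print(memory_policy(["occluded view"], (37.7749, -122.4194)))  # ...plus the remembered quirk
```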
Enhanced Summon should be interesting. How useful it will be is yet to be determined. I haven’t even “discovered” the firmware version that it is in.
The next upgrade is always oversold, overhyped and under-appreciated. This one might be a serious step forward or not. I will have to see for myself how I can use the technology, but that will be in a month or two depending on rollout.
An improvement to autoparking in a home garage is what I’m looking forward to. Summoning it out of the garage into the street would be startling; we’ll soon see.
I want autoparking in the home garage and auto-plugging of the charger
Agree, but isn’t the point of the current approach that the fleet develops a memory based on the experience of not one but many cars?
I believe that this is not exactly true. Tesla can use road data to train a car to drive through specific intersections, but the car may not know that it is at exactly the same intersection.
Are you sure that's not an anti-aircraft gun?
I see. I wondered if it were possible for them to train AP based on watching the human driver in shadow mode. Once you see how 1,000 humans behave at this intersection then you train the fleet to do the same. Maybe I’m expecting too much?
This has nothing to do with "shadow mode" (which probably doesn't exist in the form that many imagine). But what Tesla can do is record and analyze the route segment data that the cars upload (if the respective privacy option is enabled). It basically records the path of every participating vehicle.

This is used to update the realtime traffic data that you can see on the map (if cars move slowly in a certain location, there is congestion), but it can also be used to develop mapping data (including which lanes cars take). You could also run analytics on this data, e.g. to find out which lane a driver typically takes when using a specific interchange to go in a specific direction, and based on that add hints to the mapping data that the cars periodically download. That is a form of fleet learning in the cloud, based on data uploaded by the cars.
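To illustrate the kind of cloud-side analytics I mean, here's a rough sketch. The record format, the interchange name, and the hint format are all invented for illustration, not anything Tesla has published:

```python
# Rough sketch of cloud-side "fleet learning" over uploaded route segments.
# Data shapes are invented; each record says: at this interchange, heading
# in this direction, the car used this lane.
from collections import Counter, defaultdict

uploads = [
    {"interchange": "I-80/CA-13", "direction": "eastbound", "lane": 2},
    {"interchange": "I-80/CA-13", "direction": "eastbound", "lane": 2},
    {"interchange": "I-80/CA-13", "direction": "eastbound", "lane": 1},
    {"interchange": "I-80/CA-13", "direction": "westbound", "lane": 3},
]

# Count which lane drivers actually take for each interchange + direction.
lane_counts = defaultdict(Counter)
for rec in uploads:
    lane_counts[(rec["interchange"], rec["direction"])][rec["lane"]] += 1

# The most common lane becomes a hint the cars could download with map data.
map_hints = {key: counts.most_common(1)[0][0]
             for key, counts in lane_counts.items()}
print(map_hints)
# {('I-80/CA-13', 'eastbound'): 2, ('I-80/CA-13', 'westbound'): 3}
```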
Also, to add to the above: maybe it is not such a good idea to mimic our behavior too much. Many people are bad drivers ))) Tesla is trying to design a universal and safe driver. You may not like some of its decisions )))
I see, so only route/location data is recorded, as opposed to behavior. I agree some are bad drivers, but looking at the data in aggregate, if 80% of drivers do X then maybe it can be considered normal or safe.
For example, what do the majority of drivers do when gaining on a truck in an adjacent lane that is slightly encroaching on your lane? They [presumably] check their other side view and veer to the outside of their own lane as they pass...
This is something that they could theoretically train a neural network to do, using a technique called reinforcement learning. But a safer approach might be not to blindly veer over just because there is a truck in the neighboring lane, but to react to a distance measurement from the ultrasonic sensors or perhaps the camera (this is one area where HW3 could potentially help, by making vision-based object localization more accurate).
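Something like this toy logic, reacting to a measured side clearance instead of always veering; the thresholds are made-up numbers, not Tesla parameters:

```python
# Illustrative only: bias lane position away from an object that measures
# too close alongside us. All thresholds are invented.

COMFORT_GAP_M = 1.2   # desired clearance to a vehicle in the next lane
MAX_OFFSET_M = 0.5    # never shift more than this within our own lane

def lateral_offset(side_distance_m):
    """How far (m) to bias away from an object measured alongside us."""
    if side_distance_m >= COMFORT_GAP_M:
        return 0.0  # enough room; hold the lane center
    shortfall = COMFORT_GAP_M - side_distance_m
    return min(shortfall, MAX_OFFSET_M)

print(lateral_offset(1.5))  # 0.0 -> truck is far enough away, do nothing
print(lateral_offset(0.8))  # 0.4 -> shift 40 cm toward the open side
```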
This philosophy would work pretty well if every car on the road were autonomous; then all of them could drive very safely and coordinate.
But if the autonomous vehicle is forced to drive and share the road with "bad" human drivers, then there is a certain minimum driving style that is necessary to maximize safety. If everyone else on the freeway is going 80 MPH, it's not safe for your car to go 65 MPH, even though that might be the actual speed limit. The autonomous car is going to have to be programmed with some leeway to allow for co-mingling with less-than-stellar human drivers.
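As a toy example of that leeway (made-up parameters, not a claim about how Tesla actually programs it):

```python
# Shape of a "leeway" policy: move with prevailing traffic, but cap how
# far past the posted limit the car will go. Parameters are invented.

MAX_OVER_LIMIT_MPH = 10  # never exceed the posted limit by more than this

def target_speed(limit_mph, traffic_flow_mph):
    """Follow the prevailing flow, capped at limit + allowed margin."""
    return min(traffic_flow_mph, limit_mph + MAX_OVER_LIMIT_MPH)

print(target_speed(65, 80))  # 75 -> follows the 80 MPH flow up to the cap
print(target_speed(65, 55))  # 55 -> congestion: match the slower traffic
```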
Human drivers will be following too close, not signalling lane changes or exits, speeding, turning from the wrong lane, etc. Good human drivers can see such situations developing, prepare for them, and expect the inevitable results. The autonomous car will have to do the same to achieve the safety level that the technologies are promising.
If I'm driving along a neighborhood street and up ahead I see kids playing soccer in a yard, I instinctively slow down because I'm preparing for the soccer ball to end up in the street in front of the car. I'm doubly-prepared to brake if that happens, even if it's just the ball and not one of the kids that runs into the street. What kind of analysis does the autonomous car have to do in order to prepare for that? Do we have to have a neural network running on the wide-angle camera looking for soccer balls?
In my opinion, making the autonomous car drive is one thing, but making it drive well is much, much harder.
A lot of the power of anticipation is in reducing response time. The computer could make up for the anticipation disadvantage through the ability to respond faster than a reactive human.
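A quick back-of-the-envelope makes the point; the reaction times here are rough assumed figures, not measured numbers for either humans or the car:

```python
# Distance rolled during reaction time alone, before any braking happens.
# Reaction times below are assumptions for illustration.

speed_ms = 80 * 0.44704            # 80 MPH is about 35.8 m/s

human_reaction_s = 1.5             # common estimate for an alert driver
computer_reaction_s = 0.2          # assumed sense-to-actuate latency

print(round(speed_ms * human_reaction_s, 1))     # ~53.6 m before braking
print(round(speed_ms * computer_reaction_s, 1))  # ~7.2 m
# Roughly 46 m of extra margin from faster reaction at freeway speed,
# which can offset a lot of the human's anticipation advantage.
```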