Where is the enhanced autopilot?

The video was not rigged, although it might have been edited a bit.
I’m not saying the video was rigged, I’m saying the software was.

The video was a real proof of concept based on Tesla combining the AP2 hardware and the MobilEye software.
Which they 100% knew at the time they were not bringing to market, making the whole stunt deliberately misleading IMO.
 
The following is pure speculation on my part, but based on over 50 years of experience in computer software development. My view is that the current approach to FSD, with highly detailed maps updated in real time, is a) the wrong approach and b) not achievable with current communications technology. The reason I say this is that it assumes the map will constantly be updated by drivers, in real time, for all other drivers. Tesla can't even do simultaneous car software updates now, and given the bandwidth limitations of current cellular systems (even 5G), it still won't happen. So what happens when your car, relying on those detailed maps, has an accident because the map has changed but the car is unaware of the change? For example, a crew recently dug a big ditch to lay pipe, but your car, relying on the detailed map in its system, is unaware of the ditch and drives into it.

We, as humans, drive in areas where we have never been, with no maps whatsoever. We rely on our vision (the ability to recognize objects), our ability to spatially place objects relative to ourselves and our direction of travel, and our ability to apply context to those objects (object knowledge). For example, we know a tree is not going to move, so if it is in our path it is up to us to maneuver around it. We know stopped vehicles may not stay stopped, and what we expect depends on their situation: in the drive path or parked along the side of the road; a driver in the driver's seat or not; turn signals indicating an action, or backup lights or brake lights activated, etc. We predict the behavior of objects based on our knowledge about them, be it a pedestrian, sign, bridge, bike, motorcycle, car, truck, bus, road, curb, road edge, road marking, etc. We use this information to navigate a drive path, all without a single detailed map. In fact, if there is any place our system fails, it is when we repeat a path every day: we form a mental map that we use like a habit to reduce the workload on our brain, and then we are surprised when something has changed unexpectedly. This is the "autopilot" I remember using when I commuted every day from Hayward to Treasure Island in the Bay Area (an early morning 5 am commute) and often arrived with no memory of having driven the route. Most of us have done this at some time in our lives.

The detailed map approach tries to do what I just related: reduce the computing workload by already knowing where fixed objects are, so the system does not have to work as hard to compute its relative location and the drive path. The world just changes too fast for this to be a reliable approach. Better to be able to reliably identify objects visually, have stored knowledge of how those objects can behave, and locate them relative to oneself and one's path.

I see no evidence yet (despite another poster claiming, with no proof to back it up, that back in Nov 2016 Tesla Vision could do many of these things) that Tesla Vision can reliably detect and identify any of these objects, much less predict their behavior based on what the object is. Thus it doesn't see a fire truck in its path, recognize that it is a fire truck, and know that one of its possible states is stopped, which at the very least would require maneuvering around it or notifying the driver to take control. It can't identify an overpass as a non-threatening object, or a shadow as a shadow, hence it applies the brakes. This is also a perfect example of why you can't rely on detailed map data: a car breaks down in the roadway in the shadow of an overpass. It has turned on its emergency blinkers, but it is not on the detailed map. What does the system do? Is it confused? Does it brake? Does it say "it's a shadow and I have been programmed to ignore shadows" and plow into the car? It isn't on the detailed map, so the map data cannot help. Of course, this is one of those situations that frequently results in accidents when humans are driving, but would it be worse with autonomous cars?
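As a pure illustration of this "object knowledge" idea, here is a minimal sketch of behavior priors keyed by object class. The class names, states, and actions are my own assumptions for illustration, not anyone's actual implementation:

```python
# Behavior priors keyed by object class: what states can this object be
# in, and can it move on its own? This is the "object knowledge" a human
# driver applies without any detailed map.
BEHAVIOR_PRIORS = {
    "tree":       {"can_move": False, "states": ["fixed"]},
    "overpass":   {"can_move": False, "states": ["fixed"]},
    "shadow":     {"can_move": False, "states": ["ignorable"]},
    "fire_truck": {"can_move": True,  "states": ["moving", "stopped"]},
    "pedestrian": {"can_move": True,  "states": ["walking", "standing", "crossing"]},
}

def plan_for_object(label: str, in_path: bool) -> str:
    """Choose an action from what the object *can* do, not from a map."""
    prior = BEHAVIOR_PRIORS.get(label)
    if prior is None:
        return "alert_driver"                 # unknown object: be conservative
    if not in_path or "ignorable" in prior["states"]:
        return "continue"                     # a shadow is just a shadow
    if not prior["can_move"]:
        return "maneuver_around"              # a tree will not move for us
    return "slow_and_replan_or_alert_driver"  # e.g. a stopped fire truck
```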

I keep hoping we will see an improvement in Tesla Vision, as demonstrated by the information it displays on our screen. Until it can do that, I am not convinced it will ever be capable of true autonomous driving.

For all the reasons you mentioned, Tesla is relying on both detailed maps and TeslaVision together for FSD. The car needs both: a detailed map for navigation (knowing which road to be on to reach a particular destination) and camera vision to do the driving itself (avoiding obstacles, steering, changing lanes, etc.).

Also, keep in mind that Autopilot is probably not using any of the FSD stuff yet. For example, Autopilot is probably not using the part of TeslaVision that tracks parked objects; if it were, it never would have hit the parked fire trucks. So it would be a mistake to draw conclusions about Tesla's FSD from the Autopilot accidents, or from what we are or are not seeing in the display, since Autopilot is not using the FSD part of the software.
 
I’m not saying the video was rigged, I’m saying the software was.

Which they 100% knew at the time they were not bringing to market, making the whole stunt deliberately misleading IMO.

I know others interpreted the video very differently but I always saw the video as simply a marketing proof of concept, nothing more.
 
Also, keep in mind that Autopilot is probably not using any of the FSD stuff yet. For example, Autopilot is probably not using the part of TeslaVision that tracks parked objects; if it were, it never would have hit the parked fire trucks. So it would be a mistake to draw conclusions about Tesla's FSD from the Autopilot accidents, or from what we are or are not seeing in the display, since Autopilot is not using the FSD part of the software.
The mistake is drawing the conclusion that because EAP can't see a parked car, it's probable that FSD can.

High-resolution maps are a bit of a joke: firstly, the data file would be HUGE; secondly, the first car down the road after a layout change would crash, though it would at least warn everyone else of the change. That's not smart.
 
@diplomat33, how do you know that Tesla isn't using any of the FSD vision software? Do you have access to the source code or some special inside knowledge? While it is possible that they have two independent development activities, I seriously doubt it. That would be a very expensive approach to software development for overlapping functionality. It would be much easier to develop one system and then limit the availability of specific features until paid for. EAP and FSD both require the fundamental ability to do proper image recognition and spatialization.
 
The mistake is drawing the conclusion that because EAP can't see a parked car, it's probable that FSD can.

But FSD has to be able to see a parked car in order to be FSD. It is a required feature of FSD. So logically we can assume that FSD will be able to see parked cars eventually whereas we know that Autopilot cannot see parked cars now. Again, the difference is that seeing parked cars is a required feature of FSD but it is not a required feature of the current Autopilot.

High-resolution maps are a bit of a joke: firstly, the data file would be HUGE; secondly, the first car down the road after a layout change would crash, though it would at least warn everyone else of the change. That's not smart.

Again, this scenario would not happen because FSD cars would never rely solely on maps for navigation for the very reasons you mention. Yes, relying solely on detailed maps would be dumb because it would be difficult to keep them up to date in real-time for the entire fleet and any change to the road that was not updated in the maps could cause a serious accident. That is precisely why I said that cars will use a combo of both detailed maps and camera vision. It is the job of the camera vision to prevent the type of scenario you mention. The camera vision would see a change in the road and avoid the crash.

how do you know that Tesla isn't using any of the FSD vision software?

I don't know for sure. And maybe they are applying some of the FSD software to AP. But Tesla has been working on FSD for 2 years or more. Surely, if Tesla had FSD vision that could track parked vehicles, they would apply it to Autopilot, don't you think? So either Tesla's FSD research completely sucks and they don't even have basic object tracking after 2 years, or they do have it but are withholding it from Autopilot. I find it very unlikely that after 2 years Tesla would still not have good FSD vision, especially since we've seen some leaks suggesting they do. So the second option is more likely: they have it but are not including it in Autopilot yet.

Do you have access to the source code or some special inside knowledge?

No, I don't have access to the source code. I am making educated guesses just like everybody else on this forum.

It would be much easier to develop one system and then limit the availability of specific features until paid for. EAP and FSD both require the fundamental ability to do proper image recognition and spatialization.

Once Tesla finishes FSD, yes, I do think that is how AP and FSD will work. Tesla will use the FSD software for both and just limit the features for the people who only purchase EAP.
 
AP requires all sensors to be operational and a good GPS lock. It has a dedicated AP map layer, separate from the navigation and displayed maps, which is downloaded as tiles.
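For illustration, tile-keyed map data generally works like the standard "slippy map" scheme sketched below. This is the general technique, not necessarily Tesla's actual tiling:

```python
import math

def latlon_to_tile(lat_deg: float, lon_deg: float, zoom: int) -> tuple[int, int]:
    """Standard Web Mercator ("slippy map") tile indexing.

    Maps a GPS fix to the (x, y) index of the tile containing it, so a
    car only needs to fetch the tiles along its current route instead of
    one enormous map file.
    """
    n = 2 ** zoom                      # tiles per side at this zoom level
    lat_rad = math.radians(lat_deg)
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Example: the zoom-14 tile index covering a given GPS fix.
print(latlon_to_tile(37.4926, -121.9447, 14))
```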

AP2 has made progress, but at a pace far, far slower than advertised by Elon. The whole matter of the MobilEye breakup only came to light after the introduction of AP2 and the "only needs validation" video.

As someone who took the bait and believed that video was real, based on 10 months of watching AP1's development progress since its release, I'm inclined to say AP2 development is slower than a drunk sloth and waaaay far from where it should be. AP1 feature parity is still not there, 18 months after it was claimed to arrive.
It is simply unfathomable that Tesla still doesn't have sign recognition when YOLO/Darknet has been able to speed up image recognition by a factor of nearly 10 through pure code optimization. I mean, come on, it's not like there isn't a ton of software engineering talent at work at Tesla, so where TF are the results!?
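For context on how off-the-shelf that kind of detection has become, here is a minimal sketch of running a Darknet/YOLO model through OpenCV's dnn module. The config/weights file names are placeholders, and this is generic YOLO usage, not Tesla's stack:

```python
import cv2
import numpy as np

# Placeholder model files: any YOLOv3-style network trained on traffic
# signs would be used the same way.
net = cv2.dnn.readNetFromDarknet("yolov3-signs.cfg", "yolov3-signs.weights")
out_layers = net.getUnconnectedOutLayersNames()

def detect(frame: np.ndarray, conf_threshold: float = 0.5):
    """One forward pass; returns (class_id, confidence, box) tuples."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    detections = []
    for output in net.forward(out_layers):
        for row in output:        # row = [cx, cy, bw, bh, objectness, scores...]
            scores = row[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence >= conf_threshold:
                cx, cy, bw, bh = row[:4] * np.array([w, h, w, h])
                box = (int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
                detections.append((class_id, confidence, box))
    return detections
```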
 
@diplomat33, what you are saying still doesn't make sense. In essence, you are saying that they have been working on FSD for two years while developing EAP independently, and when FSD is working they will replace EAP with a subset of FSD. I think the problem is that they have found the task of vision identification and relative placement in real time much more difficult than they expected. We can see the lag between a vehicle moving into our path and its appearance on the display just by watching a car pull into our lane: it can be 0.5-1 sec, and that is really slow. Further, the system can't tell what kind of vehicle it is or provide a size perspective; a semi tractor-trailer is a car, and so is a motorcycle. The visual display should happen in real time, as we are seeing it. Given TFLOPS of computing speed, it is really bad coding to display an event with such a large lag, and there is an even longer lag before the car takes any action such as slowing down or applying the brakes. I have worked on real-time military systems and can't imagine allowing such a lag while tracking an incoming missile; the consequences would not be pretty. Well, plowing into someone who just pulled into your lane and slowed down isn't a pretty picture either. When we talk about real-time computing and coding, we don't talk in 500 ms to 1,000 ms time frames. We are usually in the low microseconds, or even nanoseconds these days.
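To put that lag in perspective, here is a quick back-of-envelope calculation of how far the car travels during a 0.5-1 sec recognition delay:

```python
# Distance covered during the perception/display lag described above.
MPH_TO_MPS = 0.44704  # exact mph -> m/s conversion

for speed_mph in (25, 45, 65):
    for lag_s in (0.5, 1.0):
        distance_m = speed_mph * MPH_TO_MPS * lag_s
        print(f"{speed_mph} mph, {lag_s:.1f} s lag -> {distance_m:.1f} m traveled")

# At 65 mph, a 1 s lag is ~29 m -- roughly six car lengths -- before the
# display even shows the vehicle that cut in, let alone before any braking.
```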
 
@diplomat33, what you are saying still doesn't make sense. In essence, you are saying that they have been working on FSD for two years while developing EAP independently, and when FSD is working they will replace EAP with a subset of FSD. I think the problem is that they have found the task of vision identification and relative placement in real time much more difficult than they expected. We can see the lag between a vehicle moving into our path and its appearance on the display just by watching a car pull into our lane: it can be 0.5-1 sec, and that is really slow. Further, the system can't tell what kind of vehicle it is or provide a size perspective; a semi tractor-trailer is a car, and so is a motorcycle. The visual display should happen in real time, as we are seeing it. Given TFLOPS of computing speed, it is really bad coding to display an event with such a large lag, and there is an even longer lag before the car takes any action such as slowing down or applying the brakes. I have worked on real-time military systems and can't imagine allowing such a lag while tracking an incoming missile; the consequences would not be pretty. Well, plowing into someone who just pulled into your lane and slowed down isn't a pretty picture either. When we talk about real-time computing and coding, we don't talk in 500 ms to 1,000 ms time frames. We are usually in the low microseconds, or even nanoseconds these days.

Yes, that is what I am saying. And maybe I am just rationalizing things. Maybe you are right that Tesla is really struggling with FSD, and that is why we have not seen new features in EAP yet. I just have a hard time believing that the Autopilot we have now really is the best "FSD" Tesla can do after 2+ years. So I think it makes more sense that Tesla has better FSD in R&D that has just not been pushed to the public yet.
 
@barjohn Maybe we are both a little right? It could be that Tesla does have FSD in R&D that is better than the current Autopilot, but that they are also still working out some vision problems, which is why they have not released FSD yet.
 
@diplomat33, as someone who has been both a developer and a manager on more projects than I can remember, I won't say that every project or feature was delivered in the time frame I wanted, and I am sure this project is having more than its share of setbacks with so much turnover at the top (usually that results in a change of direction, or even starting over, as the new person wants to prove that the old way was the problem). But there is an aspect of this project that I don't see happening: you are treating it as an all-or-nothing proposition, and it shouldn't be. For example, any vision identification and relative localization that could prevent an accident should have priority. That means the ability to visually recognize any general vehicle type in the current path as a potential threat, and to determine its relative location, should come first. It all has to happen in real time, so if that means writing code in machine language, so be it. One can use radar as an assist to measure distance to the object, or use visual means, or both. Put this into EAP. Maybe in EAP it only gives you a warning or applies the brakes, where in FSD it slows down (as a prudent driver would) and then finds a safe path around the object. If you can't tell a real obstacle from a shadow or a spurious radar reflection, you have a long way to go.

Even the crash into the gore point could have been avoided with better vision capability. Assuming the direct cause was confusion about the drive path caused by the lane split: when the radar detected an object in the direct path, a camera able to recognize a concrete barrier and differentiate it from a shadow should have confirmed the threat, and the system should have applied the brakes. I fully understand the concern about phantom braking (which the system does now anyway), but when two sensors confirm a direct threat to the safety of the vehicle, the system should take action. The obvious conclusion is that they don't have reliable vision capability, so the software is programmed to ignore many real threats in order to avoid the false positives created by the radar (reflections off other objects that appear to be in the drive path).
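As an illustration of that two-sensor confirmation idea, here is a minimal sketch. The types, labels, and threshold are my own assumptions for illustration, not Tesla's code:

```python
from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float        # distance to the reflection
    in_drive_path: bool   # does it fall inside the predicted path?

@dataclass
class VisionDetection:
    label: str            # e.g. "concrete_barrier", "shadow", "car"
    confidence: float
    in_drive_path: bool

# Labels the camera classifier treats as non-threatening surface artifacts.
IGNORABLE = {"shadow", "overpass", "road_marking"}

def should_brake(radar: RadarReturn, vision: VisionDetection,
                 min_confidence: float = 0.8) -> bool:
    """Brake only when both sensors independently confirm a threat.

    Radar alone can't be trusted (spurious reflections cause phantom
    braking) and vision alone can be fooled by shadows; requiring
    agreement between the two is the point of the argument above.
    """
    radar_threat = radar.in_drive_path
    vision_threat = (vision.in_drive_path
                     and vision.confidence >= min_confidence
                     and vision.label not in IGNORABLE)
    return radar_threat and vision_threat
```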
 
Those who say AP cannot see stopped cars are not using firmware 10.4 or later. Since that version it has stopped for stopped cars 100% of the time for me. The issue is at what speed: the fire truck accidents were at 60+ mph, and all my driving in this situation is at less than 50 mph. I think the question should be why they are having this problem at over 50 mph. A slow computer??? It seems misleading to say, without qualification, that it cannot see stopped cars when you know it can.

10.4 was a huge advance for EAP and for the future FSD, which to me share the same code base.
 
The hypothesis is that it can see stopped vehicles if it was previously tracking them while they were moving, but not if they were already stopped before they came into radar view. In the latter case, the radar would generate many false positives from reflections off fixed objects, and the car would be applying the brakes all the time. This is why I believe vision ID is key: it would allow either confirmation of the radar reflection, or rejection if the object were a shadow or a sign by the road.
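A sketch of the kind of gating that hypothesis implies (pure illustration; the names, threshold, and structure are my assumptions):

```python
# Track IDs the radar has ever observed in motion. Under the hypothesis
# above, a stationary return is treated as a vehicle only if its track
# was previously seen moving; otherwise radar alone can't distinguish a
# parked fire truck from a bridge or a roadside sign.
seen_moving: set[int] = set()

def classify_radar_return(track_id: int, speed_mps: float,
                          vision_confirms_vehicle: bool) -> str:
    if abs(speed_mps) > 0.5:          # clearly moving: unambiguous for radar
        seen_moving.add(track_id)
        return "vehicle"
    if track_id in seen_moving:       # stopped, but we watched it stop
        return "vehicle"
    # First seen while already stationary: fall back on vision ID to
    # confirm the reflection or reject it as clutter.
    return "vehicle" if vision_confirms_vehicle else "clutter"
```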
 
The hypothesis is that it can see stopped vehicles if it was previously tracking them while they were moving, but not if they were already stopped before they came into radar view. In the latter case, the radar would generate many false positives from reflections off fixed objects, and the car would be applying the brakes all the time. This is why I believe vision ID is key: it would allow either confirmation of the radar reflection, or rejection if the object were a shadow or a sign by the road.
That hypothesis was absolutely true before firmware 10.4, but not after. After 10.4 the issue is speed. I personally drive on surface streets with AP all the time at < 50 mph, this condition happens frequently at stop lights, and it never fails. Do you not drive under the same conditions I am talking about? It is very easy to see the difference. I have read many posts from others who agree with me.
 
I follow Tesla's advice and don't use EAP on surface streets. Yesterday, on a freeway in heavy stop-and-go traffic, I had to take control on several occasions when vehicles moved into my lane and EAP did not stop or slow down, with less than a car's length between us. I am not about to risk my vehicle or myself in conditions that the manufacturer says it can't reliably handle. I have seen plenty of crazy things drivers will try with EAP and post videos of, but that doesn't mean the system can be relied on to do the same thing every time or under different circumstances.

If you want to test your system, are that confident, and own a second car: park that car on a surface street, then drive your Tesla on AP toward it and see if your car stops or hits it. Post a video of your experiment. Use a street where you have someone stop any cars from entering from either direction so you put no one else at risk. I want to see this video. :)
 
I follow Tesla's advice and don't use EAP on surface streets. Yesterday, on a freeway in heavy stop-and-go traffic, I had to take control on several occasions when vehicles moved into my lane and EAP did not stop or slow down, with less than a car's length between us. I am not about to risk my vehicle or myself in conditions that the manufacturer says it can't reliably handle. I have seen plenty of crazy things drivers will try with EAP and post videos of, but that doesn't mean the system can be relied on to do the same thing every time or under different circumstances.

If you want to test your system, are that confident, and own a second car: park that car on a surface street, then drive your Tesla on AP toward it and see if your car stops or hits it. Post a video of your experiment. Use a street where you have someone stop any cars from entering from either direction so you put no one else at risk. I want to see this video. :)
I understand your position, and of course if you do not feel safe you should definitely not use AP on surface streets or anyplace else, no matter what Tesla recommends. And since you have to take control on several occasions in stop-and-go traffic, maybe you are using AP in a situation that is not very safe. Maybe your driving environment is not as safe as my surface street driving, which you may not be that familiar with, because in my case I very seldom have to take control unexpectedly. I do use it in places that are safe (a topic for another discussion). I am not talking about taking control at a stop light or a stop sign, which is of course expected.

Regarding your suggestion that I put my second car in the middle of the street (or maybe you are saying on the side of the street, not in a lane?) and see if my car stops for it: that is not something I am interested in doing just to try to convince you I am not lying. However, you could do the same thing and show us that you have to take control in cases where your car should stop on its own, and maybe show it to a Tesla SC to get their opinion. Actually, I would be interested in seeing your video if you decide to share it. Btw, there are many, many more Tesla AP videos on YouTube that do not do crazy things than there are that do. Some of the posters are regulars here and I enjoy watching them. But I do believe you are telling the truth about your experience. I am just sharing my experience with you and others.

Anyway, I think many on here can testify that since 2018.10.4, if you are on AP and coming up on a stopped car at a stop light that your Tesla has not already seen and "tracked" while it was moving, it will indeed now stop if your speed is around 45 mph or less. Even Tesla's head of AI posted on Twitter about the advances in 2018.10.4. Again, check out the YouTube videos regarding this version. Very impressive improvements.

Below is from the Model X manual, but I believe it is the same in the Model S manual, which is what I drive. Please notice that it says it "may not" brake, "especially in situations when you are driving over 50 mph". I think you are safer driving on surface streets, where the speed limit is < 50 mph in pretty much all cases, and a very high percentage of the accidents were on the freeway (where AP use is recommended) while driving > 50 mph. My point above is that this has greatly improved since 2018.10.4 when driving < 50 mph, and my hope is that it improves for cars going > 50 mph so that maybe some of these accidents will not happen.

I did not address the part about "a vehicle you are following moves out of your driving path". Again, this has improved since 10.4. This does happen on surface streets, though maybe I have not seen it as much as you do in your heavy commute traffic, and when it happens to me I am not positive whether the car ahead is already tracked, since the system does see multiple cars ahead of me (as shown in the IC). My main problem prior to 10.4 was when there were no cars ahead of me in the IC (for some distance): as I approached cars at a traffic light, it would not stop. Now it does.

Warning: Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles or objects, especially in situations when you are driving over 50 mph (80 km/h) and in situations where a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you. Always pay attention to the road ahead and stay prepared to take immediate corrective action. Depending on Traffic-Aware Cruise Control to avoid a collision can result in serious injury or death. In addition, Traffic-Aware Cruise Control may react to vehicles or objects that either do not exist or are not in the lane of travel, causing Model X to slow down unnecessarily or inappropriately.
 
Warning: Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles or objects, especially in situations when you are driving over 50 mph (80 km/h) and in situations where a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you. Always pay attention to the road ahead and stay prepared to take immediate corrective action. Depending on Traffic-Aware Cruise Control to avoid a collision can result in serious injury or death. In addition, Traffic-Aware Cruise Control may react to vehicles or objects that either do not exist or are not in the lane of travel, causing Model X to slow down unnecessarily or inappropriately.

OMG, so much CYA, might as well go without EAP. I think for most people it's just safer, for them and those around them, to forego EAP.
 
Seriously?????
Supposedly, lane changes on surface streets have been activated in certain areas (there are reports from NL and DE for "highway-like" surface streets; "supposedly" because I couldn't verify it myself yet, as I'm in SEA atm).
Sign recognition is still notably absent, as is "all vehicles" rendering in the IC, so AP1 feature parity is still missing 1.5 years after it was supposed to arrive, even if Autosteer has somewhat surpassed AP1 by now.
 