But does AP1 even need the street sign images? I don't think it does anything with them or ever displays them (apart from the human figure I've seen in some pedestrian detection test videos by Kman). It seems like something they thought they'd do but jettisoned when they couldn't.

I think the real question is why the stop sign image was showing in the shadow mode release of AP2 but hasn't appeared since. Was that Nvidia code that they somehow failed to hide through commenting? It is just intriguing that the stop sign image was so clearly part of that initial AP2 build, but it's been 7 months and nothing since. Throwing up a stop sign in the IC, if TeslaVision can do that, would be a great safety add-on for everyone regardless of buying EAP/FSD. FSD, obviously, would act on that information, but they show the speed limits for everyone, so why not other signs?

The thing is, AP1's Mobileye is well equipped to recognize street signs and traffic lights. That isn't the problem. The problem is that Tesla's AP1 camera's field of view wasn't wide enough for many traffic scenarios, especially when stopped at lights.

Enter the "AP 1.5" that Model X and Model S facelift were supposed to get according to Model X PR images (that showed two front cameras and one rear, top of roof camera), front camera wiring/cover (that showed two front cameras) and Model S facelift wiring diagrams (that showed two front cameras and rear radars), and that Elon Musk referred to in a conference as the next thing they are doing with Model X that has improved pedestrian detection...

Is Model X (or was?) getting an "AP 1.5" before full-autonomous "AP 2.0" suite upgrade?

The second camera there would very likely have had the wider FoV to see traffic lights, pedestrians at stoplights etc. on a slightly extended AP1...

If the Model S facelift wiring diagrams were prepared for this change (until such references were later removed as the features were obviously cancelled), it is easy to speculate that the AP1 firmware (and any imagery in it) was too...
 
I think the real question is why the stop sign image was showing in the shadow mode release of AP2 but hasn't appeared since. Was that Nvidia code that they somehow failed to hide through commenting? It is just intriguing that the stop sign image was so clearly part of that initial AP2 build, but it's been 7 months and nothing since. Throwing up a stop sign in the IC, if TeslaVision can do that, would be a great safety add-on for everyone regardless of buying EAP/FSD. FSD, obviously, would act on that information, but they show the speed limits for everyone, so why not other signs?

I think there is a simple explanation for this:

They're training NNs to recognize signs (not just speed limits, but a variety). They shipped that model in the first version of AP2 by accident. @verygreen said the IC just shows whatever the APE tells it to. It's possible that the vision task was running through the NN model that recognizes stop signs, it saw one, and it sent an event to the IC. Then it showed up on the IC.

The reason it's been 7 months and we don't have it again is that it hasn't passed "validation", which would likely entail a sufficiently high confidence that the feature works in a wide variety of cases. For stop signs I would imagine that "sufficiently high" is quite high indeed.

Applying Occam's Razor, the above seems like the simplest answer. As a SW engineer it also seems plausible to me.
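
To make that concrete, here is a purely hypothetical sketch (none of these names come from Tesla's firmware) of how a vision task might gate sign detections on a confidence threshold before telling the IC to draw anything:

```python
STOP_SIGN_CONFIDENCE = 0.99  # the "sufficiently high" validation bar; value is a guess

def maybe_notify_ic(detections, send_ic_event):
    """Forward only high-confidence stop-sign detections to the instrument cluster."""
    for det in detections:
        if det["label"] == "stop_sign" and det["confidence"] >= STOP_SIGN_CONFIDENCE:
            # A missing or looser gate in the first AP2 build could explain why
            # the stop-sign image briefly showed up on the IC and then vanished.
            send_ic_event({"type": "sign", "value": "stop"})
```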

Now more speculation: I've long thought that AP2's current primary limiting factor is actually vision. I think Tesla's hire of Karpathy as Dir. of Vision suggests that vision is probably the most important team within Autopilot currently. Even in AP2's current form, I've noticed it sometimes has trouble recognizing even basic lane markers (sometimes!). Recognizing cars at an extremely high degree of confidence is critical for any activity (lane changes, stop signs, turns, etc.), and identifying signage well depends on exactly the same capability.

Tesla has a huge and growing image dataset. That isn't going to be the problem. But training the different NNs to recognize these in a wide variety of situations is not a simple task. Karpathy is exactly the right person for this job, but he just started last month. I think he'll be key to AP2 though.
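
For anyone curious what "training NNs to recognize signs" looks like in practice, here is a generic, minimal fine-tuning sketch in PyTorch. It is illustrative only and has nothing to do with Tesla's actual pipeline; the class count and backbone are arbitrary:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_SIGN_CLASSES = 10  # stop, yield, assorted speed limits, etc. (arbitrary)

# Start from a small off-the-shelf backbone and swap in a sign-classification head.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_SIGN_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on a batch of cropped sign images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The hard part the post is describing isn't this loop; it's curating enough varied examples (night, rain, occlusion, odd angles) that the resulting confidence holds up everywhere.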

Beyond that, one other thing: HD maps. AP2 needs HD maps to raise the confidence of stop sign usage significantly. There are a number of stop signs in my neighborhood that are hidden behind trees/bushes until the very last moment (too late to slow down and stop if you're going too fast). Humans know the stop sign exists because you can intuitively feel it and also vaguely see the "STOP" painted on the road, heavily distorted. Vision could theoretically see this on the ground too, but it's not reliable.

Therefore, AP2 needs HD maps so it knows that there is an intersection coming up that has a stop sign, whether you see it or not. This just adds to the safety. @verygreen has noted that each recent release (including the most recent 17.28) has been adding more and more "map downloading" code to the APE. I believe this is the framework for HD map downloading.
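
As a hedged sketch of what "map prior plus vision" could mean in practice (function names invented for illustration, not anything from the firmware):

```python
def should_prepare_to_stop(position, map_has_stop_sign, vision_confidence,
                           vision_threshold=0.95):
    """Prepare to stop if the HD map says a sign is at the next intersection OR vision is sure."""
    map_prior = map_has_stop_sign(position)      # hypothetical HD-map lookup ahead of the car
    vision_sees_sign = vision_confidence >= vision_threshold
    # The map prior covers the tree/bush-occluded case; vision still covers
    # new or temporary signs the map doesn't know about yet.
    return map_prior or vision_sees_sign
```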

So I guess this was a long answer to a seemingly simple question, but as SW engineers often say, the last-mile problem is the hardest. Tesla may be able to identify a stop sign 90% of the time, but that's not good enough, and they can't ship that software. They might've had this working for a very long time.

Lots of speculation, but I believe it's reasoned and educated.
 
It's always rendered cars in adjacent lanes during a lane change for me. I think at this point we know it sees the cars; it's obviously a deliberate decision not to render them. Perhaps this was down to the partial object/car detection not being good enough, which would have resulted in jumpy avatars pinging about all over the place (which wouldn't inspire confidence). It might also be because AP2 looks for 'objects' and 'in-path objects' rather than specifically cars etc., so it would display almost anything as a car, which could be confusing.
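
Pure speculation on my part, but the "jumpy avatar" problem is roughly what you get if you render raw per-frame detections; a confidence gate plus simple temporal smoothing, sketched below with made-up names (positions treated as scalar lateral offsets), damps most of it:

```python
RENDER_CONFIDENCE = 0.8  # hide anything the detector isn't reasonably sure about
SMOOTHING = 0.3          # exponential smoothing factor for the rendered position

def update_avatar(prev_offset, detection):
    """Return a smoothed lateral offset to render, or None to hide the avatar."""
    if detection["confidence"] < RENDER_CONFIDENCE:
        return None  # better to show nothing than a phantom car pinging around
    if prev_offset is None:
        return detection["offset"]
    # Blend the new measurement with the previously rendered position to damp jitter.
    return prev_offset + SMOOTHING * (detection["offset"] - prev_offset)
```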

Personally, I'm very curious to see what shows up tomorrow at the Model 3 unveil. There's been so much speculation about the 'second part' of the unveil, and there's clearly an entirely different FSD codebase somewhere that we haven't seen, which must be very different from our current version of AP2. Add to that the little hints and code drops that @verygreen has found in the latest firmwares (like Mapbox maps), and the plan to deliver Model 3s only to America initially... and you start to get an idea of why I think we'll see an FSD demo tomorrow.

It'd be perfect timing... think about it:

1) AP2's struggles are real, but they're almost always to do with lane and path delimiter detection. FSD relies on maps to give accurate lane and path delimiters, plus signs, speed limits, etc.
2) AP2 has to work wherever Tesla cars are driving around, which is pretty much anywhere today. FSD can focus initially on the US.
3) AP2 *should* be technically much more capable than what we're seeing today - Tesla's vast team of engineers don't spend all that time making minor corrections to longitudinal control... they're up to something else.
4) We've been hearing about things like an improved browser etc for a long time, but nothing has materialised. Presumably 8.2 or 9 is on its way with a different interface.
5) We know that there's 'something big' about the part 2 unveil of the Model 3... they've been very quiet about Model 3 autopilot.
6) Tesla's timeline for FSD feature advertising has been weird - I can't imagine they'd write things on their website that they have no intention of delivering relatively soon. I know this might seem strange given their track record, but what's their incentive? They already have the best EV on the market, so they don't need to make wild claims to sell cars. They must have this technology working somewhere in order to be confident enough to market it upfront.
7) The Model 3 interior is basically designed for self-driving. There's not much to do other than play with the screen, and the horizontal orientation makes it far better for entertainment (movies etc).
8) The IC display for autopilot hasn't seen much love - in fact, it's regressed since AP1. However, if you think about it, the occupancy grid isn't that well visualised with the current design. I would imagine a much bigger redesign of the IC AP display is coming, showing cars with more fidelity, the space around the car (including behind the car), etc.

Additionally, once maps become part of the equation, everything changes and our AP2 cars suddenly become very smart. I can't believe map generation is done yet, but I can believe that the most complete maps probably exist for the US. We know that human-readable Mapbox maps exist, but only for the US, and that machine-readable drivable rails maps exist, but only for Fremont at this time.
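
For what it's worth, a "drivable rails" entry could be as simple as a lane centerline plus metadata. The structure below is my guess for illustration only, not anything reverse-engineered from the firmware:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DrivableRail:
    rail_id: str
    centerline: List[Tuple[float, float]]  # ordered (lat, lon) points along the lane
    speed_limit_mph: float

# Made-up example near Fremont, purely to show the shape of the data.
example = DrivableRail(
    rail_id="fremont-demo-001",
    centerline=[(37.4925, -121.9447), (37.4930, -121.9441)],
    speed_limit_mph=25.0,
)
```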

So, I think we'll find out a lot about FSD tomorrow :)
 
Personally, I'm very curious to see what shows up tomorrow at the Model 3 unveil. There's been so much speculation about the 'second part' of the unveil,

Elon himself said the part 2 reveal already took place.
6) Tesla's timeline for FSD feature advertising has been weird - I can't imagine they'd write things on their website that they have no intention of delivering, and relatively soon. I know this might seem strange, given their track record,...so they don't need to make wild claims to sell cars. They must have this technology working somewhere in order to be confident enough to market it upfront.

No, their track record has been full of over-promising and under-delivering. They totally marketed EAP and FSD to sell cars, period.
Again, there aren't even ANY EAP-based features and it's been 10 months. 10!!

Additionally, once maps become a part of the equation, everything changes and our AP2 cars suddenly become very smart. However, I can't believe the maps generation is done yet, but I can believe that the most complete ones probably exist for the US. We know that human readable Mapbox maps exist, only for the US, and the machine-readable drivable rails maps exist, but only for Fremont at this time.

Drivable rails maps? Did I miss this post?

So, I think we'll find out a lot about FSD tomorrow :)

The only thing we will see is another elaborate AP demo in order to drive sales.
 
I'd like a little more concrete detail and a little less hype, IMHO. But I suspect the WOW feature of the Model 3 will be the range of the car. Hopefully I'm wrong; I really hope I'm wrong.

I'm sure the range of the base Model 3 will be around 250-260 miles, based on the fact that they are moving the base S up from 75 kWh to 85 kWh. I'm miffed at that, but it is what it is. The Model 3 is more efficient and has a more efficient motor/inverter than my "obsolete" S75D.

That being said, I'm pretty sure it has to be AP-related for the reasons @mrkisskiss laid out. It might also be about the physical car, but a lot of that has been leaked/photographed, and I'm not sure much remains. Also, someone photographed a 3 with a rated range of 310 miles (likely a 75 kWh pack), so we know its range and it shouldn't be a big surprise.
 
I know this might seem strange, given their track record, but what's their incentive -they already have the best EV on the market, so they don't need to make wild claims to sell cars. They must have this technology working somewhere in order to be confident enough to market it upfront.

I think this single portion sort of ruined an otherwise great post. It was the "breaks the suspension of disbelief" moment for me.

Tesla is totally pulling every demand lever they can to reach their sales targets, including making wild claims to sell cars (this started with various never-to-be-seen AP1 features and the P85D horsepower figures back in late 2014)...

Whether or not Tesla needs to do this is certainly up for debate. But the reality is, they feel like they need to, and thus they do.
 
Beyond that, one other thing: HD maps. AP2 needs HD maps to raise the confidence of stop sign usage significantly. There are a number of stop signs in my neighborhood that are hidden behind trees/bushes until the very last moment (too late to slow down and stop if you're going too fast). Humans know the stop sign exists because you can intuitively feel it and also vaguely see the "STOP" on the road very distorted. Vision could also theoretically see this on the ground but it's not reliable.

Therefore, AP2 needs HD maps so it knows that there is an intersection coming up that has a stop sign, whether you see it or not. This just adds to the safety. @verygreen has noted that each recent release (including the most recent 17.28) has been adding more and more "map downloading" code to the APE. I believe this is the framework for HD map downloading.

I’d like to offer my (completely theoretical) disagreement to the necessity of HD maps.

First off, AP would have to rely on a live, OTA process loading tiles ahead, since downloading an HD map of the entire US, let alone the world, onto current in-car storage *seems* impossible. In the OTA scenario, however... well, even where I live in NJ, I wouldn't trust AT&T's service with my life while hurtling down the highway at 80 mph.
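
Just to spell out what "loading tiles ahead" would entail, here's a rough sketch; the tile scheme and fetch_tile() are made up for illustration, not Tesla's code:

```python
import math

TILE_DEG = 0.01  # roughly 1 km tiles in latitude/longitude (illustrative)

def tiles_ahead(route_points, lookahead=50):
    """Tile keys covering the next `lookahead` points of the planned route."""
    return {(math.floor(lat / TILE_DEG), math.floor(lon / TILE_DEG))
            for lat, lon in route_points[:lookahead]}

def prefetch(route_points, cache, fetch_tile):
    """Download missing tiles well ahead of the car so a brief LTE dropout is survivable."""
    for key in tiles_ahead(route_points):
        if key not in cache:
            cache[key] = fetch_tile(key)  # hypothetical network call
```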

More importantly, I think there's a more feasible and data-efficient alternative. As everyone knows, we drive our AP2 Teslas in shadow mode. This means they're recording large amounts of data during events where AP would have done something significantly different from what the driver ended up doing. It seems fairly straightforward to process the event through the NN and then tag the location with an updated response for AP to follow.

To use your example, if a Tesla driver stops before the stop sign is even in sight, and AP predicted it wouldn't have stopped, the driving behavior and AP sensor data would be recorded and sent to the NN for processing. Of course, AP would eventually still recognize the sign when close enough, and this data may be sent as part of the event as well. Eventually, I'd assume, Tesla will have upgraded a simple "Google" map of the US (or world) to include data telling AP how to behave at a specific, previously tagged location. In all other situations, shadow mode would have repeatedly verified that AP reacts correctly, so no additional data from a beefed-up map is necessary.
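
A hedged sketch of that shadow-mode disagreement trigger (thresholds and field names are entirely made up):

```python
DECEL_DISAGREEMENT_MPS2 = 2.0  # driver braked much harder than AP planned to

def maybe_log_event(location, driver_decel, ap_planned_decel, upload):
    """Upload a tagged event when the driver stopped somewhere AP wouldn't have."""
    if driver_decel - ap_planned_decel > DECEL_DISAGREEMENT_MPS2:
        upload({
            "location": location,            # lets the fleet tag this spot (e.g. a hidden stop sign)
            "driver_decel": driver_decel,
            "ap_planned_decel": ap_planned_decel,
        })
```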

Sorry for spilling my brain into a blog post, and don’t take my ramblings to heart.
 
From what I see, once HW2.5 references appeared, references to a dual-node (not just GPU) setup appeared at the same time as well, which does not sound like pure coincidence to me.

Any further info?
Does it seem like a more powerful unit with an extra SoC/GPU?
Is it a separate, networked (redundant?) node like the one shown here:
[attached image: DRIVE PX 2 architecture diagram]


Or anything that hints towards an upgrade to the next generation - DRIVE PX Xavier?



Sensors are external to the APE, but the code that inits them is mostly the same (though not exactly).
The Model 3 obviously has a different backup camera connection, but even on non-Model 3 HW2.5 the backup camera is connected differently.

How is it different? Physically? Or just different in software?
 
I’d like to offer my (completely theoretical) disagreement to the necessity of HD maps.

First off, AP would have to rely on a live, OTA update process loading tiles ahead, since downloading an HD map of the entire US, let alone the world, to current in-car data drives *seems* impossible. In the OTA scenario, however... well, even where I live in NJ, I wouldn't trust AT&T's service while hurtling at 80mph down the highway with my life.

More importantly, I think there's a more feasible and data-efficient alternative. As everyone knows, we drive our AP2 Teslas in shadow mode. This of course means they're recording large amounts of data during events where AP would have done something significantly different from what the driver ended up doing. It seems fairly simple to process the event through the NN and then tag the location with an updated response for AP to follow.

To use your example, if a Tesla driver stops before sight of the stop sign, and AP predicted it wouldn’t have stopped, the driving behavior and AP sensor data would be recorded and sent to the NN for significant processing. Of course, AP would eventually still recognize the sign when close enough, and this data may be sent as well as part of the event. Eventually, I’d assume, Tesla will have upgraded a simple “google” map of the US (or world) to include data telling AP how to behave when at a specific, previously tagged location. In all other situations, shadow mode would have repeatedly verified that AP would react correctly, so no additional data from a beefed up map is necessary.

Sorry for spilling my brain into a blog post, and don’t take my ramblings to heart.
Good thoughts. One thing on the data-size problem of HD maps: Mobileye claims to have solved it - there are several publicly available talks given by their CEO in the last year describing their approach to low-data, crowd-sourced HD maps. The basic approach is to use visual camera confirmation of known physical objects at the side of the road to lock onto a position. A billboard is one example: all the cars know that a billboard is located at some particular place. You need only that one object's location and description in your HD map for it to be effective. The computer is "looking for" that billboard, and when the cameras spot it they compute your location from the frame-by-frame change in the size and angle of the billboard in the camera's view. This is super low-bandwidth and solves the problem (at least when it isn't super foggy out).
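
The geometry behind "frame-by-frame changing size" is just the pinhole-camera relation; here is a toy example (all numbers made up) of ranging a known-size billboard, not Mobileye's actual REM code:

```python
def range_to_landmark(true_width_m, pixel_width, focal_length_px):
    """Distance to a landmark of known physical width from its apparent width in pixels."""
    # Pinhole model: pixel_width = focal_length_px * true_width_m / distance
    return focal_length_px * true_width_m / pixel_width

# A 6 m wide billboard that appears 120 px wide with a 1000 px focal length
# is about 50 m away; tracking that over frames pins down the car's own position.
print(range_to_landmark(6.0, 120, 1000))  # 50.0
```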

What will Tesla do? Who knows. But Mobileye is implementing a group effort with GM (I think) and other partners this year to crowdsource these maps.