Tesla autopilot HW3

I'm not sure it's maps. I think it's actually the cameras+NN getting confused about whether I'm on a highway or an exit ramp. This is a section with no shoulders and concrete barriers -- it "looks" a lot like an exit ramp.

Map metadata could still help. Like the splines through intersections Elon was talking about, it could provide a strong hint that the road does not have an exit. Similar to that video with the wide unmarked intersection on a curve/hump that causes the car to track into the wrong lane...
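Just to illustrate the idea, here is a minimal sketch of fusing a vision signal with a map hint. It is purely hypothetical: ramp_score, map_has_exit_nearby and the thresholds are made-up names and numbers, not anything known about Tesla's actual stack.

Code:
def classify_road_context(ramp_score: float, map_has_exit_nearby: bool) -> str:
    """Blend a vision-based 'looks like a ramp' score with a map prior.

    ramp_score: hypothetical NN output in [0, 1] -- how ramp-like the scene looks.
    map_has_exit_nearby: hypothetical map-metadata flag for the current segment.
    """
    # If the map says there is no exit here, demand much stronger visual
    # evidence before treating the road as a ramp.
    threshold = 0.5 if map_has_exit_nearby else 0.9
    return "ramp" if ramp_score > threshold else "highway"

# A barrier-lined, shoulder-less section might push ramp_score up (say 0.7),
# but without an exit in the map data it would still be treated as highway.
print(classify_road_context(0.7, map_has_exit_nearby=False))  # -> highway
print(classify_road_context(0.7, map_has_exit_nearby=True))   # -> ramp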
 

Except there is an exit right before where I have this problem. I think that maybe the car thinks I took that exit, because the NN sees concrete barriers, no shoulder, and everything elevated above ground level. So it looks sorta like I'm on a ramp. And I suspect they have some kind of "I'm on a ramp" detector in the NN to be able to implement the thing where it slows down on ramps now, without relying 100% on maps.

But I could also be completely wrong about that. What I know is that this morning I was going through this section in very low speed stop & go traffic and it did not get confused, suggesting to me that it's not entirely a map problem. And it may be a problem that could be fixed with more compute power in HW3. Except -- and this is really the point of my original post on this subject -- I will not get HW3 because I did not order FSD. But if they can't really deliver EAP without HW3 (which I believe is likely true), then they ought to upgrade EAP buyers to HW3. But they probably won't, because Tesla.
 

Gotcha, so there is an exit, but you are not taking it. Your morning experience may also have been due to following a car in front of you. That has classically allowed the car to handle tricky intersections.
I understand the pessimism, but if this is an issue on the highway, then it seems like Tesla will have to upgrade the HW if it can't get the NoA functionality to work on HW2.x.
 
@verygreen a lot of people are having issues with Spotify skipping on the latest firmware, and we've put it down to overloading the media CPU with drawing maps, since it goes away if you replace the map with a sketchpad. While wondering how they made such a fundamental error, it occurred to me that the maps might have gotten more detailed recently, as many people have been receiving map updates. Do you know if the map resolution has changed recently?
It's some sort of CPU overload on the MCU; it has nothing to do with Autopilot.

The level of detail did not change, but if you add CPU load (e.g. run the dashcam), it intensifies. I bet a CPU-heavy web page in the browser or an Atari game would have a similar effect.
 
I believe the reason is that Mobileye has patented speed sign recognition.

Multiple patents exist for speed (and other) sign recognition in automotive applications. At least two of them pre-date MobilEye's patent. And if I'm honest, MobilEye's patent could be fairly easily invalidated with both prior-art examples and the fact that it's an obvious technology. Simply applying existing technologies together doesn't make for a solid, defensible patent. At least not outside of Southeast Texas.
 

Given that this is a global question and not merely a U.S. one, I doubt we have the insight here to really judge the patent's applicability.

But does anyone here think a patent is the reason AP2 does not read speed signs yet? I for one do not think it is the reason. Them not having implemented it yet, or it not being reliable enough, or it not running well on AP2/2.5 hardware sounds more plausible to me.
 
But reading speed limits is the most basic thing and easiest of vision recognition problems and many cars have it (using MobileEye's technology I think).
 
The Mobileye system in my previous car was so unreliable; it used to pick up limits from all over the place. I imagine this uncertainty is more likely the reason for not implementing speed limit recognition yet, especially as AP2 reacts to changes in speed limits.
 

Except that speed limit recognition works really well on AP1.
 
But does anyone here think a patent is the reason AP2 does not read speed signs yet? I for one do not think it is the reason. Them not having implemented it yet, or it not being reliable enough, or it not running well on AP2/2.5 hardware sounds more plausible to me.
But reading speed limits is the most basic thing and easiest of vision recognition problems and many cars have it (using MobileEye's technology I think).

I don't think it is a patent issue. But I don't think it is a lack of ability on Tesla's part either. I believe Tesla could have implemented camera-based speed limit recognition if they had really devoted resources to the problem. I think the lack of speed limit recognition comes down to different priorities and a deliberate design choice.

When AP2 launched, Tesla's priority was just getting autosteer and TACC back to AP1 level, which was a struggle for a while. After all, what's the point of the car seeing speed limit signs if it can't even stay in the lane properly? So they were most likely devoting most of their efforts to that, and camera-based speed limit recognition probably went on the back burner. Once autosteer and TACC were good enough, we know Tesla focused on implementing Nav on AP, because it was a major EAP feature and also a gateway to FSD. Tesla wanted to get Nav on AP done.

Also, Tesla probably made a conscious design choice to go with GPS-based speed limit recognition. It's easier, it frees up the team to work on other things, it's one less thing for the camera vision to have to do, and it's "good enough" for most highway driving.

I do think this will change with AP3. AP3 has more computing power, so Tesla will be able to have the camera vision do a lot more. Having the camera vision do speed limit recognition on top of everything else won't be an issue. Plus, camera-based speed limit recognition is more critical for city driving, where speed limits change more often and adapting to those changes matters more. Plus, Tesla may have wrapped speed limit recognition up with other sign recognition, so it might be seen as more of an "FSD" feature.
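For what it's worth, the "GPS-based" approach described above boils down to a map lookup rather than reading the sign. A minimal sketch, assuming a made-up SPEED_LIMIT_DB table and segment IDs (nothing here reflects Tesla's actual data format):

Code:
# Hypothetical map metadata keyed by road segment, with posted limits in mph.
SPEED_LIMIT_DB = {
    "I-80_seg_1042": 65,
    "US-101_seg_0007": 55,
}

def speed_limit_from_map(segment_id: str, default_mph: int = 45) -> int:
    """Look up the posted limit for the segment the GPS says we are on."""
    return SPEED_LIMIT_DB.get(segment_id, default_mph)

print(speed_limit_from_map("I-80_seg_1042"))  # 65 -- whatever the map says
print(speed_limit_from_map("CountyRd_12"))    # 45 -- missing/stale data falls back to a guess

This is why it is "good enough" on mapped highways but weak wherever the database is missing or out of date, which is presumably where camera-based recognition would matter most.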
 
The Mobileye system in my previous car was so unreliable; it used to pick up limits from all over the place. I imagine this uncertainty is more likely the reason for not implementing speed limit recognition yet, especially as AP2 reacts to changes in speed limits.

The MobilEye system is usually combined with an OEM-specific database and interpretation, so how well it works really depends on how the car makes sense of it together with maps etc. There are also three or four generations of the sign recognition out there (different EyeQ chips), so it depends on the car.

My personal hunch is that AP2 would use traffic-sign recognition if Tesla had a reliable one of their own, but they don't yet, or can't run it on AP2/2.5 hardware.
 
But reading speed limits is the most basic thing and easiest of vision recognition problems and many cars have it (using MobileEye's technology I think).

MobilEye has had automotive chips capable of this for over a decade now, sure, but that only means most OEMs have been using their chip. Waymo of course has their own system, and there are various laboratory prototypes doing the same, but basically I think the issue is Tesla hasn't implemented their own reliably yet or it is too heavy to run on AP2/2.5 yet.
 
Elon stated that they use 80% of the total capacity now. Sign reading has moved from a "classic" computer vision task (OCR) to NNs. Then you need all kinds of signs etc. in a huge training database, adding to the total load. Maybe it would require more than the remaining 20%?
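Treating that as simple budget arithmetic (the 80% is the figure from Elon's statement quoted above; the candidate costs for a sign-reading network below are invented purely for illustration):

Code:
current_use = 0.80               # Elon's stated utilization of HW2.5
headroom = 1.0 - current_use     # 20% left for anything new

# Made-up costs for an added sign-reading network, as a fraction of the chip.
for sign_net_cost in (0.05, 0.15, 0.25):
    verdict = "fits" if sign_net_cost <= headroom else "exceeds the remaining 20%"
    print(f"sign net at {sign_net_cost:.0%}: {verdict}")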
 
basically I think the issue is Tesla hasn’t implemented their own reliably yet or it is too heavy to run on AP2/2.5 yet.

Which honestly, given the current state of EAP/NOA, is as it probably should be. I would rather have them work on not slamming into concrete gores and fire trucks than work on sign recognition. Sign recognition at this point would just bring more false positives leading to sudden braking when not appropriate.

If their house were otherwise in order it would be appropriate for them to be working on sign recognition.
 
Elon stated that they use 80% of the total capacity now. Sign reading has moved from a "classic" computer vision task (OCR) to NNs. Then you need all kinds of signs etc. in a huge training database, adding to the total load. Maybe it would require more than the remaining 20%?

Even worse in the oversized-NN case: if they started with the ability to read signs, they may have then needed to remove it later to make room for driving functions.
 
I didn't catch this point in the previous 70+ pages here, but the statement Elon made about the FSD computer only using 5%, or 10% in redundant mode, for NoA sounds great; it does mean, though, that they really only have 45% or less to work with before maxing out this new hardware if they go for the safest option (redundancy).

We'll know more on Monday, but from everything I can imagine they need to do for FSD, never mind the unknown unknowns, I just wonder if that can all be done with the resources available. Just going to full frame resolution at a higher frame rate must take up a good chunk of the compute available compared to HW2.5.
 

Full frame size is only 60% more input pixels (currently 4 cameras at 100%, 4 at 25%). Doubling the frame rate on all cameras only puts things at 3.2x the current input size.
Only using 10% in redundant mode means they have 9 times the current NN size available/free (or 5% vs. 45%, if you prefer). Or 3x the doubled, full-frame-rate load. Basically, they could fit the current full NoA NN in the system three times over. That is a lot of room given what the current NN already handles.
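Spelling that arithmetic out, using only the figures from the post (the 4-at-100% / 4-at-25% camera split and the 5%/10% redundancy numbers are taken as stated; nothing else is assumed):

Code:
# Relative camera input today: 4 cameras at full resolution + 4 at quarter resolution.
current_input = 4 * 1.00 + 4 * 0.25        # 5.0 "camera units"
full_res_input = 8 * 1.00                  # all 8 cameras at full resolution

print(full_res_input / current_input)      # 1.6  -> 60% more input pixels
print(2 * full_res_input / current_input)  # 3.2  -> full resolution AND double frame rate

# NN headroom: if the current NoA networks use 10% of HW3 in redundant mode,
# the remaining 90% is 9x what they use today (equivalently 45% vs 5% per node).
print(0.90 / 0.10)                         # 9.0x free capacity
print(9.0 / 3.2)                           # ~2.8 -> roughly 3x even at full res and double rate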