
Tesla Autopilot HW3

I believe the reason is that Mobileye has patented speed sign recognition.

I doubt it. Patents specify the exact method, not the general outcome, and Mobileye's way of recognizing speed signs is different from what we'd expect of Tesla's. More likely they aren't reliable with it yet, and/or their approach needs much more compute than they have available/left on HW2/2.5...
 
From a quick read, this appears to cover the process @verygreen described:
Initial image where a sign is detected
Subimage created and passed to a sign-processing routine.

If so, Tesla needs to wait until their NN does everything sign-related off the initial image in one pass (see the sketch below the claim text).
A method for detecting and identifying a traffic sign in a computerized system mounted on a moving vehicle. The system includes a camera mounted on the moving vehicle. The camera captures in real time multiple image frames of the environment in the field of view of the camera and transfers the image frames to an image processor. The processor is programmed for performing the detection of the traffic sign and for performing another driver assistance function. The image frames are partitioned by the image processor into a first portion of the image frames for the detection of the traffic sign and into a second portion of the image frames for the other driver assistance function. Upon detecting an image suspected to be of the traffic sign in at least one of said image frames of the first portion, the image is tracked in at least one of the image frames of the second portion.
1. A method of identifying a traffic sign using a computerized system mounted on a moving vehicle, the system including a camera mounted on the moving vehicle, wherein the camera captures in real time a plurality of image frames of the environment in the field of view of the camera and transfers the image frames to an image processor, the method comprising the steps of:
(a) programming the processor for performing the detecting of the traffic sign and for performing another driver assistance function;
(b) first partitioning a first portion of the image frames to the image processor for the detecting of the traffic sign and second partitioning a second portion of the image frames for said other driver assistance function; and
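If I'm reading the claim right, here's a minimal sketch of the two flows in Python. Everything here is a hypothetical stand-in for illustration, not Tesla's or Mobileye's actual code:

```python
import numpy as np

# Hypothetical stand-ins; nothing here is Tesla's or Mobileye's actual code.
def detect_sign_candidates(frame):
    """Stub detector: (x, y, w, h) boxes of suspected signs in the full frame."""
    return [(100, 50, 32, 32)]

def classify_speed_sign(crop):
    """Stub classifier: speed limit read from a cropped sign subimage."""
    return 60

def sign_network(frame):
    """Stub end-to-end network: (box, speed_limit) pairs from one forward pass."""
    return [((100, 50, 32, 32), 60)]

def two_stage_pipeline(frame):
    """Patent-style flow: detect a suspected sign, cut out a subimage,
    and hand it to a dedicated sign-processing routine."""
    limits = []
    for (x, y, w, h) in detect_sign_candidates(frame):
        crop = frame[y:y + h, x:x + w]           # subimage for the sign routine
        limits.append(classify_speed_sign(crop))
    return limits

def one_pass_pipeline(frame):
    """One-pass flow: a single network call on the full frame returns sign
    class and location together, with no separate crop-and-classify step."""
    return [limit for (_box, limit) in sign_network(frame)]

frame = np.zeros((960, 1280), dtype=np.uint8)    # dummy camera frame
print(two_stage_pipeline(frame), one_pass_pipeline(frame))
```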
 
They downsample all the cameras 4:1 (and the cameras are pretty low-res to begin with). That's probably why they can't read speed limit signs. With HW3 they won't be downsampling the camera data.
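Back-of-the-envelope numbers make the point. All values below are assumptions (sign size, field of view, sensor width, and reading 4:1 as half resolution per axis), not Tesla's actual specs:

```python
import math

sign_width_m = 0.76      # typical speed-limit sign width (assumed)
hfov_deg = 50.0          # assumed horizontal field of view
sensor_width_px = 1280   # assumed native horizontal resolution

def sign_pixels(distance_m, width_px):
    """Horizontal pixels the sign spans at a given distance (pinhole model)."""
    angle = 2 * math.atan(sign_width_m / (2 * distance_m))
    return width_px * math.degrees(angle) / hfov_deg

for dist in (30, 60):
    full = sign_pixels(dist, sensor_width_px)
    down = sign_pixels(dist, sensor_width_px // 2)  # 4:1 = half res per axis
    print(f"{dist} m: {full:.0f} px native, {down:.0f} px downsampled")
```

With these assumed numbers a sign at 60 m spans roughly 19 pixels natively but only about 9 after downsampling, which is not much to read two digits from.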

This is a really great point, and the conclusion from this is that they cannot deliver reliable EAP (according to the original definition) on HW2.x. So, I'm sure Tesla will be upgrading all those EAP purchasers to HW3, right?

There's a regression in 2019.8 where at a certain spot on my morning commute it believes the speed limit (on the highway) is suddenly 25mph and slams on the brakes. This is dangerous and Tesla ought to fix it... but I am on HW2.5 and did not pay for FSD... so how are they gonna fix it exactly?
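For illustration only: the kind of plausibility gate one might hope for before the planner acts on a detected limit change. Everything below (the thresholds, the highway floor, the frame count) is invented, not anything Tesla is known to run:

```python
from collections import deque

class LimitFilter:
    """Debounce plus plausibility gate for detected speed limits.
    The thresholds and the 'highway floor' rule are invented for illustration."""

    def __init__(self, frames_required=5):
        self.recent = deque(maxlen=frames_required)

    def update(self, detected_kph, current_kph, on_highway):
        # A 25 mph (~40 km/h) reading on a highway is almost certainly a misread.
        if on_highway and detected_kph < 60:
            return current_kph
        self.recent.append(detected_kph)
        # Only act once several consecutive frames agree on the new limit.
        if len(self.recent) == self.recent.maxlen and len(set(self.recent)) == 1:
            return detected_kph
        return current_kph

f = LimitFilter()
print([f.update(40, 100, on_highway=True) for _ in range(6)])  # gated: stays 100
g = LimitFilter()
print([g.update(80, 100, on_highway=True) for _ in range(6)])  # accepted on frame 5
```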
 
Add more training data to the network. I assume you have sent them a bug report?
 

Hahahahahaha

So first of all, yes, I did hit the bug report... eventually. There's another regression where the bug report itself takes several minutes to come up. This morning it took me three tries; I was on a completely different part of the highway by the time it worked.

But more importantly: have you tried, at any time in the past few months, to get them to pay attention to a bug report? The phone support isn't what it used to be. Give it a try some time. The only way to get anybody to pay attention is if the service center believes there is a hardware problem with your car; in that case, the bug report gives them a timestamp they can use to pull the logs. But for reporting a software problem? Forget it.
 
Forget the speed-limit bug reports; nothing will happen. It's not even learning from the fleet.

For the past four years, I've driven a Model S past my city limit, where the 50 km/h zone ends and a sign says 60 km/h.
AP1 read the sign and drove 60 km/h.
AP2, even after two years and tons of Teslas driving past that very spot (a critical choke point for getting out of the city), still jumps to 100 km/h, even though I bet no Tesla there exceeds 75 km/h; most, like everybody else, stay around 65-70 km/h.

It's appalling and dangerous. Not as dangerous as suddenly braking down to 40 km/h on a 100 km/h stretch of highway (which also happens), but still.

And it's the last feature missing for AP2 feature parity with AP1.
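The fleet observation above suggests an obvious cross-check. A minimal sketch of the idea in Python; the percentile, the candidate limits, and the "largest limit at or below" rule are all invented for illustration:

```python
import statistics

def fleet_implied_limit(fleet_speeds_kph, candidates=(30, 50, 60, 80, 100)):
    """Largest posted-limit candidate at or below the fleet's 85th-percentile
    speed. Drivers typically run slightly over the limit, so the percentile
    sits a bit above the true limit. All cutoffs here are invented."""
    p85 = statistics.quantiles(fleet_speeds_kph, n=20)[16]  # ~85th percentile
    return max(lim for lim in candidates if lim <= p85)

# A fleet hovering around 65-70 km/h past the sign implies 60, not 100.
speeds = [64, 66, 68, 70, 65, 72, 67, 69, 71, 66]
print(fleet_implied_limit(speeds))   # -> 60
```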
 
I suspect the limit is the hardware, and hope that the development version of the software running on HW3 will address these issues when released.
Elon makes some bold statements in this video. [embedded video]
Though I fully recognize he tends to be overly optimistic. I am looking forward to the April 22 announcements.
 

The spec in that patent (quoted above) does not define "driver assistance function", so claim construction is needed to know whether it would even apply to an autonomous-vehicle operation that provides zero assistance to a driver (driverless). So perhaps Tesla could sidestep this issue entirely if they only allow image recognition for FSD.

But it could be argued that FSD is a driver-assistance system, and then the Doctrine of Equivalents comes into play.

I'm skeptical this patent is really the issue, or that a single patent would stop Tesla from doing anything Elon wants. They could just negotiate a license (bad blood, but possible) or innovate around it as you indicate (though I think Claim 1 is written to encompass everything Tesla would use sign recognition for, in both L2 and perhaps FSD).
 
By the way, I meant to ask you: when you get HW3 in your car and it's running the new NN for the FSD software, will you give us info on what it sees and how HW3 FSD compares to the current AP2 software? Thanks.
So far Tesla refuses to sell the HW3 unit to me, so who knows when I'll be able to get my hands on one and actually break into it.

But I certainly hope I'd be able to do this!
 
Hi,

My proposal for the architecture:

[attached: proposed architecture diagram]


Main idea: both the SoC and the NN processor are based on similar IP, and Samsung is the fab for both chips.

As drawn, the architecture looks similar to NVIDIA's.

Cheers!
 
@verygreen A lot of people are having issues with Spotify skipping on the latest firmware, and we've put it down to overloading the media CPU with map drawing, since the skipping goes away if you replace the map with a sketchpad. But wondering how they made such a fundamental error, I started to suspect the maps have gotten more detailed recently, since many people have been receiving map updates. Do you know if the map resolution has changed recently?
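Not an answer to the resolution question, but for anyone with shell access who wants to test the contention theory, a minimal sketch of the measurement. The process-name fragments are guesses, not Tesla's real binaries, and psutil is assumed available:

```python
import time
import psutil

def cpu_share(name_fragment, interval=1.0):
    """Rough total CPU% across processes whose name contains the fragment.
    The process names passed in below are guesses, not Tesla's real binaries."""
    procs = [p for p in psutil.process_iter(['name'])
             if name_fragment in (p.info['name'] or '')]
    for p in procs:
        p.cpu_percent(None)              # prime the per-process counters
    time.sleep(interval)
    return sum(p.cpu_percent(None) for p in procs)

# If the map renderer pegs the CPU whenever the map is on screen while the
# audio process starves, the contention theory holds.
print("maps:", cpu_share("maps"), "audio:", cpu_share("audio"))
```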