
HW2.5 capabilities

What?! Oh, this is getting interesting. I wonder how they will integrate the NN, and whether it will be function dependent, whether it will switch, or whether it will be used to check itself (before it wrecks itself) while changing lanes. Super fascinating peek into how Tesla is attempting to approach the issue.
They'll just fuse inputs, I guess. They already fuse NN outputs and radar outputs. So just more fusing.
The NNs are separate; you can run them side by side as long as resources permit.
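
In pycaffe terms, "side by side" can literally be two Net objects driven from the same loop; the cost is just memory and compute. This is only an illustrative sketch with made-up file names, and it assumes each net's input blob is called "data":

Code:
import caffe
import numpy as np

caffe.set_mode_cpu()

# Hypothetical main-camera and repeater-camera networks.
main_net = caffe.Net("main_deploy.prototxt", "main.caffemodel", caffe.TEST)
side_net = caffe.Net("side_deploy.prototxt", "side.caffemodel", caffe.TEST)

for net in (main_net, side_net):
    # Feed a dummy frame shaped like the net's own input blob.
    frame = np.zeros(net.blobs["data"].data.shape[1:], dtype=np.float32)
    net.blobs["data"].data[0, ...] = frame
    outputs = net.forward()
    print(sorted(outputs.keys()))  # the output blobs that would get fused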
 
I am not an AI/NN expert; I just know some buzzwords, so that's the biggest block for me. ;)

Now, for the people who really know what they are doing, one problem is that the NN code is not generic caffe but a modified version, and the other problem is that we don't really know the exact image format the NN was trained on. It might be possible to intercept an image as it's being fed into the NN, but that's not super trivial. The model does not get unmodified images from the cameras.

Do you know if the prototxt changed?

When I play around with NNs I typically never change the prototxt, aside from a few minor things like when I'm combining models or need to change a parameter such as the input data size.

But, I'm constantly retraining the model after adding additional training data.

So it would be interesting to see if Tesla changed just the model or both the model and the prototxt. The former would likely be a minor improvement; the latter would be a lot more significant.
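
If someone pulls the vision files out of two firmware versions, that question is quick to answer mechanically: an identical prototxt plus a different caffemodel suggests a retrain on new data, while a changed prototxt points at an architecture change. A rough sketch, with invented file paths rather than anything actually found on the car:

Code:
import difflib
import hashlib

def sha1_of(path):
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

# Hypothetical locations of the extracted network files from two releases.
old_proto, new_proto = "17.34/deploy.prototxt", "17.40/deploy.prototxt"
old_model, new_model = "17.34/vision.caffemodel", "17.40/vision.caffemodel"

proto_changed = sha1_of(old_proto) != sha1_of(new_proto)
model_changed = sha1_of(old_model) != sha1_of(new_model)
print("prototxt (architecture) changed:", proto_changed)
print("caffemodel (weights) changed:   ", model_changed)

# If the prototxt did change, a plain text diff shows how much.
if proto_changed:
    with open(old_proto) as a, open(new_proto) as b:
        for line in difflib.unified_diff(a.readlines(), b.readlines(),
                                         old_proto, new_proto):
            print(line, end="")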

I'm not an expert on any of this stuff either. I know just enough that one of my bedrooms has a massive computer with four water-cooled NVIDIA GTX 1080 Tis, and I'm working on a Jetson TX2-based intelligent camera. I haven't done anything entirely useful with it so far, aside from making a silly card-recognizing blackjack game. I may have gotten a little ahead of myself on the computer.
 
May I ask you knowledgeable fellows what effect you think Karpathy has had on the recent firmware release pace of things? It seems to me that there has been a lot of work put into cleaning things up before moving forward.

I would guess that the clean-up work was Chris Lattner's legacy - in his resume he states: "I advocated for and drove a major rewrite of the deep net architecture in the vision stack, leading to significantly better precision, recall, and inference performance."

.40 is the first release with a new NN since June(?), which was when Lattner left, so probably Karpathy's debut.

EDIT: re-reading CL's resume, I noticed this statement, which I didn't really pay attention to the first time around:

I was closely involved with others in the broader Autopilot program, including future hardware support, legal, homologation, regulatory, marketing, etc.

We now know (some) of what the "future hardware" was. The rest seems overkill for autosteer and TACC. Is there an FSD dev team hiding in /etc?
 
They plan to have different NNs for the other cameras (they actually already have some, but haven't given them to us yet).
One for the wide, one for the sides and one for the repeaters. I guess one for the backup camera as well, eventually.

Makes sense at this stage to deliver certain piecemeal features/assists:

1) Front NN to drive AP1 parody (already there), later parity
2) Side-marker NN to Enhance AP for auto-lane changes
3) Front fisheye NN for rain sensing, traffic signs and possibly early "FSD only" features like traffic light sensing...
4) Backup camera NN could add to e.g. 2) or 3), though adding to 2) would not fit with the four camera EAP angle...

Whether or not they'll merge all this in a future FSD may be a completely different question.
 
This is likely the case. The code on the ape eventually reaches into a "MobilEyeConnector" object to emulate what Mobileye spits out.
That said I don't really know if there's a separate FSD team or not.


I think there are two color channels, and images from main and narrow are fed into the network separately. A bit hard to know for sure.

They sample the active cameras at 30 fps, so I imagine they feed the NNs at the same rate.
We have the full caffe layer model that @tedsk helped to dissect a bit.
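
If that extracted prototxt/caffemodel pair loads in stock pycaffe (a real "if", since the NN code is modified caffe and custom layers may not exist upstream), the blob shapes would settle the channel question directly. The file names here are placeholders, and the usual (batch, channels, height, width) caffe layout is assumed:

Code:
import caffe

caffe.set_mode_cpu()
# Placeholder names for the extracted network definition and weights.
net = caffe.Net("deploy.prototxt", "vision.caffemodel", caffe.TEST)

# Blob shapes are (batch, channels, height, width); the input blob's channel
# count would confirm or refute the "two color channels" guess.
for name, blob in net.blobs.items():
    print("%-30s %s" % (name, tuple(blob.data.shape)))

# Parameter counts per layer give a rough feel for where the capacity sits.
for name, params in net.params.items():
    total = sum(p.data.size for p in params)
    print("%-30s %12d parameters" % (name, total))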


Also on an unrelated note, 17.40 now creates disengagement reports for all disengagements. It includes very little data, though.

Here's a sample for steering and braking disengagements.

Code:
{
    "snapshot-version": "0.3",
    "wall-time": "1508360038397471024",
    "monotonic-time": "236198604704",
    "sha1": "e29b97f1b59845ad",
    "requester": "ap-diseng-brake-cancel",
    "faux-board-id": "c8875145-e3fe-493a-98b8-9b8ef4c28ddf",
    "request-clock-type": "monotonic",
    "request-trigger-time": "236198535491",
    "boot-sec-info": "0x00000002",
    "vehicle-type": "M3-RHD",
    "hardware-type": "HW2.5",
    "product-release": "2017.40.1 (develop-2017.40.1-73-e29b97f1b5)",
    "entries": {
        "gps": {
            "lon_deg": "-85.02612860",
            "lat_deg": "33.92129920",
            "horizontal_accuracy_sd_m": "0.74700000",
            "heading_of_motion_deg": "220.66665000",
            "heading_of_vehicle_deg": "220.66665000",
            "heading_accuracy_deg": "0.00747000",
            "ground_speed_mps": "14.43700000",
            "speed_accuracy_mps": "0.00600000"
        }
    }
}

Code:
{
    "snapshot-version": "0.3",
    "wall-time": "1508360332037477392",
    "monotonic-time": "529838611104",
    "sha1": "e29b97f1b59845ad",
    "requester": "ap-diseng-steering-cancel",
    "faux-board-id": "c8875145-e3fe-493a-98b8-9b8ef4c28ddf",
    "request-clock-type": "monotonic",
    "request-trigger-time": "529838521127",
    "boot-sec-info": "0x00000002",
    "vehicle-type": "M3-RHD",
    "hardware-type": "HW2.5",
    "product-release": "2017.40.1 (develop-2017.40.1-73-e29b97f1b5)",
    "entries": {
        "gps": {
            "lon_deg": "-85.06143680",
            "lat_deg": "33.91356490",
            "horizontal_accuracy_sd_m": "0.46500000",
            "heading_of_motion_deg": "220.51976000",
            "heading_of_vehicle_deg": "220.51976000",
            "heading_accuracy_deg": "0.00465000",
            "ground_speed_mps": "14.21500000",
            "speed_accuracy_mps": "0.00500000"
        }
    }
}
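
Since the snapshots are plain JSON, pulling the interesting fields out of a pile of them is trivial. A minimal sketch (the file names are invented; the field names come straight from the samples above):

Code:
import json

def summarize(path):
    with open(path) as f:
        snap = json.load(f)
    gps = snap["entries"]["gps"]
    return {
        "reason": snap["requester"],        # e.g. ap-diseng-brake-cancel
        "release": snap["product-release"],
        "lat": float(gps["lat_deg"]),
        "lon": float(gps["lon_deg"]),
        "speed_mph": float(gps["ground_speed_mps"]) * 2.23694,
        "gps_sd_m": float(gps["horizontal_accuracy_sd_m"]),
    }

# 14.215 m/s works out to roughly 32 mph, and gps_sd_m is the 0.465 m
# horizontal accuracy figure discussed below.
for path in ("diseng-brake.json", "diseng-steering.json"):
    print(summarize(path))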

Crashes on shutdown seem to be gone as well.

If I'm reading the field names right, one interesting tidbit in that dataset: the car believes it knows its location from GPS within 0.465 meters - about 18 inches.

That's a lot tighter than I've seen elsewhere, but rather loose to try to drive to in absence of other guides.
 
This makes so much sense... it's the T. rex from Jurassic Park (it can't see us if we aren't moving).

This isn't exactly true. You definitely get radar returns from stationary objects. The problem is that you get radar returns from everything. Raw radar returns are overwhelming and noisy. And because the spatial resolution of radar is not great, the radar can't reliably tell you whether a stationary object is actually in your path, off to the side of the road, or just a reflection of something that is really somewhere else. If you stopped for every stationary return that might be in your path, you'd pretty much stop constantly. So most vehicle radar systems, which are optimized for ACC following rather than self-driving, start by filtering out all stationary objects so they have a manageable amount of data left.

(You do actually have a pretty good ability to tell if a stationary object that's very close is in your path; the resolution is better up close. But by the time you're that close, it's too late to slow down if you're at highway speeds.)

Newer radar systems are getting much better at handling stationary returns. I think this is a combination of higher frequencies and more computing power on the radar itself. But radar is always going to be limited in its ability to separate meaningful stationary objects from "clutter". You need vision and/or LiDAR to do it reliably enough for self-driving (even L3).
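
To make the "filter out all stationary objects" step concrete, here's a toy illustration (the scenario, speeds and threshold are all invented): the radar only measures range rate relative to the car, so any return whose range rate is roughly minus the ego speed looks stationary, and a stopped truck in the lane is indistinguishable from an overhead sign by speed alone.

Code:
# Toy ACC-style clutter rejection: drop any return whose over-ground speed
# is near zero. All numbers are made up for illustration.
ego_speed_mps = 30.0            # from the vehicle speed sensor
returns = [
    {"id": "car ahead",     "range_m": 60.0, "range_rate_mps": -2.0},
    {"id": "overhead sign", "range_m": 90.0, "range_rate_mps": -30.0},
    {"id": "stopped truck", "range_m": 80.0, "range_rate_mps": -30.0},
]

STATIONARY_THRESH_MPS = 1.0

for r in returns:
    over_ground_speed = ego_speed_mps + r["range_rate_mps"]
    dropped = abs(over_ground_speed) < STATIONARY_THRESH_MPS
    verdict = "dropped as clutter" if dropped else "tracked"
    print("%-14s over-ground %5.1f m/s -> %s" % (r["id"], over_ground_speed, verdict))

# Note that the stopped truck in the lane and the harmless overhead sign
# look identical by speed alone, which is exactly the problem described.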

One thing I'd love to know, @verygreen, is whether Tesla is getting raw (or almost raw) returns from the radar in HW2.5, or only what the radar's on-board processing spits out. Knowing the data throughput of the interface would provide a hint about this... but I doubt they're getting raw returns, despite Musk's musings about that last year.

May I ask you knowledgeable fellows what effect you think Karpathy has had on the recent firmware release pace of things? It seems to me that there has been a lot of work put into cleaning things up before moving forward.

In my experience so far, incrementally, slightly better. Not massively better in any way.
 
If I'm reading the field names right, one interesting tidbit in that dataset: the car believes it knows its location from GPS within 0.465 meters - about 18 inches.

That's a lot tighter than I've seen elsewhere, but rather loose to try to drive to in absence of other guides.

Wouldn't read too much into that data ... the car also believes it is an RHD M3 located in the middle of a forest near Atlanta
 
(You do actually have a pretty good ability to tell if a stationary object that's very close is in your path; the resolution is better up close. But by the time you're that close, it's too late to slow down if you're at highway speeds.)

In your example, the object is not stationary. It has a negative speed relative to the radar head.

You have to remember that there is an inherent inaccuracy here for both radar and lidar. Both types of sensor report relative values, so the system also has to know the vehicle's speed and heading in order to determine the absolute speed and heading of an object.

The M8L chip that Tesla uses in AP2 is extremely accurate compared to older vehicle speed sensors, but it is still much less accurate than the data that the radar/lidar sensor is working with. IMO a good reason why vision has to have priority over sensing.
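
A quick back-of-the-envelope on that point, with purely illustrative error figures (the sigma values are not real spec numbers): the object's over-ground speed is the ego speed plus the radar's relative range rate, so the ego-speed error flows straight into the result.

Code:
import math

# Illustrative numbers only.
ego_speed, ego_sigma = 30.000, 0.05      # m/s, from GPS/odometry
range_rate, radar_sigma = -29.950, 0.01  # m/s, relative speed from the radar

obj_speed = ego_speed + range_rate
obj_sigma = math.sqrt(ego_sigma ** 2 + radar_sigma ** 2)

print("object over-ground speed = %+.3f +/- %.3f m/s" % (obj_speed, obj_sigma))
# With these numbers a truly stopped object can appear to creep at a few
# cm/s, so "stationary" is always a thresholded judgement call.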
 
Wouldn't read too much into that data ... the car also believes it is an RHD M3 located in the middle of a forest near Atlanta

I saw the Model 3, which really made me wonder, but missed the RHD part. When I stuck the coordinates into Google Maps, I got a location right next to a road ("Vinson Mountain Road," in GA as you said) that did run at roughly a 40/220 orientation, and ~32 mph looked perfectly reasonable for it.
 
<snip> May I ask you knowledgeable fellows what effect you think Karpathy has had on the recent firmware release pace of things?<snip>
Oh my, so I just barely got 2017.40.1 installed and ev-fw already lists 2017.41 c84dea9 as available. <snip>
Re: firmware release pace --- FYI, from a spreadsheet I recently created:
[attached image: spreadsheet of recent firmware releases]


I think that's a user error, since there is only 1 version of it.
I wouldn't think so, since TeslaFi has captured a couple of them (2017.41.6 6dc7353) as well.
 