
HW2.5 capabilities

The NN obviously is trained to recognize certain things, one of which is "construction zone", I guess. They then request snapshots matching those labels.
The snapshot request here looks like this:
Code:
{"query":{"$and":[{"$eq":["@LegacyDebug.gear",4]},
{"$gt":["@TelemetryOutput.distance_travelled_m",2000]},
{"$gt":["@LegacyDebug.veh_speed_mps",5]},{"$lt":["@LegacyDebug.veh_speed_mps",80]},
{"$gt":[{"$sum":[{"$labelboost":"$BIG_4K_of_data","version":"0.1","camera":"main"},
-0.012629894464]},0.5]},{"$gt":["@VisionImageEmbeddings.main.timestamp_ns",0]}]}

Dear lord, they’ve implemented lisp in json...

(Or as Bones would say, “Dammit Jim, I’m a CAR, not a CDR!”)
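For fun, here's roughly how such an expression tree could be walked. This is a minimal sketch of an evaluator, entirely my own guesswork rather than anything from the firmware, and it skips $labelboost since that one matches against the opaque 4K blob:

Code:
def evaluate(expr, signals):
    """Recursively evaluate a JSON-style expression against current signal values."""
    if isinstance(expr, str) and expr.startswith("@"):
        return signals[expr[1:]]              # "@Foo.bar" -> look up signal value
    if isinstance(expr, dict):
        op, args = next(iter(expr.items()))   # one operator per node
        vals = [evaluate(a, signals) for a in args]
        if op == "$and": return all(vals)
        if op == "$eq":  return vals[0] == vals[1]
        if op == "$gt":  return vals[0] > vals[1]
        if op == "$lt":  return vals[0] < vals[1]
        if op == "$sum": return sum(vals)
        raise ValueError("unknown operator: " + op)
    return expr                               # bare number / literal

signals = {"LegacyDebug.gear": 4,
           "LegacyDebug.veh_speed_mps": 20,
           "TelemetryOutput.distance_travelled_m": 5000}
query = {"$and": [{"$eq": ["@LegacyDebug.gear", 4]},
                  {"$gt": ["@TelemetryOutput.distance_travelled_m", 2000]},
                  {"$gt": ["@LegacyDebug.veh_speed_mps", 5]},
                  {"$lt": ["@LegacyDebug.veh_speed_mps", 80]}]}
print(evaluate(query, signals))   # True -> this trigger would fire

Presumably the point of a little DSL like this is that the mothership can push down arbitrary new trigger conditions without shipping new firmware.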
 
That's really fascinating. The B pillar cams sure seem to be the best ones for determining if your car is currently centered in your lane or not. The main camera looks pretty far ahead, so this could potentially be for validation that the new control algorithm is performing acceptably.

In my earlier checking, only the side repeaters had the FoV to see lane markings; the B pillars do not see low enough. See this image I made from snapshots, where the side repeater and B-pillar images are stitched into a 360° view(ish)...

[Image: ap2_blindspots.jpg]


I wonder if not using fisheye yet is also the reason why .42 seems to disengage for me on very wide lanes?

How is .40/.42 AP2 doing for you? #32
 
Couple of quick questions @verygreen:

1. Considering that the snaps are called "lb-xxx..", is there any connection between these and the mysterious "lb"-node? (Remember the speculation back in the 2.0-thread about that?)
No, I don't think those two lbs are related; here lb is clearly shorthand for "label".
BTW, since then I found that lb is more than just GPS; it's more like a gateway on the CID. It manages some security keys, for example, controls the embedded ethernet switch, and so on.

2. Can you see anything resembling a label for blinkers / side markers on cars ahead of you?
There are no human-readable labels that I can see. It's just that the triggers had human-readable names, while the actual match pattern was a 4K binary blob. So short of figuring out how to generate my own blobs like that, making one trigger, and then guessing what triggered it, there's no way to see any other labels.

Or I guess they might push down some more triggers; that would be nice too.
 
End-to-end is a fast way to create a demo, but it is not a viable option for production use. Amnon Shashua has good lectures on this.

Agreed.

I guess I'll post here too.
The latest snapshotting logic shows us that they actually have some quite advanced labels.
A snapshot trigger requesting "label: construction" fired for my car today and created this image:

[Image: aQcVLmM.jpg]


There's a bunch of other labels requested like "Confusing lanes", "slope up/down", "barrier" and so on.

I guess I have an urgent need to drive around in strange places to see what else I can trigger ;)

Wow this is awesome!

They only send the main one for the label requests.
The curve fitting now sends 4 cameras: main, fisheye, and both B pillars.

What is the curve fitting?
So it's using all these 4 cameras you mention and doing some curve fitting for something?
 
Curve fitting is what they apparently do for path planning.

In this case, when they detect the steering wheel turned past a certain angle while above a certain speed, they take a snapshot from the 4 cameras in question with some pretty low probability ("sample-likelihood":0.002, but they evaluate it 1000 times a second).
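In other words, something like this. A toy sketch: the 0.002 likelihood and the 1 kHz evaluation rate come straight from the trigger, while the thresholds and camera names are my own stand-ins:

Code:
import random

SAMPLE_LIKELIHOOD = 0.002     # from the trigger config
EVAL_RATE_HZ = 1000           # condition is evaluated 1000 times per second

def capture_cameras(cams):
    print("snapshot from:", ", ".join(cams))    # stand-in for the real capture

def maybe_snapshot(steering_angle_deg, speed_mps):
    # hypothetical thresholds -- the real values live inside the trigger
    if abs(steering_angle_deg) > 45 and speed_mps > 10:
        if random.random() < SAMPLE_LIKELIHOOD:
            capture_cameras(["main", "fisheye", "left B pillar", "right B pillar"])

# Simulate one second of evaluation during a sustained sharp turn:
# expected snapshots = 1000 * 0.002 = ~2 per second while the condition holds.
for _ in range(EVAL_RATE_HZ):
    maybe_snapshot(steering_angle_deg=60, speed_mps=15)

So even though the check runs at 1 kHz, the low likelihood means a couple of snapshots per second at most, and only while you're actually turning hard.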
 
1000 times per second seems a bit excessive, considering the frame rate of the cameras, the update rate of other sensors and the speed at which the direction of the car can change.
 
1 kHz is only ~3 cm per sample at 120 km/h, so it seems a bit excessive. I think around 60 Hz seems most reasonable; that's around 0.5 meters travelled per frame at 120 km/h.

A control loop should not be so closely tied to a single frame of input, and thus should not require a high frame rate. That could cause erratic behavior when disturbed by rain, sunlight, reflections, bad lane markings, etc. Better that EAP/FSD makes a longer-term driving plan based on what it can see ahead, built from multiple frames.
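Quick sanity check on those numbers:

Code:
def metres_per_sample(speed_kmh, rate_hz):
    return (speed_kmh / 3.6) / rate_hz        # km/h -> m/s, then per sample

print(metres_per_sample(120, 1000))   # ~0.033 m: 3.3 cm per evaluation at 1 kHz
print(metres_per_sample(120, 60))     # ~0.56 m: about half a metre per frame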
 
Might explain some of the erratic behaviour we are seeing. All inputs and decisions need hysteresis, but you have to weight them differently. It's hard to strike a balance.
 
I see the point that making steering corrections based on a 1ms snapshot could cause erratic steering. But aren't we just talking about the decision to take a snapshot to send back to Tesla? Why would the frequency with which you "roll a die" to see if you should take a snapshot be linked to erratic steering behavior?