Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

V10 NNs

Oops, should have tagged @jimmy_d as well... =)

I received a copy of the V10 networks but haven't had time to take them apart yet. My own experience is that V10's behavior is improved in some significant ways and I'm curious to see if this means that there have been changes in the nets that can be interpreted.

It's worth mentioning that, because the only thing we can really see is the network architecture, we can get some insight into the feature set of the neural networks, the nature of their I/O, and a sense of the upper bound of their capacity, but that's about all. Very sophisticated behavior can come out of very simple networks given the right training, and my read is that Tesla is nowhere near plumbing the limit of the network capacity they have right now, so there's a good chance that we won't be able to see the source of any perception improvements. If planning systems are being migrated into the NNs, then we might be able to see some of that, because that would need architectural changes.

All the networks that we've seen in actual use have been pretty straightforward evolutionary extensions of earlier networks and I'd expect that to continue until the enormously increased capacity of HW3 starts being used in earnest. Karpathy has said some things that imply they are still finding ways to improve the training regimen and that mostly doesn't require any changes to the network architecture.

To really take advantage of HW3 the networks will have to change and those changes will be exciting to see.
 
Thanks. I’m still hopeful they do have this greatly enhanced version for HW3 that they will soon roll out. Doubt there is any hint of it there yet, but since I upgraded to HW3 I of course want it now. Lol

Looking forward to your insights as you have time and fingers crossed they start sneaking in some HW3 specific stuff sooner rather than later.
 
I am also eagerly waiting to hear a report when you examine the architectural changes @jimmy_d

But I am even more curious about any other software changes. If @verygreen or anyone else poking around in new firmwares sees any new names of modules, functions, log entries, etc...

Curious what they are planning for initial FSD features... anything that implies whether they plan on releasing NoA for city streets all at once, or releasing features more gradually, e.g. first stopping before intersections, then continuing through them, then protected right turns, and so on.

I figure stop sign alert ought to be released soon.

And I also figure an early access release of stopping at red lights ought to be coming very soon. Unless they plan on waiting and releasing NoA on city streets all at once, in which case I figure it will take a while longer.

@verygreen

The "CityStreetsBehavior" "code" you shared: what is it exactly? A configuration parameter? A Boolean? Is citystreetsbehavior the name of an unused code library, or just a symbol you are seeing show up in executable binaries?
 
The "CityStreetsBehavior" "code" you shared: what is it exactly? A configuration parameter? A Boolean? Is citystreetsbehavior the name of an unused code library, or just a symbol you are seeing show up in executable binaries?
It's a task name in the list of possible tasks, plus some debug stuff.

There's also a reference to it in startup scripts:
Code:
    # start city streets behavior if exists (only on hw3)
    if [ -d /service/city-streets-behavior ] ; then
        sv once /service/city-streets-behavior
    fi

But the task itself is missing (even on HW3).
 
Not directly related to the OP, but this thread seems like the right area:

Can someone explain what to me feels like a huge missing element in the AP / FSD model?

Raven S, 10.2.1, HW3.

Compared with my own driving style, Tesla automated driving feels as though it is very literal and shallow. The car will suddenly decide that an object or road characteristic has magically appeared from nowhere, without prior evidence. When I'm driving unaided, I imagine a probability / certainty / likelihood 'mask' that is constantly changing. When I suspect decreasing certainty regarding a potentially critical aspect of the road ahead, I slow down.

Having the car so keen to drive right up to the posted speed limit, based only on a short view of the road ahead, makes for a white-knuckle experience!

As the driver, I feel I need confidence indicators to let me know how solid the car is in its current environment and how vigilant I need to be:

Long / medium / short range confidence.
Route confidence.
System internal confidence (compromised sensors / actuators).

Without this information, especially when I can be pretty certain the AP systems will already be compromised by poor weather or road surface, it is impossible to decide at what point to take over control.

If the car perceives lower certainty at long / medium range, then it should slow down or at least give the driver warning - somewhat along the lines of a CPU load indication.
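The slow-down rule described here can be sketched in a few lines. Everything below (function names, thresholds, the confidence values themselves) is invented purely for illustration; it is not anything from Tesla's actual software:

```python
# Hypothetical sketch: gate the target speed on the weakest perception
# confidence over the look-ahead horizon, so reduced long-range
# certainty directly translates into a lower speed.

def target_speed(speed_limit_mph, horizon_confidences, floor_mph=20.0):
    """Scale the target speed by the lowest confidence in the look-ahead.

    horizon_confidences: perception confidences in [0, 1], ordered
    from near range to long range (values here are made up).
    """
    worst = min(horizon_confidences)
    # Never exceed the posted limit; never crawl below a sane floor.
    return max(floor_mph, min(speed_limit_mph, speed_limit_mph * worst))

print(target_speed(60, [1.0, 1.0, 1.0]))    # clear view: 60
print(target_speed(60, [1.0, 0.5, 0.75]))   # fog at mid range: 30.0
print(target_speed(60, [0.25, 0.5]))        # badly degraded: floor, 20.0
```

Taking the minimum (rather than the mean) reflects the intuition in the post: one low-certainty region ahead should be enough to slow the car down.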

Also, repeated driver inputs, such as changing road positioning, should be capable of slightly biasing the car's behaviour. If the car knows it has reduced confidence in the near-side lane marking, then the driver should be able to trim the positioning accordingly.

Many small roads in Europe have a 60 mph limit, even country lanes. The car's determination to drive at the speed limit whenever its rather short-range view suggests it is possible makes for a completely worthless and dangerous algorithm.

Having random cones and other elements repeatedly and suddenly appear and disappear on the IC display suggests there is no averaging of "confidence"; the NNs are quite prepared to tell me that objects are appearing from nowhere. Reflections (from glass surfaces or wet roads) seem especially good at generating phantom objects.
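The "averaging" missing from the display could be as simple as temporal smoothing of each tracked object's confidence, e.g. an exponential moving average: a reflection that fires for a single frame never reaches the display threshold, while a persistently detected object ramps up and stays shown. A generic sketch, with all names and numbers assumed rather than observed in any firmware:

```python
class SmoothedDetection:
    """Exponentially smoothed confidence for one tracked object.

    A single-frame phantom (reflection, wet road) produces a brief
    spike that the smoothing keeps below the display threshold.
    """

    def __init__(self, alpha=0.3, show_threshold=0.5):
        self.alpha = alpha                  # 0..1; higher reacts faster
        self.show_threshold = show_threshold
        self.value = 0.0                    # smoothed confidence

    def update(self, raw_confidence):
        # Exponential moving average of the raw per-frame confidence.
        self.value = self.alpha * raw_confidence + (1 - self.alpha) * self.value
        return self.value

    def visible(self):
        return self.value >= self.show_threshold

det = SmoothedDetection()
det.update(0.9)           # one-frame phantom at raw confidence 0.9
print(det.visible())      # False: smoothed value is only ~0.27

for _ in range(5):        # the same object detected frame after frame
    det.update(0.9)
print(det.visible())      # True: persistent evidence crosses 0.5
```

The trade-off is latency: a genuinely new object also takes a few frames to appear, which is why trackers usually pair smoothing with a fast-attack / slow-decay asymmetry.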

Do NNs intrinsically, and therefore automatically, deal with 'confidence'? Are they biased towards self-confidence or self-doubt? Is there a way of tapping into the NN to see its stress / confidence level dynamically?
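To the first question: classification networks do emit per-class scores (typically via a softmax) that are commonly read as confidence, but raw softmax scores are known to be poorly calibrated, usually overconfident, which is one reason per-frame detections can look so certain and still flicker. Temperature scaling is the standard post-hoc calibration knob. A minimal sketch with made-up scores (nothing here is from Tesla's networks):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw network outputs (logits) to pseudo-probabilities.

    A temperature > 1 softens the distribution; raw (T=1) softmax
    scores tend to overstate how sure the network really is.
    """
    z = [l / temperature for l in logits]
    m = max(z)                          # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for cone / shadow / nothing:
logits = [4.0, 1.0, 0.5]
print(max(softmax(logits)))                     # ~0.93: looks very "confident"
print(max(softmax(logits, temperature=3.0)))    # ~0.60 after softening
```

So the network is "biased towards self-confidence" by default; honest uncertainty has to be engineered in (calibration, ensembles, temporal smoothing) rather than read straight off the output layer.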