Knightshade
Well-Known Member
My reasoning is that although the visualization detects the parked cars correctly, the car still plans a path that would crash into a parked vehicle, so the planner is clearly not drawing an explicit path through free space with a "software 1.0" planner. One way this could happen is if the visualization uses a different, unrelated network than the planner, in which case their outputs wouldn't necessarily agree. Since I know Dojo and end-to-end learning are currently in progress, I think the current planner is doing something akin to end-to-end, but maybe with a shorter clip of video as input. For example, the network could take the last 1-2 seconds of recorded video as input to the planner instead of the last 20 seconds. That would let them train the planner network without having Dojo, but the downside is that a good planner likely needs more context than the past 1-2 seconds.
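To make the "short window" idea concrete, here's a toy sketch of a planner that only ever sees the last couple of seconds of video. This is purely illustrative speculation, not Tesla's actual architecture: the frame rate, window length, and all names are my own assumptions, and a real planner would be a trained network, not a ring buffer with a stub.

```python
from collections import deque

FPS = 30                     # assumed camera frame rate (illustrative)
WINDOW_SECONDS = 2           # the "last 1-2 seconds of video" from the post
WINDOW_FRAMES = FPS * WINDOW_SECONDS

class ShortHorizonPlanner:
    """Toy stand-in for a learned planner that only sees a short clip."""

    def __init__(self, window_frames=WINDOW_FRAMES):
        # Ring buffer: frames older than the window fall off automatically.
        self.frames = deque(maxlen=window_frames)

    def observe(self, frame):
        self.frames.append(frame)

    def plan(self):
        # A real network would map the stacked frames to a trajectory;
        # here we just report how much temporal context the planner has.
        return {"context_seconds": len(self.frames) / FPS,
                "frames": len(self.frames)}

planner = ShortHorizonPlanner()
for t in range(5 * FPS):          # simulate 5 seconds of incoming video
    planner.observe(f"frame-{t}")

print(planner.plan())             # only the last 2 s of frames survive
```

The point of the sketch: no matter how long the car has been driving, the planner's input is capped at the window, which is exactly the limitation mentioned above.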
FWIW, according to green, even the FSD beta still uses conventional C++ code for everything other than perception.... planning in the sense of "make a best guess about how the road I can't see over that hill will curve" is perception too.... but the actual driving policy that decides what the car does is still done in conventional code.
There are 58 NNs total operating right now.... and most of it is the same code everyone is running... the only things that were "added" for the FSD beta are 4 more NNs and a whole bunch of conventional C++ code in a module named city_streets
(which is why the idea this was a "total rewrite" is....not terribly accurate... many of the NNs in the FSD beta are the most-current versions of ones that have been in use since 2018, and many more since 2019)
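For anyone unfamiliar with that split, here's a minimal sketch of what "NNs for perception, conventional code for policy" means in practice. Everything here is made up for illustration (the function names, the numbers, the rules); it's just the shape of the architecture green describes, written in Python rather than C++ for brevity.

```python
# Sketch of the split: neural nets handle perception (including guessing
# road geometry the cameras can't see yet), while the driving policy is
# explicit hand-written code. All names and thresholds are illustrative.

def perception_nn(camera_frames):
    # Stand-in for the perception networks: returns a world model,
    # including a *predicted* curvature for road beyond the hill.
    return {"lead_car_distance_m": 40.0, "predicted_curvature": 0.01}

def driving_policy(world):
    # "Software 1.0" policy: explicit, hand-tuned rules over NN output.
    if world["lead_car_distance_m"] < 20.0:
        return "brake"
    if abs(world["predicted_curvature"]) > 0.005:
        return "slow_for_curve"
    return "maintain_speed"

world = perception_nn(camera_frames=[])
print(driving_policy(world))      # -> slow_for_curve
```

The key property: swapping in better perception networks doesn't change the policy code at all, which is consistent with the claim that most of the beta's NNs are just newer versions of networks that have shipped since 2018/2019.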