Since it is the weekend, I'm typing out more...sorry if this is better in another thread...
TL;DR - Tesla is at the tip of the iceberg with integrating non-real-time ground-truth data to augment the real-time camera data, and I hope subsequent builds make huge progress toward eliminating hesitations, lateral jerk, ghosting, and lane-change issues.
Longer version...
I like solving problems and I'm hoping this helps as well...
Hesitations due to current real-time software pixel-space limitations can be solved with higher confidence once the planned path is no longer an issue. Higher confidence is achieved with *some* amount of non-real-time ground-truth (vector space) data of the next road segment at challenging areas. This is the way forward I think they are taking, and I think it will work and not reach a local maximum before solving all blocking challenging areas. It is analogous to driving with only the on-screen visualization vs. having a supposed 'super-human' memory (aka vector-space ground truth) of previous trips where you've navigated the same road segment before, augmenting the current visualization.
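To make the idea concrete, here is a minimal sketch of what "augmenting real-time confidence with a cached prior" could look like. Everything here is hypothetical (the function, weights, and threshold are my own illustration, not Tesla's actual implementation):

```python
# Hypothetical sketch: fuse live camera-derived confidence with a cached
# "ground truth" prior for the upcoming road segment, collected from
# previous fleet trips. All names and numbers are illustrative only.

def fused_confidence(live_conf: float, prior_conf: float,
                     prior_weight: float = 0.5) -> float:
    """Blend real-time perception confidence with a non-real-time map prior.

    live_conf:    confidence from the current pixel-space pipeline (0-1)
    prior_conf:   confidence of the cached vector-space segment data (0-1)
    prior_weight: how much to trust the prior vs. the live data
    """
    return (1 - prior_weight) * live_conf + prior_weight * prior_conf

# A hesitant live reading (0.55) at a previously mapped, high-confidence
# intersection (prior 0.95) clears a hypothetical go/no-go threshold:
threshold = 0.7
print(fused_confidence(0.55, 0.95))  # 0.75 -> proceed instead of hesitating
```

The point is just that even a modest prior from prior trips can push a borderline live reading over the planner's decision threshold, which is exactly where hesitations live.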
High lateral jerk (rapid, quick-succession steering wheel movements) at slow speeds (roughly 1 to 10 mph) is an issue I'm less certain how to fix. It is currently caused by the lane-centering code, which used to be based solely on Kalman smoothing/filtering, and it seems it is still done that way, as I'm noticing the exact same behavior as when I was there in 2015. This type of control method is great at higher speeds but totally falls over at slow speeds because the calculations never settle; the signal-to-noise ratio gets out of whack. At its worst, the steering wheel will move wildly for several seconds at slow speeds because it thinks it is off the center of the lane (i.e. off center from where the path planner is commanding it to be) by an inch or less. This has nothing to do with confidence in occupancy or path planning in general; it is simply a control-smoothing issue. I'd love to have a discussion with control engineers on how to make this better... The catch is that high jerk is genuinely needed to complete parking maneuvers and tight right/left turns, so you can't just clamp it globally.
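One classic way control engineers handle "never settles at low speed" is a speed-scheduled deadband: ignore sub-inch lateral errors when crawling, but keep full authority at speed. This is purely my own illustrative sketch (the deadband width, gain, and speed breakpoint are made up), not a claim about Tesla's controller:

```python
# Hypothetical sketch of taming low-speed steering chatter with a
# speed-scheduled deadband: sub-inch lateral errors are ignored while
# crawling instead of being chased by the controller. Illustrative only.

def steering_correction(lateral_error_m: float, speed_mph: float,
                        gain: float = 0.8) -> float:
    # Deadband widens as speed drops: tolerate up to ~3 cm of error near
    # 1 mph, shrinking linearly to zero tolerance at 10 mph and above.
    if speed_mph < 10.0:
        deadband_m = 0.03 * (10.0 - speed_mph) / 9.0
    else:
        deadband_m = 0.0
    if abs(lateral_error_m) <= deadband_m:
        return 0.0                      # settle: don't chase noise
    return gain * lateral_error_m       # simple proportional correction

print(steering_correction(0.02, 2.0))   # 0.0 -> 2 cm error ignored at 2 mph
print(steering_correction(0.02, 45.0))  # 0.016 -> same error corrected at speed
```

The trade-off the post identifies still bites, though: a deadband that suppresses chatter also delays the aggressive corrections needed mid-parking-maneuver, so in practice it would have to be gated by maneuver type, not speed alone.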
Lane-changing issues will get better with a different type of upcoming road-segment data being fed in (it seems the real-time system is now fed deterministic data about which lane it needs to be in, and the path planner acts on this with very high confidence, where it never did this before), and I see it doing better with the current build. It is worse in some areas, however (regressions always occur with big step changes), but I'm not worried. Most likely the trained model weights just need to be tweaked.
You got it! That does make sense in terms of the precision gained by mapping markers to vector space as opposed to pixel space. And it meshes well with the previous AI Day info we have regarding merging the multiple camera sources into a single vector space used by the NNs.
But your info above seems to be a description of what FSD does with the data once it has it. What I was trying to understand was your statement that: "Creep wall and Median box which is powered by non real-time data that was collected by normal FSD cars and turned into ground truth."
I read that as indicating that the creep wall and median box areas are based on data collected by previous cars (perhaps incorporated into some map-based data markers used by FSD cars?), rather than gleaned by the current car at the time it encounters that scene.
Are you saying that the creep wall and median box features appearing to be vector-space objects is evidence of previously collected data?
Or maybe I'm misunderstanding what you are trying to say.
(again, thanks for your insights)