AlexMasters
Member
I think there is currently a logic issue with sensor fusion and conflicts between sensors: radar vs. cameras, ultrasonics vs. cameras, etc. I think there will always need to be vision-based path detection, supported by ultrasonics for distance measurement to adjacent objects, such as an HGV directly alongside the vehicle, where a bounding box can't be established or distance/path calculations done. As an example, visual detection can't measure a change in separation distance quickly enough to ensure sufficient reaction time.

I do believe the move to '4D' visual processing and the integration/stitching of the camera views into a near-360-degree view will help enormously when it's ready. Moving to a time-series-based method of detecting object path and speed will also help. As I understand it, the cameras currently analyse the contents of the video frame by frame, but don't look at what has changed in the image between a series of frames to derive path information, other than in a very simple way. Looking at a single frame, you are relying on AP to correctly label vehicles, road markings and stationary objects in order to map their behaviours and plan accordingly. It is a complex problem indeed.
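To illustrate the single-frame vs. time-series point: a minimal sketch (all values and the frame rate are hypothetical, not Tesla's actual pipeline) of how differencing a short window of per-frame separation estimates yields a closing speed and time-to-contact, which one frame alone cannot provide.

```python
FRAME_INTERVAL_S = 1 / 36  # assumed camera frame rate of 36 fps


def closing_speed(distances_m, dt=FRAME_INTERVAL_S):
    """Average rate of change of separation (m/s) over a window of frames.

    Negative means the gap is closing. A single frame gives position only;
    it takes at least two frames to derive this rate.
    """
    if len(distances_m) < 2:
        return 0.0
    deltas = [b - a for a, b in zip(distances_m, distances_m[1:])]
    return sum(deltas) / (len(deltas) * dt)


def time_to_contact(distances_m, dt=FRAME_INTERVAL_S):
    """Seconds until separation reaches zero, or None if not closing."""
    v = closing_speed(distances_m, dt)
    if v >= 0:
        return None
    return distances_m[-1] / -v


# Example: an adjacent HGV drifting closer by ~5 cm per frame.
window = [1.50, 1.45, 1.40, 1.35, 1.30]
print(round(closing_speed(window), 2))    # closing at ~1.8 m/s
print(round(time_to_contact(window), 2))  # ~0.72 s to contact
```

In practice the per-frame distances would come from ultrasonics or stereo/structure-from-motion estimates, and a real system would smooth the noisy measurements (e.g. with a Kalman filter) rather than take raw differences, but the principle is the same: path and speed fall out of the change between frames, not from any one frame.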