The more I use Autopilot, the more I see the complexity of self-driving cars. Here's an example that would be impossible for a car to deal with correctly unless it could read human body language.
I was on Autopilot on a road with a 50/60 limit and a very wide lane; the car was in the middle of the lane, as you'd expect, with plenty of space on either side. Up ahead I could see a woman standing at the edge of the road, looking up and down it, waiting to cross. When she was 10-15 m in front of the car she stepped into the road, clearly intending to cross quickly behind me. As you'd expect, the car saw a pedestrian in its lane, the alarm went off, and it slammed on the brakes. This kind of edge case is extremely difficult to deal with.
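To make it concrete, here's the difference between the naive trigger the car seemed to use and the check a human driver effectively does in their head. This is just a toy Python sketch, nothing like actual Autopilot code; every function name, number, and threshold here is made up for illustration:

```python
# Toy sketch contrasting a naive "pedestrian is in my lane" trigger with a
# constant-velocity trajectory check. All names and numbers are invented.

def naive_brake(ped_lateral_offset: float, lane_half_width: float) -> bool:
    """Brake whenever the pedestrian is anywhere inside the lane."""
    return abs(ped_lateral_offset) < lane_half_width

def trajectory_brake(car_speed: float, gap: float,
                     ped_lateral_offset: float, ped_lateral_speed: float,
                     corridor_half_width: float = 1.1) -> bool:
    """Brake only if the pedestrian is predicted to still be inside the
    car's actual travel corridor when the car reaches her position.
    Speeds in m/s, distances in m; a negative lateral speed means she is
    moving toward the lane centre."""
    if car_speed <= 0:
        return False  # car is stationary, no arrival time to compute
    time_to_arrival = gap / car_speed
    # Where will she be, laterally, when the car gets there?
    predicted_offset = ped_lateral_offset + ped_lateral_speed * time_to_arrival
    return abs(predicted_offset) < corridor_half_width

# Roughly the scenario above: ~50 km/h (14 m/s), pedestrian 12 m ahead
# stepping in from the edge of a very wide lane (2.5 m from centre) and
# walking across at 1.4 m/s, aiming to pass behind the car.
print(naive_brake(2.5, 2.75))                   # True  -> alarm, hard braking
print(trajectory_brake(14.0, 12.0, 2.5, -1.4))  # False -> paths never overlap
```

Even the trajectory check only helps once she's actually moving, though; the really hard part is predicting the step before it happens, which is where body language comes in.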
A much more common problem will be pedestrian crossings. Are the two people standing there waiting to cross, or just having a chat? A human instinctively knows by looking at what the pedestrians are paying attention to, but getting a neural network to understand and interpret that correctly is much more difficult.
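If you wanted to hand-wave that judgement into code, it might look like a score over cues a perception stack could extract, such as body orientation and gaze. Purely hypothetical: real systems would learn this end to end, and the features and weights below are invented just to make the idea concrete:

```python
# Toy 0-1 score for "this person is about to cross", built from invented
# body-language cues. Nothing here reflects any real system.

def crossing_intent_score(facing_road: bool, head_turned_to_traffic: bool,
                          near_curb: bool, engaged_in_conversation: bool) -> float:
    score = 0.0
    score += 0.35 if facing_road else 0.0              # body squared up to the road
    score += 0.35 if head_turned_to_traffic else 0.0   # scanning for a gap
    score += 0.20 if near_curb else 0.0                # standing right at the edge
    score -= 0.30 if engaged_in_conversation else 0.0  # attention is elsewhere
    return max(0.0, min(1.0, score))

# Two people chatting at the crossing vs. one person actually waiting:
print(crossing_intent_score(False, False, True, True))  # 0.0 -> probably chatting
print(crossing_intent_score(True, True, True, False))   # 0.9 -> probably crossing
```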
Just food for thought.