The absence of this feature really confuses me. It's not a complex thing to do.
... kind of is, actually. Even setting aside that OCR is fairly CPU-intensive, you can't just have it reading any numbers you drive past as speed limits; it needs to recognize that something is a speed limit sign first, and only then OCR it. And recognizing a speed limit sign is not trivial (image recognition in general isn't). I'm not sure whether, if I were in their shoes, I'd try to hand-write an algorithm for sign recognition or rely on a neural net. I have a feeling that a hand-written algorithm would keep running into new failure cases that hadn't been encountered before. But neural nets are CPU-hungry black boxes: not only do they need a very large dataset of manually-labeled training data, but if something goes wrong, you have no way to tell why (just feed in more training data similar to the failure case and hope the ANN gets better...)
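The "detect first, then OCR" gating I'm describing can be sketched in a few lines. This is a toy illustration, not anyone's real pipeline: the region dicts, the shape-based check, and the stand-in OCR call are all made up for the example.

```python
# Toy two-stage pipeline: stage 1 decides whether a candidate region is a
# speed limit sign at all; stage 2 (OCR) only runs on regions that pass.
# All field names here are hypothetical.

def looks_like_speed_limit_sign(region):
    """Stage 1: cheap sign classifier. A real detector would use shape/color
    features or a trained model; here we fake it with a precomputed flag."""
    return region["is_circular_with_red_border"]  # e.g. EU-style sign shape

def read_digits(region):
    """Stage 2: OCR, run only on regions that passed stage 1."""
    return region["text"]  # stand-in for an actual (expensive) OCR call

def detect_speed_limit(candidate_regions):
    for region in candidate_regions:
        if looks_like_speed_limit_sign(region):
            return read_digits(region)
    return None  # no sign found; don't trust stray numbers

# A billboard with a big "50" on it must NOT be read as a speed limit:
scene = [
    {"text": "50", "is_circular_with_red_border": False},  # billboard
    {"text": "30", "is_circular_with_red_border": True},   # actual sign
]
print(detect_speed_limit(scene))  # prints 30, not the billboard's 50
```

The hard part, of course, is that in reality stage 1 is the difficult bit; here it's a one-line stub.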
They'll get it. But I'm not surprised that it - like every other aspect of trying to recreate AP1 - is taking a while. Tesla is taking on the most difficult version of the self-driving task: realtime adaptive road navigation (rather than the pre-computed maps Mercedes uses) with affordable, practical sensors (rather than the expensive, awkward LIDAR domes Waymo uses).