I work in the field, but I hardly consider myself an expert. The vector along which I don't yet see the sort of progress that would enable a general-purpose AI to transition from Go to world domination is the machinery to translate from the software world into the physical world.
In a highly constrained environment like Go (or Chess, or many other games), with fixed rules and a stated objective to be optimized, the neural networks can learn faster than a human can. A LOT faster. Because a system can be built, by humans, that enables the idiot savant (a NN playing Go) to play games of Go in fractions of a second. Millions and billions of games, if you want to throw the compute at it, and learn from the aggregate view of those games.
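The loop described above - play huge numbers of fast, cheap games, then learn from the aggregate - can be sketched roughly like this. Everything here is a hypothetical stand-in (the "game," the "policy," and the "improve" step are toys, not a real Go engine or gradient update); a real system like AlphaGo adds tree search and learned value networks. The point is only that when the whole environment is software, each iteration costs microseconds:

```python
import random

def play_game(policy):
    # Stand-in for a full Go engine: a "game" is just a policy choosing
    # a move for each of 10 states, scored by summing the moves. This
    # keeps the sketch runnable; a real engine would simulate actual play.
    moves = [policy(state) for state in range(10)]
    return moves, sum(moves)  # (game record, outcome)

def improve(policy, records):
    # Stand-in for a learning update: adopt the moves from the
    # best-scoring game seen in this batch.
    best_moves, _ = max(records, key=lambda record: record[1])
    return lambda state: best_moves[state]

def self_play_training(policy, generations=5, games_per_generation=100):
    # The core loop: because the environment is pure software, each
    # "game" takes microseconds, so huge numbers of iterations are cheap.
    for _ in range(generations):
        records = [play_game(policy) for _ in range(games_per_generation)]
        policy = improve(policy, records)
    return policy

# Start from a policy that moves at random, and let self-play refine it.
trained = self_play_training(lambda state: random.randint(0, 9))
```

Nothing like this loop exists for "drive on an unmarked suburban street," because there is no software-only environment that plays out a real street millions of times per second - which is the constraint the rest of this comment is about.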
So as long as you can constrain the world that tightly, then yeah - the AI can develop outrageously fast.
Meanwhile, the AI algorithms that are driving our cars (or at least my Model X) get squirrely on country highways with <30mph corners, and on streets with no lane markers (like the suburb I live in). Do we have driving AI that is learning how to drive in those situations? I believe so. But I don't believe I'm in danger of downloading a patch to my Model X this year that will enable it to drive in those cases - still pretty easy for a human, yet still difficult for the AI.
And even if I do get full hands-off, autonomous driving on my Model X this year, at least for on-ramp to off-ramp freeway driving, that is STILL a chasm away from an AI that can manipulate and change the world, and decide to drive for world domination, or to end poverty, or to read the collected works of all humans for all time. Again. And again.
Because one of the other things we still don't have, anywhere I know of, is an AI that can decide for itself what matters, and then decide what outcome it wants from a decision that matters. What rule(s) does the AI follow, or what objectives does it set and optimize for, when nothing is provided?