Most AI discussion seems to assume that a human or superhuman level of intelligence necessarily implies an entity that is person- or creature-like, with intentions and foresight.
We evolved to assume this. As both predators and prey, we've got to assume that anything acting intelligently has what amounts to intentions or a will that might be directed against our interests.
I think Google is a superhuman AI now, just without foresight or intentionality. A better and quite natural example is the biosphere and evolution by natural selection. A big advantage "Intelligent Design" advocates have is the evident fact that creatures clearly are products of a superhumanly intelligent designer. It's just a fact, and it gives ID advocates a huge head start. But all the evidence of biology points to that intelligent designer being the biosphere itself, acting as a massively parallel Natural Intelligence: fabulously intelligent, yet utterly without foresight or any aspect of personhood or intentionality.
The idea of vast intelligence without any hint of intentionality or foresight is kind of alien to the way we're naturally inclined to think about this.
I'm in the camp that thinks any AGI that is intentional and person-like will result from uploaded human minds well before we can reverse-engineer enough abstractions about how our brains work to code a fully artificial mind de novo. It seems inherently easier, by brute-force technical methods, to copy the neural pattern of an existing human brain into software than to understand it well enough to recreate its functions in the abstract from scratch.
I don't see much evidence of AI-coded "organisms" with foresight and intentionality even at the level of simple biology, in part because we just aren't motivated to write "wild" code that has its own interests and is out of our control.
This doesn't really make the prospect less scary, just more like "Transcendence" and less like "Skynet". It's hard to say whether uploaded human minds would be any less dangerous given access to superhuman speed and resources.
Another implication is that genuinely intentional, organism-like software minds need not be at human or superhuman level to cause lots of problems. We find dealing with rats and insects pretty hard in the real world, despite being far more "intelligent" in the sense being discussed. The software equivalent of rats or insects running wild in the networks could be pretty troublesome too.
The good side of that: even if AGIs are to humans as humans are to rats, they still might have trouble eradicating us.