Oh wow, you made me go down the rabbit hole of a quick re-read of I, Robot. In each short story, Asimov postulates some problem with a particular type of robot and what effect it would have. It is indeed an interesting exploration of the issues we might have with robots once we introduce AI-powered ones into general society.
I was born 40 years too early. All my life I've fantasized about being a robot psychologist. That may very well end up being a real profession soonish.
Thing is.. in the Asimov stories, there's but one company that makes positronic brains, which are the one and only "thinking" machines that can be placed in a robot: U.S. Robotics. The stories allude to the idea that the design and build of these brains is too high-tech for a third party to come up with on its own, period. And that one company and its successors keep the three laws (or maybe four, counting the zeroth law) in all the robots.
Under those circumstances, the robots tend to be, well, friendly. Which is cool.
Except we live in a world where any fool can put together neural networks or ChatGPT-like things that are.. getting.. rather close to sentience. And while some companies are making attempts to build something that looks vaguely like the three laws into their products, there's plenty who don't give a hoot: There's Money To Be Made, there's a land-rush mentality, and so we're all going to find out the hard way whether these new technologies will work with us without getting us all killed. Either that, or the entities running the planet won't be humans any more; the ones doing it will be 'way more intelligent than us. It's nice to think that superintelligences would be nice to humans, Iain Banks style, but there are no guarantees. And no working guardrails.
And, even beyond the question of niceness or not, there's the fundamental bit about whether hyperintelligent entities would even be
sane. There are plenty of social species on the face of the planet. Us, for example; ants, bats, bees, deer, and so on as well. In a food-scarce environment, with significant resources devoted to reproduction (hello, Darwin!), the species that didn't socialize in one form or another died out over the eons. Hyperintelligent entities don't have that built-in, evolution-forged behavior. So what these entities might do might not make any sense from
any point of view, because they don't have guardrails. Blow up the world because one doesn't see why not?
People have thought about this. One thought revolves around the Fermi paradox. Humans have been around for roughly 100,000 years. Say there's an intelligent species out there with a million idle years on its hands. Assuming they would travel, they'd be all over the galaxy by now. So, Fermi asked: "Where is everybody?"
One thought people have had is that, once a biological species reaches a certain level, they invent computers, make them smarter.. and then the whole thing falls down, for the scary reasons given above.
Hang onto your hats, it's going to be a bumpy decade.