To answer the original question posed in this thread, one has to keep in mind what purpose the AI serves, how capable it is of moving in the physical world, what it is packaged in, and what execution power it has without human control. These parameters determine the risk of harm and the measures adequate to control that risk.
When I think of AI, I think of robots, as I work around them and on them. Each robot in operation requires physical safety barriers around its working area to protect individuals nearby. Robots fail: sensors get fogged, mechanical parts wear out, and programming bugs and damaged wiring cause havoc. Perhaps future robots will be better able to distinguish between a dirty sensor and an engaged sensor, and will be able to self-clean; a rough sketch of what such a check might look like follows below. Until they do, there is a risk of injury or collision, and that risk requires adequate control measures. We do not abandon these beautiful, powerful and useful machines because someone might get hurt; we control the risk to the best of our ability and we reap the benefits. I must also say that the majority of robot failures are preceded by, or caused by, human error.
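To make that concrete, here is a minimal, purely hypothetical sketch of how a controller might tell a fogged sensor from a genuinely engaged one: a real intrusion is usually corroborated by a second sensor, while fogging tends to affect a single sensor and persist. The sensor names, thresholds, and fail-safe bias below are my own illustrative assumptions, not any real controller's logic.

```python
def classify_blockage(readings, persist_seconds):
    """readings: dict of sensor name -> True if that sensor reports a blockage.

    persist_seconds: how long the current blockage pattern has lasted.
    """
    blocked = [name for name, is_blocked in readings.items() if is_blocked]
    if len(blocked) >= 2:
        return "ENGAGED"       # corroborated by a second sensor: stop the robot
    if len(blocked) == 1 and persist_seconds > 60:
        return "LIKELY_DIRTY"  # lone, persistent blockage: flag for cleaning
    if len(blocked) == 1:
        return "ENGAGED"       # lone but fresh: fail safe and stop anyway
    return "CLEAR"

# A lone sensor blocked for five minutes looks like fogging, not an intrusion.
print(classify_blockage({"lidar": True, "curtain_a": False, "curtain_b": False}, 300))
# -> LIKELY_DIRTY

# Two sensors agreeing for two seconds looks like a real engagement.
print(classify_blockage({"lidar": True, "curtain_a": True, "curtain_b": False}, 2))
# -> ENGAGED
```

Note the bias: when in doubt, the sketch stops the robot. A missed cleaning costs downtime; a missed engagement costs an injury.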
Another area of application is raw computing power and data analysis. In this case the AI is neatly packaged in a computer that does not move and cannot physically harm anyone. The only check required might be some built-in redundancy to cross-check its performance and results.
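As a minimal sketch of the kind of built-in redundancy I mean, assume we compute the same quantity along two independent paths and refuse to trust a result the paths disagree on. The function names and tolerance here are illustrative assumptions, not a prescription.

```python
import statistics

def cross_check(primary, secondary, data, tolerance=1e-9):
    """Run two independent implementations and flag any disagreement."""
    a = primary(data)
    b = secondary(data)
    if abs(a - b) > tolerance:
        # Disagreement means at least one path is wrong: fail loudly
        # rather than silently returning a possibly bad result.
        raise RuntimeError(f"redundancy check failed: {a} vs {b}")
    return a

# Example: the mean of a dataset computed two different ways.
values = [1.0, 2.0, 3.0, 4.0]
mean = cross_check(lambda xs: sum(xs) / len(xs), statistics.mean, values)
print(mean)  # both paths agree on 2.5, so the result is trusted
```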
I can imagine the usefulness of AI in space exploration and perhaps in some surgical instruments, where AI is likely to perform better than humans.
There are numerous other potential applications, too many to mention. My concern with any AI packaged in a physical form is the risk of failure and the consequences of that failure. I have the same concern about humans (including myself).
The question is whether we can build machines that outperform humans. The answer is glaring: we see overwhelming evidence of it all around us every day. Any computer outperforms humans in raw computational speed and data processing. Machines outperform humans on the factory floor by orders of magnitude. I can see great benefits to all of us from further development of AI on all fronts, but with careful consideration for safety in case of failure.
On one side of the argument, we have a proven track record of benefits to humanity, on many levels, delivered by progress in intelligent machines. The other side of the argument is fear of the unknown. Fear is a very valid feeling; it aids our self-preservation, and it needs to be addressed. Once it is adequately addressed, it would be a shame to stop pursuing further development because of uncertainty about the outcome. Why stop now? Perhaps the fearful among us can step back and let the less fearful take the lead. My salute goes to courage.
In some new world, in which machines can do most of the things we do now, I will not be an engineer any more; I will be a poet.