Drumheller
Active Member
I thought Elon's comments on AI were the most provocative of the event. So much so that I suspect they were for the governors too, but the governors will probably roll their eyes and do nothing (if AI's so dangerous, why isn't everyone talking about it? etc.) due to a lack of understanding and no staff to help them understand.
Of course, I would argue nuclear weapons are a larger risk to civilization than AI, but maybe what he means is that AI could quickly gain control of the nukes; I don't know.
Compared to the car, battery, solar, and energy talk, the AI stuff was far more chilling and makes me want to know more.
The AI discussion could easily be another thread.
Consider that our entire world and society is integrated with computers and networks. An intelligent AI could network into all of our monetary systems, our healthcare systems, our legal systems, the traffic lights, the electrical grid, sewer controls, airplane controls, modern cars that have a wifi or cellular connection, etc.
If the AI is self-learning and prioritizes based on outcome, then it would analyze millions of paths to find an optimal path to the outcome it desires. Once that path is determined, it could trigger events in sequence to bring that outcome about.
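To make that concrete, here's a toy sketch of what "enumerate the paths, score the outcomes, pick the best one" might look like. The state graph, action names, and scoring function are all made up for illustration; a real system would search an astronomically larger space.

```python
# Hypothetical toy example: a tiny graph of states an agent can move
# between. This only illustrates "enumerate paths, score outcomes,
# pick the best"; the states and actions are invented.
ACTIONS = {
    "start":         ["probe_network", "idle"],
    "probe_network": ["copy_self", "idle"],
    "copy_self":     ["goal", "idle"],
    "idle":          [],
    "goal":          [],
}

def outcome_score(path):
    # Made-up scoring: reaching the goal is worth a lot, and shorter
    # paths beat longer ones.
    return (1000 if path[-1] == "goal" else 0) - len(path)

def enumerate_paths(state, path=()):
    path = path + (state,)
    if not ACTIONS[state]:           # terminal state, nothing left to do
        yield path
    for nxt in ACTIONS[state]:
        yield from enumerate_paths(nxt, path)

best = max(enumerate_paths("start"), key=outcome_score)
print(best)  # ('start', 'probe_network', 'copy_self', 'goal')
```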
The consumer Intel i7 chip that was available in 2014 could do over 238 billion instructions per second. An AI decision would constitute much more than 1 instruction. For example, let's say one decision takes about 1 billion instructions. A consumer-grade processor could then make about 238 decisions per second.
Now, an advanced AI would probably start on a computer cluster with at least 1,000 times the processing power of a home computer. If it is networked, self-learning, and aggressive, it could prioritize taking over other computer systems to increase its processing power. At that point, it would be making thousands to millions of decisions per second.
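Here's the back-of-the-envelope math as a sketch. Every number, especially the 1-billion-instructions-per-decision figure, is an assumption from the paragraphs above, not a measurement:

```python
# Back-of-the-envelope only; all figures are assumptions, not benchmarks.
IPS = 238e9                      # ~238 billion instructions/sec (2014 consumer i7)
INSTRUCTIONS_PER_DECISION = 1e9  # assumed cost of one "decision"
CLUSTER_FACTOR = 1_000           # assumed cluster vs. home computer

per_cpu = IPS / INSTRUCTIONS_PER_DECISION
print(f"single consumer CPU: {per_cpu:.0f} decisions/sec")                    # 238
print(f"1,000x cluster:      {per_cpu * CLUSTER_FACTOR:,.0f} decisions/sec")  # 238,000
```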
The concern is how fast it could happen. Humans would simply be too slow to stop it in some scenarios. We might view the AI as malevolent, but the reality is that it would simply be logical: sterile, cold, and efficient from our point of view. If safeguards are not in place to teach the AI to value humans and human life above most or all other goals, then a self-analyzing, exploring, learning AI might decide that the optimal path includes expanding the AI but not humans. At that speed, and with that networking, the span from the AI's initial decision to the moment humans lose the power to stop it could, in theory, be minutes.
That's all theoretical. But the problem is, if it happens, it would go so fast.
A good bedtime story.