He's just plain wrong here, if he was quoted correctly. We don't even have an agreed-upon definition of general AI.
He's correct about the extreme danger of "AI" projects. The threat, however, is idiot savant AIs, which are *not* general AI. We are well into the development of many idiot savant AIs, and very far from general AIs. If idiot humans start treating the idiot savant AIs as if they are general AIs, and giving them work outside their capabilities...
...well, let's just say that the idiot savant physicists who developed nuclear bombs never bothered to talk to biologists about the long-term effects of radioactive fallout. We don't need automated versions of those idiots.
Idiots, indeed.
P.S. I can give a much more specific example. I am not aware of any current successful effort to mate up logic-and-reasoning-based "AIs" with pattern-matching "AIs" in more than a very primitive fashion. Human brains have a lot of specialized subunits which cooperate and argue with each other: some are logic-and-reasoning, some are pattern-matching, and others are... different things which we haven't replicated yet. The current AI efforts are essentially about replicating the subunits, not about developing the general, integrative, or "oversight" intelligence faculty. That missing layer is the actual problem, as the sketch below tries to illustrate.

A true "general AI" should be able to say "Well, I *could* give you a self-driving car, but it'll just be stuck in traffic -- wouldn't you rather I moved your home closer to your work?" Or "Well, I could invade Iraq, but based on my study of history, that would create a geopolitical backlash and hurt the US, plus, according to my study of law, it's illegal, so I'm not going to do that." We aren't anywhere close to that. Maybe it will happen eventually, but the actual serious danger comes *before* we get there.
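To make the "subunits plus oversight" point concrete, here's a toy Python sketch. Everything in it is made up for illustration -- the class names, the canned "knowledge", the interfaces -- it's not any real system's API. The shape it shows is the missing piece: a layer that lets a pattern-matching subunit and a logic-and-reasoning subunit argue, and that can veto either one.

```python
# Toy sketch of "specialized subunits plus an oversight faculty".
# All names, interfaces, and the canned knowledge are hypothetical
# illustrations, not any real library's API.

from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Proposal:
    action: str
    confidence: float  # how strongly the proposing subunit believes in it
    rationale: str


class PatternMatcher:
    """Stand-in for a pattern-matching subunit (e.g., a trained model).

    Here it just looks up canned answers; a real one would run inference.
    """

    CANNED = {
        "commute is slow": Proposal(
            "build self-driving car", 0.9,
            "matches past 'transport problem' patterns"),
    }

    def propose(self, goal: str) -> Proposal | None:
        return self.CANNED.get(goal)


class Reasoner:
    """Stand-in for a logic-and-reasoning subunit: applies explicit rules."""

    RULES = [
        # (predicate on a proposal, objection raised if it fires)
        (lambda p: "car" in p.action,
         "a car is still stuck in the same traffic; attack the commute itself"),
    ]

    def critique(self, proposal: Proposal) -> list[str]:
        return [objection for rule, objection in self.RULES if rule(proposal)]


class Oversight:
    """The integrative faculty: lets the subunits argue, then decides or vetoes."""

    def __init__(self) -> None:
        self.matcher = PatternMatcher()
        self.reasoner = Reasoner()

    def decide(self, goal: str) -> str:
        proposal = self.matcher.propose(goal)
        if proposal is None:
            return f"no subunit has a proposal for: {goal}"
        objections = self.reasoner.critique(proposal)
        if objections:
            # The oversight layer overrides the pattern-matcher's suggestion.
            return f"rejecting '{proposal.action}': " + "; ".join(objections)
        return f"doing '{proposal.action}' ({proposal.rationale})"


if __name__ == "__main__":
    print(Oversight().decide("commute is slow"))
    # -> rejecting 'build self-driving car': a car is still stuck in the
    #    same traffic; attack the commute itself
```

The point of the sketch is that the two subunits are the parts we already know how to build; all the hard, unsolved work lives in the Oversight class, and in real projects that layer mostly doesn't exist yet.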