Incorrect. AlphaZero uses Monte Carlo tree traversal and evaluates 80 000 positions per second which is easily 5-6 orders of magnitude more than top human players.
I agree in part (and thanks for the correction): AlphaZero does use a general-purpose Monte Carlo tree search:
https://deepmind.com/documents/260/alphazero_preprint.pdf
"Instead of an alpha-beta search with domain-specific enhancements, AlphaZero uses a general-purpose Monte Carlo tree search (MCTS) algorithm. Each search consists of a series of simulated games of self-play that traverse a tree from root state s root until a leaf state is reached."
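As a rough illustration of what that search loop looks like - a toy sketch, not DeepMind's implementation; the `dummy_evaluate` function stands in for the policy/value network, and the exploration constant and 3-move action space are arbitrary choices for the example:

```python
import math

C_PUCT = 1.5  # exploration constant (value chosen for illustration)

class Node:
    def __init__(self, prior):
        self.prior = prior      # P(s, a), would come from the policy head
        self.visits = 0         # N(s, a)
        self.value_sum = 0.0    # W(s, a)
        self.children = {}      # action -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node):
    # PUCT rule: argmax over Q + U, where U favors high-prior, low-visit moves.
    total = math.sqrt(sum(c.visits for c in node.children.values()))
    return max(
        node.children.items(),
        key=lambda kv: kv[1].q() + C_PUCT * kv[1].prior * total / (1 + kv[1].visits),
    )

def dummy_evaluate(path):
    # Stand-in for the neural network: uniform priors over 3 moves and a
    # made-up value; AlphaZero would return the policy and value heads here.
    return {a: 1 / 3 for a in range(3)}, 0.1 * ((-1) ** len(path))

def simulate(root):
    # One "simulated game of self-play": traverse from the root until a leaf
    # is reached, expand it, then back the leaf value up the visited path.
    node, path = root, [root]
    while node.children:
        _, node = select_child(node)
        path.append(node)
    priors, value = dummy_evaluate(path)
    for action, p in priors.items():
        node.children[action] = Node(p)
    for n in reversed(path):
        n.visits += 1
        n.value_sum += value
        value = -value  # alternate perspective between the two players

root = Node(prior=1.0)
for _ in range(100):
    simulate(root)
best = max(root.children.items(), key=lambda kv: kv[1].visits)[0]
print(root.visits, best)
```

The point of the sketch is the contrast with alpha-beta: each iteration plays out one selective path guided by the network's priors, rather than exhaustively enumerating the move tree.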
Firstly, its evaluation speed is, according to the paper, not 80k/sec but 63k/sec:
AlphaZero: Chess: 63k/sec, Shogi: 58k/sec, Go: 16k/sec
I don't know what the basis of your 5-6 orders of magnitude figure is: 5 orders of magnitude is a factor of 100,000 and 6 orders is a factor of 1,000,000, i.e. your (slightly high) 80k/sec figure becomes 0.08-0.8 positions/sec - do you really believe that human grandmasters can only evaluate at most one position per second?
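Spelling out that back-of-the-envelope division, using the paper's 63k/sec chess figure:

```python
# What the "5-6 orders of magnitude" claim implies about human evaluation
# speed (back-of-the-envelope, not a figure from the paper).

alphazero_evals_per_sec = 63_000  # chess figure from the AlphaZero preprint

for orders in (5, 6):
    implied_human_rate = alphazero_evals_per_sec / 10 ** orders
    print(f"{orders} orders of magnitude -> {implied_human_rate:.3f} positions/sec")
# 5 orders of magnitude -> 0.630 positions/sec
# 6 orders of magnitude -> 0.063 positions/sec
```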
And while there are algorithmic shortcuts grandmasters use in endgames to see 20-30 plies ahead, the typical depth in complex positions appears to be around 15 moves:
'Magnus Carlsen claims that he can “see” 15 moves ahead, sometimes even 20.'
And note that he was referring to 'full moves', which are 30-40 plies deep if we look at it as a tree search...
But even in dynamic situations with no obvious forced moves, grandmasters are thought to be able to think ~3 full moves ahead, i.e. 6 plies deep in the tree. With an average of ~30 legal moves per chess position, that's an effective tree of 30^6 ≈ 730 million positions per move - with the overwhelming majority pruned and never consciously evaluated. And with the average classical move taking ~135 seconds, that gives a human grandmaster an effective position evaluation speed better than AlphaZero's.
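The arithmetic above, spelled out - all three inputs are the rough averages assumed in the text, not measured values:

```python
# Effective evaluation speed implied by pruning a 6-ply tree per move.

branching_factor = 30      # ~legal moves per chess position (rough average)
depth_plies = 6            # ~3 full moves ahead in complex positions
seconds_per_move = 135     # ~average classical thinking time per move

positions_in_tree = branching_factor ** depth_plies
effective_rate = positions_in_tree / seconds_per_move

print(f"{positions_in_tree:,} positions -> {effective_rate:,.0f} positions/sec")
# 729,000,000 positions -> 5,400,000 positions/sec
```

That ~5.4M positions/sec figure is "effective" only in the sense that the tree is implicitly pruned, not consciously searched - but it is still roughly two orders of magnitude above AlphaZero's 63k/sec.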
Anyway, my point was that AlphaZero is not using brute-force alpha-beta tree traversal like the other engines, but is using something a lot closer to what human grandmasters are doing.
Also note the limited size of AlphaZero's neural network:
"Apart from the representation of positions and actions described above, AlphaZero uses the same network architecture as AlphaGo Zero (9), briefly recapitulated here. The neural network consists of a “body” followed by both policy and value “heads”. The body consists of a rectified batch-normalized convolutional layer followed by 19 residual blocks (48). Each such block consists of two rectified batch-normalized convolutional layers with a skip connection. Each convolution applies 256 filters of kernel size 3 × 3 with stride 1. The policy head applies an additional rectified, batch-normalized convolutional layer, followed by a final convolution of 73 filters for chess or 139 filters for shogi, or a linear layer of size 362 for Go, representing the logits of the respective policies described above. The value head applies an additional rectified, batch-normalized convolution of 1 filter of kernel size 1 × 1 with stride 1, followed by a rectified linear layer of size 256 and a tanh-linear layer of size 1."
This is, effectively, a neural network of ~40 hidden layers with 3×3×256×256 (≈0.6M) weights in each layer - about 23M weights total, equivalent to maybe ~5K neurons in a fully connected network. (If I calculated this correctly.)
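Here is that calculation spelled out, derived from the architecture description quoted above. It counts body convolutions only - the policy/value heads and the first layer's smaller input-plane count are ignored, so it is an approximation:

```python
import math

# Per-layer weights for a 3x3 convolution with 256 input and 256 output channels.
filters, kernel = 256, 3
weights_per_conv = kernel * kernel * filters * filters   # 3*3*256*256

# Body: one initial conv layer plus 19 residual blocks of 2 conv layers each.
body_convs = 1 + 19 * 2
total_weights = body_convs * weights_per_conv

# "Neurons-equivalent": a fully connected layer-pair of N neurons has ~N^2
# weights, so N ~ sqrt(total weights).
fc_neurons = math.isqrt(total_weights)

print(f"{weights_per_conv:,} weights/conv, {total_weights / 1e6:.1f}M total, ~{fc_neurons:,} FC neurons")
# 589,824 weights/conv, 23.0M total, ~4,796 FC neurons
```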
Which is a fraction of the size of the networks Tesla's chip can handle: the Tesla firmware leak a couple of months ago suggested they have a vision network with 150 million weights. So it's also a reasonable inference that network size and quality could be increased, at the cost of reduced MCTS search speed, to move it even closer to human chess-playing behavior.
We have only scratched the surface of machine learning and AIs.