There's some cool new work out from DeepMind on self-supervised (a.k.a. unsupervised) learning. The results are described in a blog post by Less Wright (who was not involved in the research and is not affiliated with DeepMind):
“Some comparisons drive home the significance: image classifiers trained with CPC v2 and only 1% of ImageNet data achieved 78% top-5 accuracy, outperforming supervised networks (regular labeled training) trained on 5x more data.
When trained on all the available images (100%), the CPC v2 ResNet outperformed fully supervised systems, also trained on the full dataset, by 3.2% (top-1 accuracy). Note that with only half the dataset (50%), the CPC v2 ResNet matched the accuracy of fully supervised NNs trained on 100% of the data.
Finally, to show the generality of CPC representations: transferring the CPC v2 ResNet to object detection (PASCAL VOC 2007 dataset) achieves new state-of-the-art performance with 76.6% mAP, surpassing the previous record by 2%.”
Reducing your labeled data requirements (2–5x) for Deep Learning: DeepMind’s new “Contrastive Predictive Coding 2.0”
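For readers new to contrastive predictive coding, the core idea is an InfoNCE-style contrastive objective: a context vector must pick out the matching ("positive") target embedding from a batch of negatives. Here is a minimal sketch of that loss; the function name, shapes, and synthetic data are illustrative assumptions, not DeepMind's implementation:

```python
import numpy as np

def info_nce_loss(context, targets, positive_index):
    """Illustrative InfoNCE-style contrastive loss (an assumption for
    exposition, not DeepMind's code).

    context: (d,) prediction vector; targets: (n, d) candidate embeddings;
    positive_index: the row of `targets` that is the true positive sample.
    """
    scores = targets @ context                     # similarity logits, shape (n,)
    scores = scores - scores.max()                 # shift for numerical stability
    log_probs = scores - np.log(np.exp(scores).sum())
    return -log_probs[positive_index]              # cross-entropy on the positive

# Synthetic demo: make row 3 the embedding aligned with the context vector.
rng = np.random.default_rng(0)
d, n = 8, 16
context = rng.normal(size=d)
targets = rng.normal(size=(n, d))
targets[3] = context                               # row 3 is the true positive
loss = info_nce_loss(context, targets, positive_index=3)
print(f"loss on the true positive: {float(loss):.4f}")
```

Minimizing this loss pushes the context vector toward its positive target and away from the negatives; CPC learns image representations this way from unlabeled data, which is what makes the label-efficiency results above possible.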