Enhancing Generalisation of First-Order Meta-Learning

Published in The 2nd Learning from Limited Labeled Data (LLD) Workshop at The International Conference on Learning Representations (ICLR), 2019

In this study we focus on first-order meta-learning algorithms that aim to learn a parameter initialisation from which a network can quickly adapt to new concepts given only a few examples. We investigate two approaches to improve the generalisation and learning speed of such algorithms, building on the Reptile algorithm (Nichol et al., 2018). First, we introduce a novel regularisation technique called meta-step gradient pruning; second, we study the effect of deeper network architectures in first-order meta-learning. In an empirical evaluation of both approaches, we match benchmark few-shot image classification results with ten times fewer iterations on the Mini-ImageNet dataset, and with deeper networks we attain accuracies that surpass the current few-shot classification benchmarks on the Omniglot dataset.
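For readers unfamiliar with the first-order update at the heart of this work, the sketch below shows one Reptile outer iteration in PyTorch: the model is adapted to a sampled task with a few steps of plain SGD, and the initialisation is then moved toward the adapted weights. The pruning rule shown (zeroing small-magnitude components of the meta-step before applying it) is only an illustrative assumption of what meta-step gradient pruning might look like; the paper defines the exact criterion. The function name, `prune_threshold`, and hyperparameter values are all placeholders.

```python
import torch
import torch.nn.functional as F

def reptile_meta_step(model, task_batches, inner_lr=0.01, meta_lr=0.1,
                      prune_threshold=1e-3):
    """One Reptile outer update with an illustrative meta-step pruning rule."""
    # Snapshot the current initialisation (Reptile's "phi").
    init = {n: p.detach().clone() for n, p in model.named_parameters()}

    # Inner loop: adapt to one sampled task with a few steps of plain SGD.
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    for x, y in task_batches:
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

    # Outer (meta) step: move phi toward the task-adapted weights.
    with torch.no_grad():
        for n, p in model.named_parameters():
            delta = p - init[n]  # Reptile's first-order meta-step direction
            # Hypothetical pruning rule: drop small meta-step components.
            # (The paper's actual criterion may differ.)
            delta[delta.abs() < prune_threshold] = 0.0
            p.copy_(init[n] + meta_lr * delta)
```

In full training this step would be repeated over many sampled tasks; averaging the meta-step over a batch of tasks, as in batched Reptile, is a straightforward extension.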

Download paper here