We consider the problem of unsupervised feature learning via generative models. This topic is of interest in the fields of sparse coding, autoencoders, and recent generative networks such as GANs and GLOs. We develop efficient algorithms for learning such models and characterize their learning behavior (e.g., the roles of initialization, overparameterization, and generalization) from optimization, algorithmic, and statistical perspectives.
In [1], we propose a neurally plausible algorithm for learning dictionaries with sparse structure. The algorithm is fast, sample-efficient, and provably guaranteed. In [2], we provably solve sparse coding when the given samples are only partially observed.
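To make the dictionary-learning setting concrete, here is a minimal sketch of an alternating decode-then-update scheme on synthetic data, in the spirit of (but not identical to) the provable algorithms in [1] and [2]. The problem dimensions, the hard-thresholding decoder, the step size `eta`, the threshold `tau`, and the delta-close initialization are all illustrative assumptions, not the papers' exact conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse coding model: y = A* x with k-sparse codes x.
n, m, k = 64, 128, 4                       # signal dim, dictionary size, sparsity
A_star = rng.standard_normal((n, m))
A_star /= np.linalg.norm(A_star, axis=0)   # unit-norm dictionary columns

def sample_batch(b):
    """Draw b samples y = A* x with random k-sparse, +/-1-valued codes."""
    X = np.zeros((m, b))
    for j in range(b):
        supp = rng.choice(m, size=k, replace=False)
        X[supp, j] = rng.choice([-1.0, 1.0], size=k)
    return A_star @ X

# Delta-close initialization (an assumption typical of the provable regime).
A = A_star + 0.02 * rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)
eta, tau = 0.2, 0.5                        # illustrative step size and threshold

for t in range(300):
    Y = sample_batch(64)
    Z = A.T @ Y                            # correlate with current dictionary
    X_hat = Z * (np.abs(Z) > tau)          # hard-thresholding "decode"
    R = A @ X_hat - Y                      # reconstruction residual
    A -= eta * (R @ X_hat.T) / Y.shape[1]  # approximate gradient step
    A /= np.linalg.norm(A, axis=0)         # keep columns on the unit sphere

# Column-wise recovery error up to sign; it should shrink over iterations.
err = np.mean(np.minimum(np.linalg.norm(A - A_star, axis=0),
                         np.linalg.norm(A + A_star, axis=0)))
print(f"mean column error: {err:.4f}")
```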
In [3], we analyze the gradient dynamics of weight-tied autoencoders and theoretically show that two-layer autoencoders with shared weights approximately recover the dictionary of a sparse coding model via gradient descent. More recently, we have been exploring the gradient dynamics of overparameterized autoencoders from random initialization.
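As a companion illustration, the sketch below trains a two-layer weight-tied ReLU autoencoder with plain gradient descent on synthetic sparse-coding data, qualitatively mirroring the setting analyzed in [3]. The nonnegative code distribution, the fixed encoder bias `b_enc = -0.3`, the step size, and the initialization scale are toy-example assumptions rather than the theorem's precise conditions; note the tied weight `W` contributes a decoder term and an encoder term to the gradient.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 32, 64, 3                        # signal dim, hidden width, sparsity
A_star = rng.standard_normal((n, m))
A_star /= np.linalg.norm(A_star, axis=0)   # ground-truth dictionary

def sample_batch(b):
    """Nonnegative k-sparse codes, so a ReLU encoder can represent them."""
    X = np.zeros((m, b))
    for j in range(b):
        supp = rng.choice(m, size=k, replace=False)
        X[supp, j] = rng.uniform(0.5, 1.0, size=k)
    return A_star @ X

W = A_star + 0.02 * rng.standard_normal((n, m))   # delta-close initialization
b_enc = -0.3 * np.ones((m, 1))                    # fixed threshold bias (assumption)
eta = 0.1

def col_err(W):
    """Directional error of W's columns against A* (scale-normalized)."""
    Wn = W / np.linalg.norm(W, axis=0)
    return np.mean(np.linalg.norm(Wn - A_star, axis=0))

err0 = col_err(W)
for t in range(1000):
    Y = sample_batch(128)
    Pre = W.T @ Y + b_enc                  # encoder pre-activation
    H = np.maximum(Pre, 0.0)               # tied-weight ReLU encoding
    R = W @ H - Y                          # reconstruction residual
    # Gradient of 0.5*||W h - y||^2 w.r.t. the tied weight W has two terms:
    # the decoder term R H^T and the encoder term Y G^T,
    # where G = relu'(Pre) * (W^T R).
    G = (Pre > 0) * (W.T @ R)
    W -= eta * (R @ H.T + Y @ G.T) / Y.shape[1]

print(f"column error: {err0:.4f} -> {col_err(W):.4f}")
```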
We are currently developing theoretical characterizations of overparameterized unsupervised networks and generative models in terms of optimization and generalization.
Publications

T. V. Nguyen, R. Wong, and C. Hegde. "A Provable Approach for Double-Sparse Coding." AAAI Conference on Artificial Intelligence. 2018. [Paper]
T. V. Nguyen, A. Soni, and C. Hegde. "On Learning Sparsely Used Dictionaries from Incomplete Samples." International Conference on Machine Learning. 2018. [Paper]
T. V. Nguyen, R. Wong, and C. Hegde. "On the Dynamics of Gradient Descent for Autoencoders." International Conference on Artificial Intelligence and Statistics (AISTATS). 2019. [Paper]