We consider the problem of recovering a signal $\mathbf{x}^* \in \mathbb{R}^n$ from magnitude-only measurements $y_i = | \left\langle {\mathbf{a}_i},{\mathbf{x}^*} \right\rangle |$ for $i \in \{1,2,\ldots,m\}$. This is a stylized version of the classical phase retrieval problem, and is a fundamental challenge in nano- and bio-imaging systems, astronomical imaging, and speech processing. It is well known that the above problem is ill-posed, and therefore some additional assumptions on the signal and/or the measurements are necessary.
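For concreteness, here is a minimal Python sketch of this measurement model with a Gaussian design; the dimensions, the random seed, and the variable names are illustrative choices, not taken from the paper.

```python
import numpy as np

# Simulate y_i = |<a_i, x*>|, i = 1, ..., m, with Gaussian a_i
# (illustrative dimensions; not from the paper).
n, m, s = 256, 128, 10                      # ambient dim, measurements, sparsity
rng = np.random.default_rng(0)

x_star = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_star[support] = rng.standard_normal(s)    # an s-sparse ground-truth signal

A = rng.standard_normal((m, n))             # rows of A are the vectors a_i
y = np.abs(A @ x_star)                      # magnitude-only observations
```

Note that $y$ is unchanged under a global sign flip of $\mathbf{x}^*$, which is one source of the ill-posedness mentioned above; any recovery guarantee is therefore up to global sign.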
In this paper, we first study the case where the underlying signal $\mathbf{x}^*$ is $s$-sparse. For this case, we develop a novel recovery algorithm that we call Compressive Phase Retrieval with Alternating Minimization, or CoPRAM. Our algorithm is simple and can be obtained via a natural combination of the classical alternating minimization approach for phase retrieval with the CoSaMP algorithm for sparse recovery. Despite its simplicity, we prove that our algorithm achieves a sample complexity of $O(s^2 \log n)$ with Gaussian measurements $\mathbf{a}_i$, which matches the best known existing results; moreover, it demonstrates linear convergence in both theory and practice. An appealing feature of our algorithm is that it requires no tuning parameters other than the signal sparsity level $s$.
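To make the alternation concrete, here is a simplified sketch of this kind of scheme: it alternates a phase (sign) update with a sparse re-fit. The actual CoPRAM algorithm uses a careful spectral initialization and a full CoSaMP step; the crude initialization, the least-squares-plus-hard-thresholding step, and all function names below are stand-ins for illustration only.

```python
def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v (a crude stand-in for CoSaMP)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def copram_sketch(A, y, s, iters=50):
    """Toy alternating minimization for sparse phase retrieval.

    Alternates (1) a phase/sign estimate given the current signal with
    (2) a sparse least-squares re-fit to the sign-corrected measurements.
    Illustrative only; not the paper's exact initialization or sparse solver.
    """
    x = hard_threshold(A.T @ y / A.shape[0], s)  # crude initial guess
    for _ in range(iters):
        p = np.sign(A @ x)                       # current phase (sign) estimate
        p[p == 0] = 1.0
        # sparse fit to the "linearized" measurements p * y
        x_ls, *_ = np.linalg.lstsq(A, p * y, rcond=None)
        x = hard_threshold(x_ls, s)
    return x
```

With the simulated data above, `x_hat = copram_sketch(A, y, s)` would be compared against `x_star` up to global sign, e.g. via `min(norm(x_hat - x_star), norm(x_hat + x_star))`.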
We then consider the case where the underlying signal $\mathbf{x}^*$ arises from structured sparsity models. We specifically examine block-sparse signals with uniform block size $b$ and block sparsity $k=s/b$. For this problem, we design a recovery algorithm that we call Block CoPRAM, which further reduces the sample complexity to $O(ks \log n)$. For sufficiently large block lengths $b=\Theta(s)$, this bound equates to $O(s \log n)$. We also consider power-law decaying sparse signals, which likewise lowers the sample complexity to $O(s\log n)$. We further present an extension to rooted tree-sparse signals and analyze the descent criterion of the alternating minimization scheme. We find that, as long as a good initialization is provided, the sample complexity of this problem can be brought down to $O(s)$, which is information-theoretically optimal. To our knowledge, this constitutes the first family of end-to-end linearly convergent algorithms for phase retrieval in which the Gaussian sample complexity has a sub-quadratic dependence on the signal sparsity level. You can read the full paper on arXiv.
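The key structural ingredient in the block-sparse case is replacing coordinate-wise thresholding with a block-level selection rule. A minimal sketch of block hard thresholding is below, assuming a uniform block length $b$ that divides $n$; the function name is hypothetical, and the paper's Block CoPRAM uses a block-sparse CoSaMP variant rather than this bare projection.

```python
def block_hard_threshold(v, b, k):
    """Keep the k blocks (uniform length b) with the largest l2 energy.

    Assumes b divides len(v). This is only the block-selection rule; the
    paper's algorithm embeds it inside a block-sparse CoSaMP-style solver.
    """
    blocks = v.reshape(-1, b)                  # one row per block
    energy = np.linalg.norm(blocks, axis=1)    # l2 energy of each block
    keep = np.argsort(energy)[-k:]             # indices of top-k blocks
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.reshape(-1)
```

Swapping `hard_threshold` for `block_hard_threshold(v, b, k)` in the toy sketch above yields a correspondingly simplified block variant.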
Publications
G. Jagatap and C. Hegde, "Fast, Sample-Efficient Algorithms for Structured Phase Retrieval", Neural Information Processing Systems (NIPS), December 2017. [Paper / Poster / Code]
(Extended version with additional results) G. Jagatap and C. Hegde, "Sample Efficient Algorithms for Recovering Structured Signals from Magnitude-Only Measurements", under review, IEEE Transactions on Information Theory, 2017. [Preprint]
G. Jagatap and C. Hegde, "Towards Sample-Optimal Methods for Solving Random Quadratic Equations with Structure", to appear, IEEE International Symposium on Information Theory, 2018. [Preprint]