Sparse recovery optimization algorithms are used in machine learning, imaging, and parameter-fitting problems, among many other fields. Compressive sensing, a prominent area of mathematics over the past decade, has motivated a revival of sparse recovery algorithms based on ℓ1-norm minimization. Although small underdetermined problems are well understood, large, inconsistent, nearly sparse systems have not been investigated in as much detail. In this dynamical study, two commonly used sparse recovery optimization algorithms, Linearized Bregman and the Iterative Shrinkage Thresholding Algorithm (ISTA), are compared. The dependence of their dynamical behavior on the threshold hyper-parameter and on the sizes of the entries in the solution reveals complementary advantages and disadvantages. These results prompted the creation of a hybrid method that combines favorable characteristics of both algorithms, such as reduced chatter and fast convergence. The hybrid method is proposed, analyzed, and shown to outperform both Linearized Bregman and ISTA, principally because of its versatility when processing entries of diverse sizes.
“Dynamical Characterization & Analysis of the Optimization Algorithms: Linearized Bregman and Iterative Shrinkage Thresholding Algorithm” by Yihua Xu (Georgia Tech)
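For readers who want to see what is being compared, below is a minimal sketch of the textbook forms of the two iterations, not the author's exact implementation. The parameter names (lam, step, mu, delta) and the fixed step sizes are assumptions for illustration; the threshold argument passed to shrink in each method is the hyper-parameter the abstract refers to. The MLB and hybrid variants discussed in the comments are not shown.

```python
import numpy as np

def shrink(v, t):
    # Elementwise soft-thresholding (shrinkage) operator.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, step, iters=500):
    # Textbook ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    # with an assumed fixed gradient step size `step`.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = shrink(x - step * A.T @ (A @ x - b), step * lam)
    return x

def linearized_bregman(A, b, mu, delta, iters=500):
    # Textbook linearized Bregman iteration: accumulate residual
    # gradients in v, then shrink with threshold mu and scale by delta.
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = v + A.T @ (b - A @ x)   # gradient accumulation step
        x = delta * shrink(v, mu)   # thresholding step
    return x
```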
Hi Yihua, do you have any plans to continue exploring this hybrid model?
Yes! I did compare the hybrid model with ISTA, LB, and MLB (Modified Linearized Bregman). The preliminary result is that the hybrid model is superior to all of these methods.
Hi Yihua, I was wondering how exactly you pick the random initial value, and whether this choice has any effect?
Since xTrue is generated around 0, we set the random initial value to 0. However, we also tried initial values of 10 and -10, which are far from the 0 entries, and that did not seem to have a strong influence. Thanks!