Sparse recovery optimization algorithms are used in machine learning, imaging, and parameter-fitting problems, among many other fields. Compressive sensing, a prominent area of mathematics over the past decade, has motivated a revival of sparse recovery algorithms based on ℓ1-norm minimization. Although small underdetermined problems are well understood, large, inconsistent, nearly sparse systems have received far less attention. In this dynamical study, two commonly used sparse recovery optimization algorithms, Linearized Bregman and the Iterative Shrinkage-Thresholding Algorithm (ISTA), are compared. The dependence of their dynamical behavior on the threshold hyper-parameter and on the sizes of entries in the solution reveals complementary advantages and disadvantages. These results motivated a hybrid method that inherits favorable characteristics of both algorithms, such as reduced chatter and rapid convergence. The hybrid method is proposed, analyzed, and shown to outperform both Linearized Bregman and ISTA, principally because of its robustness when processing diverse entry sizes.
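To make the setting concrete, the following is a minimal sketch of one of the two algorithms discussed, ISTA, applied to the standard ℓ1-regularized least-squares problem min_x ½‖Ax − b‖² + λ‖x‖₁. The problem sizes, step size, and threshold value λ are illustrative assumptions, not the parameters studied in the paper.

```python
import numpy as np

def soft_threshold(z, tau):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(A, b, lam, step, n_iter=500):
    # Minimal ISTA sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    # Each iteration takes a gradient step on the smooth term, then
    # soft-thresholds; lam plays the role of the threshold hyper-parameter.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                  # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Illustrative run: recover a 2-sparse vector from an underdetermined system.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50)) / np.sqrt(20)
x_true = np.zeros(50)
x_true[[3, 17]] = [2.0, -1.5]
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L, L = Lipschitz constant
x_hat = ista(A, b, lam=0.01, step=step, n_iter=2000)
```

The soft-thresholding step is what creates the dependence on entry size noted in the abstract: entries of the iterate smaller than the threshold are set exactly to zero, so small solution entries behave very differently from large ones.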