Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/85370
dc.contributor: Department of Applied Mathematics
dc.creator: Shi, Yue
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/9671
dc.language.iso: English
dc.title: On the lasso regression and asymmetric Laplace distribution with applications
dc.type: Thesis
dcterms.abstract: In this thesis, we consider four classes of optimization models. The first class is LAD Generalized Lasso models: we develop a descent algorithm for LAD-Lasso and a new active zero set descent algorithm for LAD Generalized Lasso under nonsmooth optimality conditions. The second class is constrained LAD-Lasso models: we extend the descent algorithm to handle the constraints as well, with an application to Mean Absolute Deviation (MAD) Lasso portfolio selection. The third class is selection of the penalty parameter for compressive sensing: we carry out tests using several criteria for selecting the penalty parameter. The fourth class is optimization under Asymmetric Laplace Distributions, namely a robust mixture linear regression model and portfolio selection.

We first consider LAD Generalized Lasso models. Under dynamic nonsmooth optimality conditions, we develop a descent algorithm that selects the fastest descent directions for LAD-Lasso regression. We then derive a new active zero set descent algorithm for LAD Generalized Lasso regression; the algorithm updates the zero set and basis search directions recursively until the optimality conditions are satisfied, and it is shown to converge in finitely many steps.

We then consider constrained LAD-Lasso models. We develop a descent algorithm that updates descent directions selected from a basis directional set for the nonsmooth optimization problems arising in the MAD-Lasso portfolio selection strategy; extensive real-data analyses are provided to evaluate out-of-sample performance.

We next consider selection of the penalty parameter. For the compressive-sensing-based signal recovery model, we apply regularized least squares for sparse reconstruction, since it can reconstruct a speech signal from a noisy observation, and propose a two-level optimization strategy that incorporates the quality design attributes of the sparse solution in compressive speech enhancement by hyper-parameterizing the tuning parameter. The first level compresses the big data; the second level optimizes the tuning parameter using different criteria, such as the Gini index, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). The set of solutions can then be measured against the desired design attributes to achieve the best trade-off between suppression and distortion.

Finally, we study two models under Asymmetric Laplace Distributions. We first present an efficient two-level latent EM algorithm for parameter estimation of mixture linear regression models, with the group label as the first-level latent variable and the Laplace intermediate variable as the second-level latent variable. Explicit updating formulas are derived for each iteration, so the computational complexity can be reduced significantly. We then consider a robust portfolio selection model and derive the Expectation-Maximization (EM) algorithm for parameter estimation of the Asymmetric Laplace distribution; an efficient frontier analysis is provided to evaluate performance.
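The first model class in the abstract centres on the LAD-Lasso objective, min over beta of ||y - X beta||_1 + lambda ||beta||_1. As a minimal illustrative sketch only (it is not the thesis's descent or active zero set algorithm, whose details the abstract does not give), this objective can be recast as a linear program and handed to scipy.optimize.linprog; the function name lad_lasso and the parameter lam below are hypothetical names introduced here.

import numpy as np
from scipy.optimize import linprog

def lad_lasso(X, y, lam):
    """Solve min_b ||y - X b||_1 + lam * ||b||_1 via an LP reformulation.

    Illustrative sketch, not the thesis's algorithm: auxiliary variables
    u and v bound the absolute residuals and |beta| from above.
    """
    n, p = X.shape
    # Decision vector z = [beta (p), u (n), v (p)],
    # with u_i >= |y_i - x_i' beta| and v_j >= |beta_j|.
    c = np.concatenate([np.zeros(p), np.ones(n), lam * np.ones(p)])
    I_n, I_p = np.eye(n), np.eye(p)
    Z_np, Z_pn = np.zeros((n, p)), np.zeros((p, n))
    A_ub = np.vstack([
        np.hstack([ X, -I_n, Z_np]),    #  X b - u <=  y
        np.hstack([-X, -I_n, Z_np]),    # -X b - u <= -y
        np.hstack([ I_p, Z_pn, -I_p]),  #  beta - v <= 0
        np.hstack([-I_p, Z_pn, -I_p]),  # -beta - v <= 0
    ])
    b_ub = np.concatenate([y, -y, np.zeros(2 * p)])
    # beta is free; u and v are nonnegative.
    bounds = [(None, None)] * p + [(0, None)] * (n + p)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

For example, lad_lasso(np.random.randn(50, 5), np.random.randn(50), lam=1.0) returns a 5-vector; larger lam drives more coefficients toward zero. A generic LP solve like this treats every subproblem from scratch, which is exactly the inefficiency the thesis's active zero set descent algorithm avoids by updating the zero set and basis search directions recursively.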
dcterms.accessRights: open access
dcterms.educationLevel: Ph.D.
dcterms.extent: xxii, 175 pages : color illustrations
dcterms.issued: 2018
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
dcterms.LCSH: Mathematical optimization
Appears in Collections: Thesis
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.