Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/95364
DC Field | Value | Language
dc.contributor | Department of Applied Mathematics | en_US
dc.creator | Jiang, B | en_US
dc.creator | Chen, Z | en_US
dc.creator | Leng, C | en_US
dc.date.accessioned | 2022-09-19T01:59:55Z | -
dc.date.available | 2022-09-19T01:59:55Z | -
dc.identifier.issn | 1350-7265 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/95364 | -
dc.language.iso | en | en_US
dc.publisher | International Statistical Institute | en_US
dc.rights | © 2020 ISI/BS | en_US
dc.rights | The following publication Binyan Jiang, Ziqi Chen, Chenlei Leng, "Dynamic linear discriminant analysis in high dimensional space," Bernoulli 26(2), 1234-1268, May 2020, is available at https://doi.org/10.3150/19-BEJ1154 | en_US
dc.subject | Bayes rule | en_US
dc.subject | Discriminant analysis | en_US
dc.subject | Dynamic linear programming | en_US
dc.subject | High-dimensional data | en_US
dc.subject | Kernel estimation | en_US
dc.subject | Sparsity | en_US
dc.title | Dynamic linear discriminant analysis in high dimensional space | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1234 | en_US
dc.identifier.epage | 1268 | en_US
dc.identifier.volume | 26 | en_US
dc.identifier.issue | 2 | en_US
dc.identifier.doi | 10.3150/19-BEJ1154 | en_US
dcterms.abstract | High-dimensional data that evolve dynamically are a prominent feature of the modern data era. Recent years have seen increasing emphasis on addressing the dimensionality challenge, but the non-static nature of these datasets is largely ignored. This paper addresses both challenges by proposing a novel yet simple dynamic linear programming discriminant (DLPD) rule for binary classification. Unlike the usual static linear discriminant analysis, the new method captures the changing distributions of the underlying populations by modeling their means and covariances as smooth functions of covariates of interest. Under an approximate sparsity condition, we show that the conditional misclassification rate of the DLPD rule converges in probability to the Bayes risk, uniformly over the range of the variables used for modeling the dynamics, when the dimensionality is allowed to grow exponentially with the sample size. A minimax lower bound for estimation of the Bayes risk is also established, implying that the misclassification rate of the proposed rule is minimax-rate optimal. The promising performance of the DLPD rule is illustrated via extensive simulation studies and the analysis of a breast cancer dataset. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Bernoulli, May 2020, v. 26, no. 2, p. 1234-1268 | en_US
dcterms.isPartOf | Bernoulli | en_US
dcterms.issued | 2020-05 | -
dc.identifier.scopus | 2-s2.0-85082678927 | -
dc.identifier.eissn | 1573-9759 | en_US
dc.description.validate | 202209 bckw | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | RGC-B2-1312, RGC-B2-1304, AMA-0210 | -
dc.description.fundingSource | RGC | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 23633067 | -
dc.description.oaCategory | VoR allowed | en_US
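
The core idea summarized in the abstract, letting the class means vary smoothly with a covariate before applying a linear discriminant rule, can be sketched in a few lines. The sketch below is only an illustrative approximation and not the paper's DLPD estimator: it uses Nadaraya-Watson kernel smoothing of the two class means, a static pooled covariance (the paper also lets the covariance vary with the covariate), and a plain plug-in Fisher rule rather than the sparse linear-programming direction the paper solves for. All function names and the bandwidth value are hypothetical.

```python
import numpy as np

def gaussian_kernel(t, h):
    # Standard Gaussian kernel with bandwidth h.
    return np.exp(-0.5 * (t / h) ** 2)

def kernel_mean(X, u, u0, h):
    # Nadaraya-Watson estimate of E[X | U = u0]:
    # a kernel-weighted average of the rows of X.
    w = gaussian_kernel(u - u0, h)
    w = w / w.sum()
    return w @ X

def dynamic_lda_classify(x0, u0, X1, u1, X2, u2, h=0.3):
    """Classify x0 observed at covariate value u0 into class 1 or 2.

    X1, X2 : (n, p) samples from the two classes.
    u1, u2 : (n,) covariate values at which each sample was observed.
    """
    # Class means smoothed over the covariate, evaluated at u0.
    mu1 = kernel_mean(X1, u1, u0, h)
    mu2 = kernel_mean(X2, u2, u0, h)
    # Static pooled covariance as a simplification; a small ridge
    # term keeps it invertible.
    Xc = np.vstack([X1 - X1.mean(0), X2 - X2.mean(0)])
    Sigma = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(Xc.shape[1])
    # Plug-in Fisher discriminant direction.
    beta = np.linalg.solve(Sigma, mu1 - mu2)
    # Assign class 1 when the score is positive.
    score = (x0 - 0.5 * (mu1 + mu2)) @ beta
    return 1 if score > 0 else 2
```

With two classes whose means drift with the covariate (say, class 1 centered near (2u, 0) and class 2 near (2u + 3, 0)), the rule evaluated at u0 = 0.5 separates points near (1, 0) from points near (4, 0), which a static LDA fit over all u would blur.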
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Size | Format
Jiang-Dynamic_Linear_Discriminant.pdf | 1.38 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 131 (as of Oct 6, 2025)
Downloads: 68 (as of Oct 6, 2025)
Scopus citations: 15 (as of Dec 19, 2025)
Web of Science citations: 12 (as of Oct 10, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.