Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/96940
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Suzuki, A | en_US
dc.creator | Nitanda, A | en_US
dc.creator | Wang, J | en_US
dc.creator | Xu, L | en_US
dc.creator | Yamanishi, K | en_US
dc.creator | Cavazza, M | en_US
dc.date.accessioned | 2023-01-04T01:56:15Z | -
dc.date.available | 2023-01-04T01:56:15Z | -
dc.identifier.issn | 2640-3498 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/96940 | -
dc.description | 38th International Conference on Machine Learning, 18-24 July 2021, Virtual | en_US
dc.language.iso | en | en_US
dc.publisher | ML Research Press | en_US
dc.rights | Copyright 2021 by the author(s). | en_US
dc.rights | Posted with permission of the author. | en_US
dc.title | Generalization error bound for hyperbolic ordinal embedding | en_US
dc.type | Conference Paper | en_US
dc.identifier.volume | 139 | en_US
dcterms.abstract | Hyperbolic ordinal embedding (HOE) represents entities as points in hyperbolic space so that they agree as well as possible with given constraints of the form "entity i is more similar to entity j than to entity k". It has been experimentally shown that HOE can effectively obtain representations of hierarchical data such as knowledge bases and citation networks, owing to hyperbolic space's exponential growth property. However, its theoretical analysis has been limited to ideal, noiseless settings, and the generalization error incurred in exchange for hyperbolic space's exponential representation ability has not been guaranteed. The difficulty is that existing derivations of generalization error bounds for ordinal embedding, which are based on the Gramian matrix, are not applicable to HOE, since hyperbolic space is not an inner product space. In this paper, through our novel characterization of HOE with decomposed Lorentz Gramian matrices, we provide the first generalization error bound for HOE, which is at most exponential with respect to the embedding space's radius. Our comparison between the bounds for HOE and Euclidean ordinal embedding shows that HOE's generalization error comes at a reasonable cost, considering its exponential representation ability. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Proceedings of the 38th International Conference on Machine Learning, PMLR 139:10011-10021, 2021, https://proceedings.mlr.press/v139/suzuki21a.html | en_US
dcterms.isPartOf | Proceedings of Machine Learning Research | en_US
dcterms.issued | 2021 | -
dc.relation.conference | International Conference on Machine Learning [ICML] | en_US
dc.description.validate | 202210 bcch | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a1532 | -
dc.identifier.SubFormID | 45354 | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Copyright retained by author | en_US
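
The abstract above concerns embedding entities in hyperbolic space so as to satisfy triplet comparisons, with the analysis carried out through Lorentz Gramian matrices. As a minimal illustrative sketch (not the authors' implementation, which is not part of this record), the following Python/NumPy code shows the standard quantities involved in the Lorentz (hyperboloid) model: the Lorentzian inner product <x, y>_L = -x_0 y_0 + sum_{i>=1} x_i y_i, the induced distance d(x, y) = arccosh(-<x, y>_L), the Lorentz Gramian of a point set, and a hinge-style surrogate loss for ordinal constraints "entity i is more similar to entity j than to entity k". The function names, the lifting map, and the particular loss form are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def lorentz_inner(x, y):
    """Lorentzian inner product <x, y>_L = -x0*y0 + sum_{i>=1} xi*yi."""
    return -x[..., 0] * y[..., 0] + np.sum(x[..., 1:] * y[..., 1:], axis=-1)

def lorentz_distance(x, y):
    """Hyperbolic distance on the hyperboloid: d(x, y) = arccosh(-<x, y>_L)."""
    # Clip for numerical safety: -<x, y>_L >= 1 holds exactly on the hyperboloid.
    return np.arccosh(np.clip(-lorentz_inner(x, y), 1.0, None))

def lift_to_hyperboloid(z):
    """Map points z in R^d onto the hyperboloid {x : <x, x>_L = -1, x0 > 0}."""
    x0 = np.sqrt(1.0 + np.sum(z ** 2, axis=-1, keepdims=True))
    return np.concatenate([x0, z], axis=-1)

def lorentz_gramian(X):
    """Matrix of pairwise Lorentzian inner products G_ij = <x_i, x_j>_L."""
    J = np.diag([-1.0] + [1.0] * (X.shape[1] - 1))  # Minkowski metric
    return X @ J @ X.T

def triplet_hinge_loss(X, triplets, margin=0.1):
    """Hinge surrogate for ordinal constraints (i, j, k): want d(i, j) + margin < d(i, k).
    (Illustrative loss; the paper analyzes generalization, not a specific solver.)"""
    i, j, k = triplets[:, 0], triplets[:, 1], triplets[:, 2]
    d_ij = lorentz_distance(X[i], X[j])
    d_ik = lorentz_distance(X[i], X[k])
    return np.mean(np.maximum(0.0, margin + d_ij - d_ik))

# Toy usage: four random points and one ordinal constraint.
rng = np.random.default_rng(0)
X = lift_to_hyperboloid(rng.normal(scale=0.5, size=(4, 2)))
triplets = np.array([[0, 1, 2]])  # "entity 0 is more similar to 1 than to 2"
print(lorentz_gramian(X).round(3))
print(triplet_hinge_loss(X, triplets))
```

The paper's bound is stated as at most exponential in the embedding space's radius; in the sketch, that radius roughly corresponds to how far the lifted points lie from the hyperboloid's apex, so restricting the norm of z is what keeps such a bound controlled.
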
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
suzuki21a.pdf |  | 336.8 kB | Adobe PDF (View/Open)
Open Access Information
Status: open access
File Version: Version of Record

Page views: 124 (last week: 10), as of Nov 10, 2025
Downloads: 63, as of Nov 10, 2025


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.