Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/106907
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | en_US
dc.creator | Wang, Y | en_US
dc.creator | Ju, Y | en_US
dc.creator | Jian, M | en_US
dc.creator | Lam, KM | en_US
dc.creator | Qia, L | en_US
dc.creator | Dong, J | en_US
dc.date.accessioned | 2024-06-07T00:58:47Z | -
dc.date.available | 2024-06-07T00:58:47Z | -
dc.identifier.isbn | 978-1-5106-3835-8 | en_US
dc.identifier.isbn | 978-1-5106-3836-5 (electronic) | en_US
dc.identifier.issn | 0277-786X | en_US
dc.identifier.uri | http://hdl.handle.net/10397/106907 | -
dc.description | International Workshop on Advanced Imaging Technology (IWAIT) 2020, 5-7 January 2020, Yogyakarta, Indonesia | en_US
dc.language.iso | en | en_US
dc.publisher | SPIE - International Society for Optical Engineering | en_US
dc.rights | © (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this publication for a fee or for commercial purposes, and modification of the contents of the publication are prohibited. | en_US
dc.rights | The following publication Yingyu Wang, Yakun Ju, Muwei Jian, Kin-Man Lam, Lin Qia, and Junyu Dong "Self-supervised depth completion with attention-based loss", Proc. SPIE 11515, International Workshop on Advanced Imaging Technology (IWAIT) 2020, 115152T (1 June 2020) is available at https://doi.org/10.1117/12.2566222. | en_US
dc.subject | Attention-based loss | en_US
dc.subject | Deep completion | en_US
dc.subject | Monocular depth estimation | en_US
dc.subject | Self-supervised | en_US
dc.subject | Statistical properties | en_US
dc.title | Self-supervised depth completion with attention-based loss | en_US
dc.type | Conference Paper | en_US
dc.identifier.volume | 11515 | en_US
dc.identifier.doi | 10.1117/12.2566222 | en_US
dcterms.abstract | Depth completion, which predicts dense depth from sparse depth, has important applications in robotics, autonomous driving, and virtual reality, and it compensates for the low accuracy of monocular depth estimation. However, previous depth completion works treated every depth pixel equally and ignored the statistical properties of the depth value distribution. In this paper, we propose a self-supervised framework that generates accurate dense depth from RGB images and sparse depth without requiring dense depth labels. We propose a novel attention-based loss that takes the statistical properties of the depth value distribution into account. We evaluate our approach on the KITTI dataset. The experimental results show that our method achieves state-of-the-art performance, and an ablation study confirms that it effectively improves the accuracy of the results. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Proceedings of SPIE : the International Society for Optical Engineering, 2020, v. 11515, 115152T | en_US
dcterms.isPartOf | Proceedings of SPIE : the International Society for Optical Engineering | en_US
dcterms.issued | 2020 | -
dc.identifier.scopus | 2-s2.0-85086630693 | -
dc.relation.conference | International Workshop on Advanced Imaging Technology [IWAIT] | en_US
dc.identifier.eissn | 1996-756X | en_US
dc.identifier.artn | 115152T | en_US
dc.description.validate | 202405 bcch | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | EIE-0202 | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 26683835 | -
dc.description.oaCategory | Green (AAM) | en_US
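The abstract above describes an attention-based loss that weights depth errors according to the statistics of the depth value distribution, but this record does not include the paper's actual formulation. The following is therefore only a minimal PyTorch-style sketch of one plausible weighting: the function name, the depth-proportional attention term, and the toy tensors are illustrative assumptions, not the authors' method.

```python
import torch

def attention_weighted_depth_loss(pred, sparse_gt, eps=1e-6):
    """Illustrative attention-weighted depth loss (not the paper's formulation).

    Computes an L1 error on pixels where sparse ground-truth depth is
    available, weighted by a simple attention term derived from the depth
    values themselves, so that depth ranges dominating the distribution do
    not overwhelm rarer (typically farther) depths.
    """
    valid = sparse_gt > 0                      # sparse depth samples only
    abs_err = torch.abs(pred - sparse_gt)[valid]

    # Assumed attention weighting: scale each pixel's error by its depth
    # relative to the mean valid depth, keeping the overall loss magnitude
    # comparable to a plain L1 loss.
    depths = sparse_gt[valid]
    attention = depths / (depths.mean() + eps)

    return (attention * abs_err).mean()


# Toy usage with random tensors standing in for a predicted dense depth map
# and a sparse depth map (hypothetical shape: batch x 1 x H x W, metres).
if __name__ == "__main__":
    pred = torch.rand(1, 1, 64, 64) * 80.0
    sparse = torch.zeros(1, 1, 64, 64)
    sparse[..., ::8, ::8] = torch.rand(1, 1, 8, 8) * 80.0
    print(attention_weighted_depth_loss(pred, sparse))
```

In this sketch the weight simply grows with depth relative to the mean of the sparse samples, which is one straightforward way to keep frequent near-range depths from dominating the average error; the attention term used in the paper may differ.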
Appears in Collections:Conference Paper
Files in This Item:
File | Description | Size | Format
Lam_Self-Supervised_Depth_Completion.pdf | Pre-Published version | 8.2 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
