Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/106882
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Electrical and Electronic Engineering | - |
dc.creator | Ju, Y | en_US |
dc.creator | Lam, KM | en_US |
dc.creator | Chen, Y | en_US |
dc.creator | Qi, L | en_US |
dc.creator | Dong, J | en_US |
dc.date.accessioned | 2024-06-07T00:58:37Z | - |
dc.date.available | 2024-06-07T00:58:37Z | - |
dc.identifier.isbn | 978-0-9992411-6-5 | en_US |
dc.identifier.uri | http://hdl.handle.net/10397/106882 | - |
dc.description | Twenty-Ninth International Joint Conference on Artificial Intelligence, Yokohama, Japan, January 7-15, 2021 | en_US |
dc.language.iso | en | en_US |
dc.publisher | International Joint Conference on Artificial Intelligence | en_US |
dc.rights | Copyright © 2020 International Joint Conferences on Artificial Intelligence | en_US |
dc.rights | The following publication Ju, Y., Lam, K. M., Chen, Y., Qi, L., & Dong, J. (2021, January). Pay attention to devils: A photometric stereo network for better details. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (pp. 694-700) is available at https://doi.org/10.24963/ijcai.2020/97. | en_US |
dc.title | Pay attention to devils : a photometric stereo network for better details | en_US |
dc.type | Conference Paper | en_US |
dc.identifier.doi | 10.24963/ijcai.2020/97 | en_US |
dcterms.abstract | We present an attention-weighted loss in a photometric stereo neural network to improve 3D surface recovery accuracy in complex-structured areas, such as edges and crinkles, where existing learning-based methods often fail. Instead of applying a uniform penalty to all pixels, our method employs a per-pixel attention-weighted loss, learned in a self-supervised manner, which avoids blurry reconstruction results in such difficult regions. The network first estimates a surface normal map and an adaptive attention map; the latter is then used to calculate a pixel-wise attention-weighted loss that focuses on complex regions. In these regions, the attention-weighted loss applies higher weights to the detail-preserving gradient loss to produce clear surface reconstructions. Experiments on real datasets show that our approach significantly outperforms traditional photometric stereo algorithms and state-of-the-art learning-based methods. | - |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | In IJCAI'20 : Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, p. 694-700. International Joint Conferences on Artificial Intelligence, 2020 | en_US |
dcterms.issued | 2020 | - |
dc.relation.conference | International Joint Conference on Artificial Intelligence [IJCAI] | - |
dc.description.validate | 202405 bcch | - |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | EIE-0098 | - |
dc.description.fundingSource | Self-funded | en_US |
dc.description.pubStatus | Published | en_US |
dc.identifier.OPUS | 50281121 | - |
dc.description.oaCategory | VoR allowed | en_US |
Appears in Collections: | Conference Paper |
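The abstract describes blending a detail-preserving gradient loss with a standard normal-estimation loss, weighted per pixel by a learned attention map. A minimal NumPy sketch of that idea follows; the function name, the cosine normal loss, and the finite-difference gradient term are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def attention_weighted_loss(n_pred, n_gt, attention):
    """Per-pixel blend of a normal loss and a gradient loss,
    weighted by a learned attention map (illustrative sketch).

    n_pred, n_gt: (H, W, 3) unit surface-normal maps.
    attention:    (H, W) weights in [0, 1]; high values mark complex
                  regions (edges, crinkles) where the detail-preserving
                  gradient term should dominate.
    """
    # Cosine-distance normal loss per pixel: 1 - <n_pred, n_gt>.
    normal_loss = 1.0 - np.sum(n_pred * n_gt, axis=-1)

    # Gradient loss: L1 difference of finite-difference image gradients,
    # which penalises blurred detail around edges.
    def grads(n):
        gx = np.diff(n, axis=1, prepend=n[:, :1])
        gy = np.diff(n, axis=0, prepend=n[:1])
        return gx, gy

    px, py = grads(n_pred)
    tx, ty = grads(n_gt)
    grad_loss = np.sum(np.abs(px - tx) + np.abs(py - ty), axis=-1)

    # Attention-weighted blend: complex pixels lean on the gradient term,
    # smooth pixels on the plain normal loss.
    per_pixel = attention * grad_loss + (1.0 - attention) * normal_loss
    return per_pixel.mean()
```

In the paper the attention map is predicted by the network itself and trained without extra supervision; here it is simply passed in as an array to keep the sketch self-contained.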
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.