Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/82316
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Electronic and Information Engineering | - |
dc.creator | Xi, Y | - |
dc.creator | Zheng, JB | - |
dc.creator | Jia, WJ | - |
dc.creator | He, XJ | - |
dc.creator | Li, HH | - |
dc.creator | Ren, ZQ | - |
dc.creator | Lam, K | - |
dc.date.accessioned | 2020-05-05T05:59:32Z | - |
dc.date.available | 2020-05-05T05:59:32Z | - |
dc.identifier.uri | http://hdl.handle.net/10397/82316 | - |
dc.language.iso | en | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ | en_US |
dc.rights | The following publication Y. Xi et al., "See Clearly in the Distance: Representation Learning GAN for Low Resolution Object Recognition," in IEEE Access, vol. 8, pp. 53203-53214, 2020 is available at https://dx.doi.org/10.1109/ACCESS.2020.2978980 | en_US |
dc.subject | Image resolution | en_US |
dc.subject | Object recognition | en_US |
dc.subject | Signal resolution | en_US |
dc.subject | Feature extraction | en_US |
dc.subject | Image recognition | en_US |
dc.subject | Generative adversarial networks | en_US |
dc.subject | Task analysis | en_US |
dc.subject | Convolutional neural networks | en_US |
dc.subject | Low resolution object recognition | en_US |
dc.subject | Representation learning | en_US |
dc.title | See clearly in the distance : representation learning GAN for low resolution object recognition | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.identifier.spage | 53203 | - |
dc.identifier.epage | 53214 | - |
dc.identifier.volume | 8 | - |
dc.identifier.doi | 10.1109/ACCESS.2020.2978980 | - |
dcterms.abstract | Identifying tiny objects with extremely low resolution is generally considered a very challenging task, even for human vision, due to the limited information presented inside the object areas. There have been only limited attempts in recent years to deal with low-resolution recognition. The existing solutions rely on either generating super-resolution images or learning multi-scale features. However, their performance improvement becomes very limited, especially when the resolution becomes very low. In this paper, we propose a Representation Learning Generative Adversarial Network (RL-GAN) to generate a super image representation that is optimized for recognition. Our solution deals with the classical vision task of object recognition in the distance. We evaluate our idea on the challenging task of low-resolution object recognition. Comparisons of experimental results conducted on public datasets and our newly created WIDER-SHIP dataset demonstrate the effectiveness of our RL-GAN, which improves classification results significantly, with a 10–15% gain on average over benchmark solutions. | - |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | IEEE access, 2020, v. 8, p. 53203-53214 | - |
dcterms.isPartOf | IEEE access | - |
dcterms.issued | 2020 | - |
dc.identifier.isi | WOS:000524749000022 | - |
dc.identifier.scopus | 2-s2.0-85082617893 | - |
dc.identifier.eissn | 2169-3536 | - |
dc.description.validate | 202006 bcrc | - |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
dc.description.pubStatus | Published | en_US |
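The abstract above outlines the core idea: rather than super-resolving pixels, train a GAN whose generator maps a low-resolution (LR) image to a feature representation that both fools a discriminator trained on high-resolution (HR) features and remains useful for classification. The paper's actual architecture and loss weighting are not given in this record, so the following is only a minimal PyTorch-style sketch under those assumptions; all module names, dimensions, and loss terms are hypothetical.

```python
# Hypothetical sketch of a representation-learning GAN for low-resolution
# recognition (not the authors' code): a generator embeds LR images into the
# feature space of a frozen HR encoder, and a classifier keeps the embedding
# discriminative. All sizes and loss terms here are illustrative assumptions.
import torch
import torch.nn as nn

EMBED_DIM, NUM_CLASSES = 256, 10  # illustrative, not from the paper


def make_encoder() -> nn.Module:
    """Small conv encoder: image -> EMBED_DIM vector (any input size)."""
    return nn.Sequential(
        nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(128, EMBED_DIM),
    )


G_lr = make_encoder()        # generator: embeds low-resolution inputs
E_hr = make_encoder()        # teacher: embeds high-resolution counterparts
for p in E_hr.parameters():  # teacher is kept frozen in this sketch
    p.requires_grad_(False)

# Discriminator distinguishes HR embeddings ("real") from LR ones ("fake").
D = nn.Sequential(nn.Linear(EMBED_DIM, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))
# Classifier on embeddings, so recognition accuracy shapes the representation.
C = nn.Linear(EMBED_DIM, NUM_CLASSES)

bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()


def generator_loss(lr_imgs, labels):
    """Generator objective: look HR-like to D and classify correctly."""
    z = G_lr(lr_imgs)
    real_target = torch.ones(z.size(0), 1)
    return bce(D(z), real_target) + ce(C(z), labels)


def discriminator_loss(lr_imgs, hr_imgs):
    """Discriminator objective: separate teacher and generator embeddings."""
    z_fake, z_real = G_lr(lr_imgs).detach(), E_hr(hr_imgs)
    ones, zeros = torch.ones(z_real.size(0), 1), torch.zeros(z_fake.size(0), 1)
    return bce(D(z_real), ones) + bce(D(z_fake), zeros)
```

A real training loop would alternate updates of `discriminator_loss` and `generator_loss` with separate optimizers; the exact losses used in the paper should be taken from the publication at the DOI above.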
Appears in Collections: | Journal/Magazine Article |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Xi_Learning_GAN_Low.pdf | | 8.06 MB | Adobe PDF | View/Open |
Page views: 111 (1 last week, 1 last month), as of Mar 24, 2024
Downloads: 104, as of Mar 24, 2024
SCOPUS™ citations: 10, as of Mar 28, 2024
Web of Science™ citations: 6, as of Mar 28, 2024
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.