Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/104385
DC Field: Value [Language]

dc.contributor: Department of Industrial and Systems Engineering
dc.creator: Zhang, Y [en_US]
dc.creator: Ma, SY [en_US]
dc.creator: Zhang, X [en_US]
dc.creator: Li, L [en_US]
dc.creator: Ip, WH [en_US]
dc.creator: Yung, KL [en_US]
dc.date.accessioned: 2024-02-05T08:49:22Z
dc.date.available: 2024-02-05T08:49:22Z
dc.identifier.issn: 0920-8542 [en_US]
dc.identifier.uri: http://hdl.handle.net/10397/104385
dc.language.iso: en [en_US]
dc.publisher: Springer New York LLC [en_US]
dc.rights: © Springer Science+Business Media, LLC, part of Springer Nature 2020 [en_US]
dc.rights: This version of the article has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature’s AM terms of use (https://www.springernature.com/gp/open-research/policies/accepted-manuscript-terms), but is not the Version of Record and does not reflect post-acceptance improvements or any corrections. The Version of Record is available online at: http://dx.doi.org/10.1007/s11227-020-03189-y. [en_US]
dc.subject: Blurred image [en_US]
dc.subject: Camera pointing system [en_US]
dc.subject: Chang’e space mission [en_US]
dc.subject: GANs [en_US]
dc.subject: Resize convolution [en_US]
dc.title: EDGAN: motion deblurring algorithm based on enhanced generative adversarial networks [en_US]
dc.type: Journal/Magazine Article [en_US]
dc.identifier.spage: 8922 [en_US]
dc.identifier.epage: 8937 [en_US]
dc.identifier.volume: 76 [en_US]
dc.identifier.issue: 11 [en_US]
dc.identifier.doi: 10.1007/s11227-020-03189-y [en_US]
dcterms.abstract: Removing motion blur, which is caused by relative motion between the camera and the photographed object, has long been an important problem in the computer vision literature, and deep learning algorithms have recently made notable progress on image deblurring. This paper proposes an enhanced generative adversarial network model that weights feature channels to generate sharp images and eliminates checkerboard artefacts, while a mixed loss function enables the network to output high-quality images. The approach is evaluated on the GOPRO and Lai datasets. On the GOPRO datasets, the proposed approach reaches a peak signal-to-noise ratio of up to 28.674, versus 27.454 for DeblurGAN, and a structural similarity measure of up to 0.969, versus 0.939 for DeblurGAN. The algorithm is further tested on images obtained from China’s Chang’e 3 Lander; because the checkerboard effect is eliminated, the deblurred images have a better visual appearance. In the benchmark experiments the proposed method achieved higher performance and efficiency in both qualitative and quantitative terms, and the results provide insights into the design and development of the camera pointing system mounted on the Lander to capture images of the moon and the rover for the Chang’e space mission. (The channel weighting, resize convolution, and the reported metrics are sketched after this record.)
dcterms.accessRights: open access [en_US]
dcterms.bibliographicCitation: Journal of supercomputing, Nov. 2020, v. 76, no. 11, p. 8922-8937 [en_US]
dcterms.isPartOf: Journal of supercomputing [en_US]
dcterms.issued: 2020-11
dc.identifier.scopus: 2-s2.0-85079118825
dc.identifier.eissn: 1573-0484 [en_US]
dc.description.validate: 202402 bcch
dc.description.oa: Accepted Manuscript [en_US]
dc.identifier.FolderNumber: ISE-0237
dc.description.fundingSource: Others [en_US]
dc.description.fundingText: Natural Science Foundation of Guangdong Province; The Hong Kong Polytechnic University; Shenzhen Technology University [en_US]
dc.description.pubStatus: Published [en_US]
dc.identifier.OPUS: 16530245
dc.description.oaCategory: Green (AAM) [en_US]
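
The abstract above names two generator-side ideas without detail: weighting feature channels to sharpen the output, and eliminating checkerboard ("draughtboard") artefacts, for which the keyword list suggests resize convolution. The record does not include EDGAN's actual architecture, so the following is a minimal PyTorch sketch under common-practice assumptions: squeeze-and-excitation-style channel weighting and a nearest-neighbour resize convolution. The module names, reduction ratio, and kernel size are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelWeight(nn.Module):
    """Squeeze-and-excitation-style channel weighting (an assumption:
    the record only says the model uses 'the weight of feature channel')."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # global average pool -> one weight per channel
        return x * w.view(b, c, 1, 1)     # reweight the feature channels

class ResizeConv(nn.Module):
    """Nearest-neighbour upsampling followed by a stride-1 convolution,
    the usual substitute for transposed convolution when checkerboard
    artefacts must be avoided."""
    def __init__(self, in_ch: int, out_ch: int, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.interpolate(x, scale_factor=self.scale, mode="nearest")
        return self.conv(x)

# Hypothetical usage inside a generator's upsampling path:
feats = torch.randn(1, 64, 32, 32)
feats = ChannelWeight(64)(feats)
up = ResizeConv(64, 32)(feats)    # shape: (1, 32, 64, 64)
```

A strided transposed convolution overlaps its kernel unevenly across output pixels, which is what produces the checkerboard pattern; upsampling first and then convolving with stride 1 gives every output pixel the same kernel coverage.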
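The abstract also reports the two figures of merit used on the GOPRO datasets: peak signal-to-noise ratio (up to 28.674 for the proposed approach versus 27.454 for DeblurGAN) and structural similarity (up to 0.969 versus 0.939). As a reference for what those numbers measure, here is a short sketch that computes PSNR directly from its definition, 10·log10(MAX²/MSE), and SSIM via scikit-image; the image arrays are placeholders, not the paper's evaluation pipeline.

```python
import numpy as np
from skimage.metrics import structural_similarity  # pip install scikit-image

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

# Placeholder pair; a real run would load GOPRO sharp/deblurred frames.
sharp = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
deblurred = np.clip(sharp + np.random.normal(0, 5, sharp.shape), 0, 255).astype(np.uint8)

print("PSNR:", psnr(sharp, deblurred))
print("SSIM:", structural_similarity(sharp, deblurred, channel_axis=-1))
```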
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: Ip_EDGAN_Motion_Deblurring.pdf
Description: Pre-Published version
Size: 1.73 MB
Format: Adobe PDF
Open Access Information:
Status: open access
File Version: Final Accepted Manuscript

Page views: 68 (as of May 11, 2025)
Downloads: 30 (as of May 11, 2025)
Scopus citations: 11 (as of Jun 5, 2025)
Web of Science citations: 12 (as of Jun 5, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.