Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/95539
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Electronic and Information Engineering | en_US |
dc.creator | Zheng, H | en_US |
dc.creator | Yan, Y | en_US |
dc.creator | Wang, Y | en_US |
dc.creator | Shen, X | en_US |
dc.creator | Lu, C | en_US |
dc.date.accessioned | 2022-09-21T01:40:49Z | - |
dc.date.available | 2022-09-21T01:40:49Z | - |
dc.identifier.issn | 0733-8724 | en_US |
dc.identifier.uri | http://hdl.handle.net/10397/95539 | - |
dc.language.iso | en | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
dc.rights | © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
dc.rights | The following publication H. Zheng, Y. Yan, Y. Wang, X. Shen and C. Lu, "Deep Learning Enhanced Long-Range Fast BOTDA for Vibration Measurement," in Journal of Lightwave Technology, vol. 40, no. 1, pp. 262-268, Jan. 1, 2022 is available at https://doi.org/10.1109/JLT.2021.3117284 | en_US |
dc.subject | Brillouin optical time-domain analysis (BOTDA) | en_US |
dc.subject | Deep learning | en_US |
dc.subject | Ultra-fast measurement | en_US |
dc.title | Deep learning enhanced long-range fast BOTDA for vibration measurement | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.identifier.spage | 262 | en_US |
dc.identifier.epage | 268 | en_US |
dc.identifier.volume | 40 | en_US |
dc.identifier.issue | 1 | en_US |
dc.identifier.doi | 10.1109/JLT.2021.3117284 | en_US |
dcterms.abstract | In this paper, we propose and experimentally demonstrate a scheme of deep-learning-enhanced long-range fast Brillouin optical time-domain analysis (BOTDA). The volumetric data from fast BOTDA are denoised and demodulated using a deep video denoising network and a deep neural network, respectively. Benefiting from these advanced deep learning algorithms, the sensing range of fast BOTDA is successfully extended to 10 km. In the experiment, a vibration signal is measured with a 23 Hz sampling rate, 2 m spatial resolution, and 1.19 MHz accuracy over a 10 km single-mode fiber with only 4 averages. Owing to the low computational complexity and GPU acceleration, the network takes less than 0.04 s to process 100 × 21800 data points, which is much faster than conventional algorithms. This method shows the potential for real-time vibration measurement in fast BOTDA over a long sensing range. | en_US |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | Journal of lightwave technology, 1 Jan. 2022, v. 40, no. 1, p. 262-268 | en_US |
dcterms.isPartOf | Journal of lightwave technology | en_US |
dcterms.issued | 2022-01-01 | - |
dc.identifier.scopus | 2-s2.0-85117125604 | - |
dc.identifier.eissn | 1558-2213 | en_US |
dc.description.validate | 202209 bcfc | en_US |
dc.description.oa | Accepted Manuscript | en_US |
dc.identifier.FolderNumber | EIE-0102 | - |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | General Research Fund PolyU 15209919; project ZVGB of the Hong Kong Polytechnic University | en_US |
dc.description.pubStatus | Published | en_US |
dc.identifier.OPUS | 59425534 | - |
dc.description.oaCategory | Green (AAM) | en_US |
Appears in Collections: | Journal/Magazine Article |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Zheng_Deep_Learning_Enhanced.pdf | Pre-Published version | 2.21 MB | Adobe PDF | View/Open |
Page views: 65 (last week: 0, last month: 0), as of Oct 13, 2024
Downloads: 147, as of Oct 13, 2024
Scopus™ citations: 20, as of Oct 17, 2024
Web of Science™ citations: 16, as of Oct 10, 2024
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.