Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/99178
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Biomedical Engineering | en_US |
dc.contributor | Mainland Development Office | en_US |
dc.creator | Li, C | en_US |
dc.creator | Yao, J | en_US |
dc.creator | Gao, Y | en_US |
dc.creator | Lai, P | en_US |
dc.creator | He, Y | en_US |
dc.creator | Qi, S | en_US |
dc.creator | Zheng, W | en_US |
dc.date.accessioned | 2023-06-29T03:11:32Z | - |
dc.date.available | 2023-06-29T03:11:32Z | - |
dc.identifier.citation | v. 50, no. 9, ARTN 0907107 | - |
dc.identifier.issn | 0258-7025 | en_US |
dc.identifier.uri | http://hdl.handle.net/10397/99178 | - |
dc.language.iso | en | en_US |
dc.publisher | 上海科學技術出版社 | en_US |
dc.rights | © 2023 China Academic Journal Electronic Publishing House. It is to be used strictly for educational and research purposes. | en_US |
dc.subject | Microscopy | en_US |
dc.subject | Deep learning | en_US |
dc.subject | Adaptive optics | en_US |
dc.subject | Large field-of-view | en_US |
dc.subject | Two-photon microscopy | en_US |
dc.title | Extending field-of-view of two-photon microscopy using deep learning | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.description.otherinformation | Author name used in this publication: 李迟件 | en_US |
dc.description.otherinformation | Author name used in this publication: 姚靖 | en_US |
dc.description.otherinformation | Author name used in this publication: 高玉峰 | en_US |
dc.description.otherinformation | Author name used in this publication: 赖溥祥 | en_US |
dc.description.otherinformation | Author name used in this publication: 何悦之 | en_US |
dc.description.otherinformation | Author name used in this publication: 齐苏敏 | en_US |
dc.description.otherinformation | Author name used in this publication: 郑炜 | en_US |
dc.description.otherinformation | Title in Simplified Chinese: 利用深度学习扩展双光子成像视场 | en_US |
dc.identifier.volume | 50 | en_US |
dc.identifier.issue | 9 | en_US |
dc.identifier.doi | 10.3788/CJL221433 | en_US |
dcterms.abstract | Objective: Two-photon microscopy (TPM) has been widely used in many fields, such as in vivo tumor imaging, neuroimaging, and brain disease research. However, the small field-of-view (FOV) of two-photon imaging (typically within a diameter of 1 mm) limits its further application. Although the FOV can be extended through adaptive optics (AO) technology, the complex optical paths, additional device costs, and cumbersome operating procedures limit its wider adoption. In this study, we propose using deep learning in place of adaptive optics to expand the FOV of two-photon imaging. A large TPM FOV can be realized without additional hardware (such as a special objective lens or a phase-compensation device). In addition, a BN-free attention activation residual U-Net (nBRAnet) network framework is designed for this imaging method; it corrects aberrations efficiently without requiring wavefront detection (an illustrative sketch of such a building block appears after the metadata table). | en_US |
dcterms.abstract | Methods: Commercially available objectives have a nominal imaging FOV calibrated by the manufacturer. Within the nominal FOV, the objective lens exhibits negligible aberrations; beyond it, aberrations increase dramatically, so the usable imaging FOV is ordinarily limited to the nominal region. In this study, we improved the imaging quality outside the nominal FOV by combining adaptive optics (AO) and deep learning. Aberrated and AO-corrected images were collected outside the nominal FOV, yielding a paired dataset of uncorrected and AO-corrected images. A supervised neural network was trained with the aberrated images as input and the AO-corrected images as output (see the training sketch after the metadata table). After training, images collected from regions outside the nominal FOV could be fed directly to the network, which produced aberration-corrected images, so the imaging system could be used without AO hardware. | en_US |
dcterms.abstract | Results and Discussions: The experimental tests cover samples such as fluorescent beads with a diameter of 1 μm and Thy1-GFP and CX3CR1-GFP mouse brain slices, together with the corresponding network outputs. The high peak signal-to-noise ratio (PSNR) values between the test outputs and the ground truth demonstrate the feasibility of extending the TPM imaging FOV using deep learning (a PSNR sketch follows the metadata table). Intensity profiles along horizontal line cuts are also compared in detail between the nBRAnet output images and the ground truth (Figs. 3, 4, and 5): regions of the extended FOV of different samples were selected at random for analysis, and the profiles agree closely. The experimental results show that, after applying the network, both the resolution and the fluorescence intensity can be restored to a nearly aberration-free level, close to the result obtained after correction with AO hardware. To demonstrate the advantages of the proposed network framework, the traditional U-Net structure and the very deep super-resolution (VDSR) model were trained on the same dataset for comparison. The results of the VDSR model contain a considerable amount of noise, whereas the U-Net results lose some details (Fig. 6). The higher PSNR values demonstrate the strength of our nBRAnet framework (Table 3). | en_US |
dcterms.abstract | Conclusions: This study provides a novel method to effectively extend the FOV of TPM imaging by designing the nBRAnet network framework. In other words, deep learning is used to enhance the acquired images and expand the nominal FOV of commercial objectives. The experimental results show that images from the extended FOV can be restored to their AO-corrected versions using the trained network; that is, deep learning can replace AO hardware for expanding the FOV of commercially available objectives, simplifying operation and reducing system cost. An extended FOV obtained using deep learning can be employed for cross-regional or whole-brain imaging. | en_US |
dcterms.abstract | Two-photon imaging has been widely used in in vivo tumor imaging, functional neuroimaging, and brain disease research, but its small FOV (generally within a diameter of 1 mm) limits further application. Although special optical designs or adaptive optics can effectively enlarge the FOV, complex optical layouts, high device costs, and cumbersome operating procedures have limited the spread of these techniques. We propose a new approach that uses deep learning in place of adaptive optics to extend the two-photon imaging FOV, achieving large-FOV two-photon imaging at low cost (no special objective, no phase-compensation device) and with simple operation. An nBRAnet network framework suited to extending the two-photon FOV in optical microscopy systems is designed: residual modules and a spatial attention mechanism are introduced so the network exploits feature-map information more effectively, and data normalization is removed to increase image-contrast information. Experimental results show that the proposed deep learning method can effectively replace adaptive optics, enhancing fine structural features in the extended FOV and restoring its imaging resolution and signal-to-noise ratio, extending the two-photon FOV to a diameter of 3.46 mm with a PSNR above 27 dB. The method is low-cost, simple to operate, and delivers marked image enhancement, and it is expected to provide an economical and practical option for cross-regional or whole-brain imaging. | en_US |
dcterms.accessRights | open access | en_US |
dcterms.alternative | 利用深度学习扩展双光子成像视场 | en_US |
dcterms.bibliographicCitation | 中国激光 (Chinese journal of lasers), May 2023, v. 50, no. 9, 0907107 | en_US |
dcterms.isPartOf | 中国激光 (Chinese journal of lasers) | en_US |
dcterms.issued | 2023-05 | - |
dc.identifier.isi | WOS:000982783800001 | - |
dc.identifier.artn | 0907107 | en_US |
dc.description.validate | 202306 bcww | en_US |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | a2097 | - |
dc.identifier.SubFormID | 46575 | - |
dc.description.fundingSource | RGC | en_US |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | National Natural Science Foundation of China (81927803, 81930048); Hong Kong Research Grants Council (RGC) (15217721); Shenzhen Science and Technology Program basic research projects (RCJC20200714114433058, ZDSY20130401165820357) | en_US |
dc.description.pubStatus | Published | en_US |
dc.description.oaCategory | VoR allowed | en_US |
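The record describes nBRAnet only at a high level: a U-Net-style framework whose residual blocks feed a spatial attention mechanism, with batch normalization omitted to retain image-contrast information. The paper's code is not reproduced here, so the following PyTorch sketch is purely illustrative: the names `SpatialAttention`, `ResidualSpatialAttnBlock`, and `n_feats` are hypothetical, and the attention follows the common CBAM-style formulation, which the article does not confirm.

```python
# Illustrative sketch only; not the authors' published code.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: weight each pixel by a learned gate."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool across channels, then learn a per-pixel gate in [0, 1].
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate

class ResidualSpatialAttnBlock(nn.Module):
    """Residual block with no BatchNorm, matching the abstract's statement
    that data normalization is removed to keep contrast information."""
    def __init__(self, n_feats: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_feats, n_feats, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(n_feats, n_feats, 3, padding=1),
        )
        self.attn = SpatialAttention()

    def forward(self, x):
        return x + self.attn(self.body(x))
```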
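The Methods paragraph amounts to standard supervised image-to-image regression: aberrated images as input, AO-corrected images as target. A minimal PyTorch sketch of that setup follows; the L1 loss, Adam optimizer, batch size, and learning rate are assumptions, since the record does not state them.

```python
# Minimal supervised training sketch under assumed hyperparameters.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train(model, aberrated, corrected, epochs=100, lr=1e-4, device="cpu"):
    """aberrated, corrected: paired float tensors of shape (N, 1, H, W)."""
    loader = DataLoader(TensorDataset(aberrated, corrected),
                        batch_size=8, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()  # assumed; the record does not name the loss
    model.to(device).train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)  # network output vs AO-corrected target
            loss.backward()
            opt.step()
    return model
```

After training, images from outside the nominal FOV are simply passed through `model` at inference time; no AO hardware is involved.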
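PSNR is the metric behind the comparisons in Table 3 and the "above 27 dB" figure in the translated abstract. A standard NumPy implementation, assuming both images share the same peak value, is:

```python
# Standard PSNR definition; `peak` is the maximum possible pixel value.
import numpy as np

def psnr(output: np.ndarray, ground_truth: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    mse = np.mean((output.astype(np.float64) - ground_truth.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```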
Appears in Collections: | Journal/Magazine Article |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
Li_Extending_Field-of-view_Two-photon.pdf | | 1.53 MB | Adobe PDF |