Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/94333
Title: A joint multitask learning model for cross-sectional and longitudinal predictions of visual field using OCT
Authors: Asaoka, R
Xu, L 
Murata, H
Kiwaki, T
Matsuura, M
Fujino, Y
Tanito, M
Mori, K
Ikeda, Y
Kanamoto, T
Inoue, K
Yamagami, J
Yamanishi, K
Issue Date: Dec-2021
Source: Ophthalmology science, Dec. 2021, v. 1, no. 4, 100055
Abstract: Purpose: We constructed a multitask learning model (latent space linear regression and deep learning [LSLR-DL]) in which the 2 tasks of cross-sectional prediction of the visual field (VF; central 10°) from OCT and longitudinal prediction of VF progression (30°) were performed jointly by sharing the deep learning (DL) component, so that information from each task was used in an auxiliary manner by the other (The Association for Computing Machinery's Special Interest Group on Knowledge Discovery and Data Mining [SIGKDD] 2021). The purpose of the current study was to evaluate the prediction accuracy of LSLR-DL using an independently prepared validation data set.
Design: Cohort study.
Participants: Cross-sectional training and testing data sets included the VF (Humphrey Field Analyzer [HFA] 10-2 test) and an OCT measurement (obtained within 6 months) from 591 eyes of 351 healthy people or patients with open-angle glaucoma (OAG) and from 155 eyes of 131 patients with OAG, respectively. Longitudinal training and testing data sets included 7984 VF results (HFA 24-2 test) from 998 eyes of 592 patients with OAG and 1184 VF results (HFA 24-2 test) from 148 eyes of 84 patients with OAG, respectively. Each eye had 8 VF test results (HFA 24-2 test). The OCT sequences within the observation period were used.
Methods: Root mean square error (RMSE) was used to evaluate the accuracy of LSLR-DL for the cross-sectional prediction of VF (HFA 10-2 test). For the longitudinal prediction, the final (eighth) VF test (HFA 24-2 test) was predicted using a shorter VF series and relevant OCT images, and the RMSE was calculated. For comparison, RMSE values were calculated by applying the DL component (cross-sectional prediction) and the ordinary pointwise linear regression (longitudinal prediction).
Main Outcome Measures: Root mean square error in the cross-sectional and longitudinal predictions.
Results: Using LSLR-DL, the mean RMSE in the cross-sectional prediction was 6.4 dB; in the longitudinal prediction, it ranged from 4.4 dB (using VF tests 1 and 2) to 3.7 dB (using VF tests 1-7). LSLR-DL significantly outperformed the other methods.
Conclusions: The results of this study indicate that LSLR-DL is useful for both the cross-sectional prediction of VF (HFA 10-2 test) and the longitudinal progression prediction of VF (HFA 24-2 test).
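As a rough illustration of the evaluation described in the Methods, the Python sketch below computes the pointwise RMSE and an ordinary pointwise linear regression (PLR) baseline of the kind used for the longitudinal comparison. The variable names, array shapes, and the 52-point handling of the HFA 24-2 grid are assumptions made for illustration and are not taken from the authors' implementation.

import numpy as np

def pointwise_rmse(predicted, measured):
    # Root mean square error (dB) over all visual field test points.
    return float(np.sqrt(np.mean((np.asarray(predicted) - np.asarray(measured)) ** 2)))

def plr_predict(times, series, target_time):
    # Ordinary pointwise linear regression (PLR) baseline:
    # fit sensitivity (dB) against follow-up time separately at every VF
    # test point and extrapolate to the time of the final (eighth) test.
    # times:  (n_visits,) follow-up times of the earlier VF tests (e.g., years)
    # series: (n_visits, n_points) dB sensitivities of those tests
    #         (e.g., n_points = 52 for HFA 24-2 excluding the blind spot)
    times = np.asarray(times, dtype=float)
    series = np.asarray(series, dtype=float)
    predictions = np.empty(series.shape[1])
    for j in range(series.shape[1]):
        slope, intercept = np.polyfit(times, series[:, j], deg=1)  # least squares per point
        predictions[j] = slope * target_time + intercept
    return predictions

# Hypothetical usage with dummy data: predict the eighth VF from the first
# two tests and score the prediction against the measured final VF.
rng = np.random.default_rng(0)
visit_times = np.array([0.0, 0.5])                # years of VF tests 1 and 2
vf_series = rng.uniform(20, 35, size=(2, 52))     # dummy 24-2 sensitivities (dB)
final_vf = rng.uniform(20, 35, size=52)           # dummy measured eighth VF
plr_final = plr_predict(visit_times, vf_series, target_time=3.5)
print(f"PLR baseline RMSE: {pointwise_rmse(plr_final, final_vf):.1f} dB")

The same pointwise RMSE applies to the cross-sectional task, computed between the OCT-predicted and measured HFA 10-2 sensitivities.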
Keywords: Glaucoma
OCT
Progression
Visual field
Publisher: Elsevier Inc.
Journal: Ophthalmology science 
EISSN: 2666-9145
DOI: 10.1016/j.xops.2021.100055
Rights: © 2021 by the American Academy of Ophthalmology
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Published by Elsevier Inc.
The following publication Asaoka, R., Xu, L., Murata, H., Kiwaki, T., Matsuura, M., Fujino, Y., ... & Yamanishi, K. (2021). A joint multitask learning model for cross-sectional and longitudinal predictions of visual field using OCT. Ophthalmology Science, 1(4), 100055 is available at https://doi.org/10.1016/j.xops.2021.100055
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: 1-s2.0-S2666914521000531-main.pdf (1.94 MB, Adobe PDF)
Open Access Information
Status: open access
File Version: Version of Record
