Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/108598
DC Field | Value | Language
dc.contributor | Department of Mechanical Engineering | en_US
dc.creator | Ren, F | en_US
dc.creator | Wang, C | en_US
dc.creator | Song, J | en_US
dc.creator | Tang, H | en_US
dc.date.accessioned | 2024-08-20T01:52:32Z | -
dc.date.available | 2024-08-20T01:52:32Z | -
dc.identifier.issn | 0022-1120 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/108598 | -
dc.language.iso | en | en_US
dc.publisher | Cambridge University Press | en_US
dc.rights | © The Author(s), 2024. Published by Cambridge University Press. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited. | en_US
dc.rights | The following publication Ren F, Wang C, Song J, Tang H. Deep reinforcement learning finds a new strategy for vortex-induced vibration control. Journal of Fluid Mechanics. 2024;990:A7 is available at https://doi.org/10.1017/jfm.2024.503. | en_US
dc.subject | Machine learning | en_US
dc.title | Deep reinforcement learning finds a new strategy for vortex-induced vibration control | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 990 | en_US
dc.identifier.doi | 10.1017/jfm.2024.503 | en_US
dcterms.abstract | As a promising machine learning method for active flow control (AFC), deep reinforcement learning (DRL) has been successfully applied in various scenarios, such as the drag reduction for stationary cylinders under both laminar and weakly turbulent conditions. However, current applications of DRL in AFC still suffer from drawbacks including excessive sensor usage, unclear search paths and insufficient robustness tests. In this study, we aim to tackle these issues by applying DRL-guided self-rotation to suppress the vortex-induced vibration (VIV) of a circular cylinder under the lock-in condition. With a state space consisting only of the acceleration, velocity and displacement of the cylinder, the DRL agent learns an effective control strategy that successfully suppresses the VIV amplitude by 99.6%. Through systematic comparisons between different combinations of sensory-motor cues as well as sensitivity analysis, we identify three distinct stages of the search path related to the flow physics, in which the DRL agent adjusts the amplitude, frequency and phase lag of the actions. Under the deterministic control, only a little forcing is required to maintain the control performance, and the VIV frequency is only slightly affected, showing that the present control strategy is distinct from those utilizing the lock-on effect. Through dynamic mode decomposition analysis, we observe that the growth rates of the dominant modes in the controlled case all become negative, indicating that DRL remarkably enhances the system stability. Furthermore, tests involving various Reynolds numbers and upstream perturbations confirm that the learned control strategy is robust. Finally, the present study shows that DRL is capable of controlling VIV with a very small number of sensors, making it effective, efficient, interpretable and robust. We anticipate that DRL could provide a general framework for AFC and a deeper understanding of the underlying physics. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Journal of fluid mechanics, 10 July 2024, v. 990, A7 | en_US
dcterms.isPartOf | Journal of fluid mechanics | en_US
dcterms.issued | 2024-07-10 | -
dc.identifier.eissn | 1469-7645 | en_US
dc.identifier.artn | A7 | en_US
dc.description.validate | 202408 bcch | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_TA | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | National Natural Science Foundation of China; Key Research and Development Program of Shaanxi; Natural Science Foundation of Chongqing, China; Fundamental Research Funds for the Central Universities; Hong Kong Polytechnic University Shenzhen Research Institute | en_US
dc.description.pubStatus | Published | en_US
dc.description.TA | CUP (2024) | en_US
dc.description.oaCategory | TA | en_US
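
Note on the abstract above: it describes the sensory-motor setup only at a high level, with the DRL agent observing just the cylinder's acceleration, velocity and displacement and acting through cylinder self-rotation. As a purely illustrative aid, the following is a minimal sketch of such an interface in a Gymnasium-style environment. The class name, bounds and reward shaping are assumptions made here for illustration, not the authors' implementation; see the article at https://doi.org/10.1017/jfm.2024.503 for the actual method.

# Hypothetical sketch only: not the authors' code.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class VIVSelfRotationEnv(gym.Env):
    """Sketch of the sensory-motor interface described in the abstract:
    observe (acceleration, velocity, displacement) of the cylinder,
    act by prescribing a bounded self-rotation rate."""

    def __init__(self, max_rotation_rate=1.0):
        # State space: the three structural quantities listed in the abstract.
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float64)
        # Action space: a single rotation rate; the bound is an assumed placeholder.
        self.action_space = spaces.Box(-max_rotation_rate, max_rotation_rate,
                                       shape=(1,), dtype=np.float64)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        return np.zeros(3), {}  # cylinder initially at rest

    def step(self, action):
        # A real environment would advance a CFD solver coupled to the cylinder's
        # structural dynamics here; this stub returns placeholder observations.
        obs = np.zeros(3)
        # Assumed reward shaping: penalise vibration amplitude and control effort.
        reward = -abs(obs[2]) - 0.01 * float(np.abs(action).sum())
        return obs, reward, False, False, {}
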
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Size | Format
Ren_Deep_Reinforcement_Learning.pdf | 11.72 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 102 (as of Nov 10, 2025)

Downloads: 25 (as of Nov 10, 2025)

Web of Science™ Citations: 15 (as of Dec 18, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.