Deep reinforcement learning finds a new strategy for vortex-induced vibration control

Authors:

Ren Feng, Wang Chenglei, Song Jian, Tang Hui

Abstract

As a promising machine learning method for active flow control (AFC), deep reinforcement learning (DRL) has been successfully applied in various scenarios, such as drag reduction for stationary cylinders under both laminar and weakly turbulent conditions. However, current applications of DRL in AFC still suffer from drawbacks, including excessive sensor usage, unclear search paths and insufficient robustness tests. In this study, we aim to tackle these issues by applying DRL-guided self-rotation to suppress the vortex-induced vibration (VIV) of a circular cylinder under the lock-in condition. With a state space consisting only of the acceleration, velocity and displacement of the cylinder, the DRL agent learns an effective control strategy that suppresses the VIV amplitude by $99.6\,\%$. Through systematic comparisons between different combinations of sensory-motor cues, as well as a sensitivity analysis, we identify three distinct stages of the search path related to the flow physics, in which the DRL agent adjusts the amplitude, frequency and phase lag of the actions. Under deterministic control, only a small amount of forcing is required to maintain the control performance, and the VIV frequency is only slightly affected, showing that the present control strategy is distinct from those utilizing the lock-on effect. Through dynamic mode decomposition analysis, we observe that the growth rates of the dominant modes in the controlled case all become negative, indicating that DRL markedly enhances the stability of the system. Furthermore, tests involving various Reynolds numbers and upstream perturbations confirm that the learned control strategy is robust. Finally, the present study shows that DRL is capable of controlling VIV with a very small number of sensors, making it effective, efficient, interpretable and robust. We anticipate that DRL could provide a general framework for AFC and a deeper understanding of the underlying physics.
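The abstract describes a DRL agent whose observation contains only the cylinder's displacement, velocity and acceleration, and whose action is the cylinder's self-rotation. Purely as an illustration of that sensor-actuator interface (not the authors' solver or code), the following is a minimal Gymnasium-style environment in which a toy mass-spring-damper surrogate stands in for the CFD; the class name VIVEnv, the surrogate lift model and the reward weights are all assumptions made for this sketch.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class VIVEnv(gym.Env):
    """Toy surrogate of the VIV control task (illustrative only).

    The cylinder's transverse motion is a forced mass-spring-damper driven
    near resonance (mimicking lock-in); the action is a normalised rotation
    rate whose effect on the lift is modelled by a simple linear term.
    """

    def __init__(self, dt=0.01, steps_per_action=10, episode_time=50.0):
        self.dt = dt
        self.steps_per_action = steps_per_action
        self.episode_time = episode_time
        # Observation: displacement, velocity and acceleration of the cylinder
        high = np.array([np.inf, np.inf, np.inf], dtype=np.float32)
        self.observation_space = spaces.Box(-high, high, dtype=np.float32)
        # Action: normalised self-rotation rate of the cylinder
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.y, self.ydot, self.yddot, self.t = 0.01, 0.0, 0.0, 0.0
        return self._obs(), {}

    def _obs(self):
        return np.array([self.y, self.ydot, self.yddot], dtype=np.float32)

    def step(self, action):
        omega = float(np.clip(action[0], -1.0, 1.0))
        for _ in range(self.steps_per_action):
            # Toy lift: near-resonant forcing minus a rotation-induced correction
            lift = np.sin(2.0 * np.pi * self.t) - 0.8 * omega
            self.yddot = lift - 0.1 * self.ydot - (2.0 * np.pi) ** 2 * self.y
            self.ydot += self.yddot * self.dt   # semi-implicit Euler update
            self.y += self.ydot * self.dt
            self.t += self.dt
        # Reward: penalise vibration amplitude and, mildly, the control effort
        reward = -abs(self.y) - 0.01 * abs(omega)
        return self._obs(), reward, False, self.t >= self.episode_time, {}
```

An off-the-shelf policy-gradient algorithm could then be trained against this interface, e.g. PPO('MlpPolicy', VIVEnv()).learn(total_timesteps=200_000) with stable-baselines3; in the actual study the agent is instead coupled to a CFD simulation of the flow past the vibrating cylinder.

The abstract also reports that the growth rates of the dominant dynamic-mode-decomposition (DMD) modes become negative under control. As a reminder of how such growth rates are obtained from snapshot data, here is a generic exact-DMD sketch (not the authors' implementation); the snapshot matrix, rank and time step below are synthetic placeholders.

```python
import numpy as np

def dmd_growth_rates(X, dt, r=2):
    """Exact DMD: continuous-time eigenvalues of equally spaced snapshots.

    X  : (n_space, n_time) snapshot matrix, columns separated by dt
    dt : time step between snapshots
    r  : SVD truncation rank

    The real part of each returned eigenvalue is the growth rate of the
    corresponding mode; negative values indicate a decaying (stable) mode.
    """
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    # Reduced linear operator advancing the flow state by one time step
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    lam = np.linalg.eigvals(Atilde)     # discrete-time eigenvalues
    return np.log(lam) / dt             # growth rate + i * angular frequency

# Synthetic decaying travelling wave with prescribed decay rate -0.3
t = np.arange(0.0, 10.0, 0.05)
x = np.linspace(0.0, 1.0, 200)[:, None]
X = np.exp(-0.3 * t) * np.sin(2.0 * np.pi * (x - 0.2 * t))
print(dmd_growth_rates(X, dt=0.05, r=2).real)   # real parts ~ -0.3
```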

Funders

Key Research and Development Projects of Shaanxi Province

National Natural Science Foundation of China

Fundamental Research Funds for the Central Universities

Publisher

Cambridge University Press (CUP)

