Improving Floating Offshore Wind Farm Flow Control With Scalable Model-Based Deep Reinforcement Learning
Published in | IEEE Transactions on Automation Science and Engineering, Vol. 22, pp. 18255-18268 |
---|---|
Main Authors | Mei, Mingyang; Kou, Peng; Xu, Yilin; Zhang, Zhihao; Tian, Runze; Liang, Deliang |
Format | Journal Article |
Language | English |
Published | IEEE, 2025 |
Subjects | Automation; computational modeling; deep reinforcement learning; energy generation; fatigue load; floating offshore wind farm; load modeling; optimal control; system dynamics; training; turbine repositioning; wake effect; wind energy; wind farms; wind speed; wind turbines |
ISSN | 1545-5955 (print); 1558-3783 (electronic) |
DOI | 10.1109/TASE.2025.3585016 |
Abstract | This paper proposes a model-based deep reinforcement learning (DRL) framework to maximize the total power output and minimize the fatigue load of a floating offshore wind farm subject to wake effects. Recognizing the extensive interactions required for DRL training, we first develop an open-source physics-based model that describes the time-averaged dynamics of the floating wind farm. This model is designed with sufficient fidelity to support wind farm control and high computational efficiency to facilitate DRL training. Subsequently, a model-based DRL approach is proposed, featuring simultaneous learning of system dynamics and optimal control policies. This dual learning process enhances the scalability of the DRL agent, making the framework suitable for large-scale floating wind farms. Finally, the effectiveness of the proposed scheme is validated by case studies with the dynamic floating wind farm simulator FAST.Farm. Note to Practitioners: This paper was motivated by the problem of improving energy production and reducing fatigue load for floating offshore wind farms affected by wake effects, but it also applies to other types of wind farms. Traditional approaches to wind farm optimization often struggle with the computational complexity of modeling turbine interactions and wake dynamics. To address this, this paper proposes a new approach for optimal wind farm flow control using model-based DRL, which simultaneously learns system dynamics and optimizes control strategies. This integrated approach enhances the training stability and scalability of the DRL agent, making it suitable for large-scale wind farms. Simulation results on the dynamic wind farm simulator FAST.Farm suggest that this approach is feasible, but it has not yet been incorporated into a real wind farm energy management system. Future work will explore its integration with real-time monitoring systems. |
---|---|
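The abstract describes a dual learning scheme in which a model of the wind farm dynamics and a control policy are learned at the same time, with the learned model supplying the cheap interactions that make DRL training scale to large farms. The paper's own architecture and code are not part of this record; the sketch below is only a minimal, hypothetical illustration of such a model-based DRL loop. All names (`toy_farm_env`, `fit_dynamics`, `imagined_return`) and the tiny linear two-turbine stand-in environment are assumptions for illustration, not the authors' model or the FAST.Farm interface.

```python
# Minimal, hypothetical sketch of a model-based DRL loop: alternate between
# (1) fitting a dynamics model from a small batch of real interactions and
# (2) improving a control policy on cheap rollouts of that learned model.
# The toy linear "farm" below is a stand-in, not the paper's wind farm model.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 4, 2  # e.g. two turbines: wind speed + power proxy each; two yaw set-points


def toy_farm_env(state, action):
    """Dynamics unknown to the agent, with noise and a power-minus-load-style reward."""
    A = np.array([[0.9, 0.0, 0.1, 0.0],
                  [0.0, 0.9, 0.0, 0.1],
                  [0.0, 0.0, 0.8, 0.0],
                  [0.0, 0.0, 0.0, 0.8]])
    B = np.array([[0.5, 0.0],
                  [0.0, 0.5],
                  [0.1, 0.0],
                  [0.0, 0.1]])
    next_state = A @ state + B @ action + 0.01 * rng.standard_normal(STATE_DIM)
    reward = next_state[:2].sum() - 0.1 * np.sum(action ** 2)
    return next_state, reward


def fit_dynamics(states, actions, next_states):
    """Learn a linear one-step model x' ~ [x, u] @ W by least squares."""
    X = np.hstack([states, actions])
    W, *_ = np.linalg.lstsq(X, next_states, rcond=None)
    return W  # shape: (STATE_DIM + ACTION_DIM, STATE_DIM)


def imagined_return(K, W, start_state, horizon=10):
    """Roll the learned model forward under a linear policy u = K @ x."""
    state, total = start_state.copy(), 0.0
    for _ in range(horizon):
        action = K @ state
        state = np.hstack([state, action]) @ W
        total += state[:2].sum() - 0.1 * np.sum(action ** 2)
    return total


policy = np.zeros((ACTION_DIM, STATE_DIM))
for iteration in range(20):
    # 1) collect a short batch of real interactions with exploration noise
    states, actions, next_states = [], [], []
    state = rng.standard_normal(STATE_DIM)
    for _ in range(50):
        action = policy @ state + 0.1 * rng.standard_normal(ACTION_DIM)
        next_state, _ = toy_farm_env(state, action)
        states.append(state)
        actions.append(action)
        next_states.append(next_state)
        state = next_state
    W = fit_dynamics(np.array(states), np.array(actions), np.array(next_states))

    # 2) improve the policy on imagined rollouts of the learned model
    #    (simple random search stands in for a gradient-based DRL update)
    best = imagined_return(policy, W, np.ones(STATE_DIM))
    for _ in range(30):
        candidate = policy + 0.05 * rng.standard_normal(policy.shape)
        cand_return = imagined_return(candidate, W, np.ones(STATE_DIM))
        if cand_return > best:
            policy, best = candidate, cand_return

print("imagined return after training:", round(best, 3))
```

In a real wind farm setting the state would include measured wind speeds and turbine powers, the actions would be yaw or induction set-points (or platform repositioning commands for floating turbines), and the random-search update would be replaced by a gradient-based DRL algorithm; the structure of the loop, alternating real data collection, model fitting, and policy improvement on imagined rollouts, is the part the sketch is meant to convey.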
Author | Mei, Mingyang; Kou, Peng; Xu, Yilin; Zhang, Zhihao; Tian, Runze; Liang, Deliang |
Author_xml | – sequence: 1 givenname: Mingyang orcidid: 0009-0007-8313-8855 surname: Mei fullname: Mei, Mingyang email: meimingyang@mail.stu.xjtu.edu.cn organization: State Key Laboratory of Electrical Insulation and Power Equipment, Shaanxi Key Laboratory of Smart Grid, School of Electrical Engineering, Xi'an Jiaotong University, Xi'an, China – sequence: 2 givenname: Peng orcidid: 0000-0002-6122-3959 surname: Kou fullname: Kou, Peng email: koupeng@mail.xjtu.edu.cn organization: State Key Laboratory of Electrical Insulation and Power Equipment, Shaanxi Key Laboratory of Smart Grid, School of Electrical Engineering, Xi'an Jiaotong University, Xi'an, China – sequence: 3 givenname: Yilin surname: Xu fullname: Xu, Yilin organization: State Key Laboratory of Electrical Insulation and Power Equipment, Shaanxi Key Laboratory of Smart Grid, School of Electrical Engineering, Xi'an Jiaotong University, Xi'an, China – sequence: 4 givenname: Zhihao orcidid: 0000-0003-3900-2305 surname: Zhang fullname: Zhang, Zhihao organization: State Key Laboratory of Electrical Insulation and Power Equipment, Shaanxi Key Laboratory of Smart Grid, School of Electrical Engineering, Xi'an Jiaotong University, Xi'an, China – sequence: 5 givenname: Runze orcidid: 0000-0001-8923-0328 surname: Tian fullname: Tian, Runze organization: State Key Laboratory of Electrical Insulation and Power Equipment, Shaanxi Key Laboratory of Smart Grid, School of Electrical Engineering, Xi'an Jiaotong University, Xi'an, China – sequence: 6 givenname: Deliang orcidid: 0000-0002-2394-1781 surname: Liang fullname: Liang, Deliang organization: State Key Laboratory of Electrical Insulation and Power Equipment, Shaanxi Key Laboratory of Smart Grid, School of Electrical Engineering, Xi'an Jiaotong University, Xi'an, China |
CODEN | ITASC7 |
ContentType | Journal Article |
DOI | 10.1109/TASE.2025.3585016 |
DatabaseName | IEEE All-Society Periodicals Package (ASPP) 2005–Present IEEE All-Society Periodicals Package (ASPP) 1998–Present IEEE Electronic Library (IEL) CrossRef |
DatabaseTitle | CrossRef |
Discipline | Engineering |
EISSN | 1558-3783 |
EndPage | 18268 |
ExternalDocumentID | 10_1109_TASE_2025_3585016 11062486 |
Genre | orig-research |
GrantInformation_xml | – fundername: National Natural Science Foundation of China grantid: 52077165 funderid: 10.13039/501100001809 |
ISSN | 1545-5955 |
IsPeerReviewed | false |
IsScholarly | true |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0001-8923-0328 0009-0007-8313-8855 0000-0002-6122-3959 0000-0002-2394-1781 0000-0003-3900-2305 |
PageCount | 14 |
PublicationCentury | 2000 |
PublicationDate | 2025 |
PublicationDateYYYYMMDD | 2025-01-01 |
PublicationDate_xml | – year: 2025 text: 20250000 |
PublicationDecade | 2020 |
PublicationTitle | IEEE transactions on automation science and engineering |
PublicationTitleAbbrev | TASE |
PublicationYear | 2025 |
Publisher | IEEE |
Publisher_xml | – name: IEEE |
SourceID | crossref ieee |
SourceType | Index Database Publisher |
StartPage | 18255 |
SubjectTerms | Automation Computational modeling deep reinforcement learning energy generation Fatigue fatigue load floating offshore wind farm Load modeling Optimal control System dynamics Training turbine repositioning wake effect Wind energy Wind farms Wind speed Wind turbines |
Title | Improving Floating Offshore Wind Farm Flow Control With Scalable Model-Based Deep Reinforcement Learning |
URI | https://ieeexplore.ieee.org/document/11062486 |
Volume | 22 |