Deep Q‐learning recommender algorithm with update policy for a real steam turbine system

Bibliographic Details
Published in: IET Collaborative Intelligent Manufacturing, Vol. 5, No. 3
Main Authors: Modirrousta, Mohammad Hossein; Aliyari Shoorehdeli, Mahdi; Yari, Mostafa; Ghahremani, Arash
Format: Journal Article
Language: English
Published: Wuhan: John Wiley & Sons, Inc (Wiley), 01.09.2023
ISSN: 2516-8398
DOI: 10.1049/cim2.12081

More Information
Summary: In modern industrial systems, diagnosing faults in time and with the best methods becomes increasingly crucial. A system may fail, or resources may be wasted, if faults are not detected or are detected late. Machine learning and deep learning (DL) offer various methods for data-based fault diagnosis, and the authors seek the most reliable and practical ones. A framework based on DL and reinforcement learning (RL) is developed for fault detection. The authors utilise two algorithms in their work: Q-learning and Soft Q-learning. Reinforcement learning frameworks frequently include efficient algorithms for policy updates, such as Q-learning; these algorithms optimise the policy based on predictions and rewards, resulting in more efficient updates and quicker convergence. By updating the RL policy when new data are received, the authors can increase accuracy, overcome data imbalance, and better predict future defects. Applying their method, they observe an increase of 3%–4% in all evaluation metrics from updating the policy, an improvement in prediction speed, and an increase of 3%–6% in all evaluation metrics compared to a typical backpropagation multi-layer neural network with comparable parameters. In addition, Soft Q-learning yields better outcomes than Q-learning.
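The abstract contrasts the two update rules without giving their form. The sketch below shows the standard tabular versions of both, assuming the usual temporal-difference formulation and a toy classification-as-RL framing (states as discretised sensor readings, actions as predicted fault classes, ±1 reward). The hyperparameters, state encoding, and reward design are illustrative assumptions, not the authors' exact setup, which uses deep networks rather than a lookup table.

```python
import numpy as np

# Hyperparameters (assumed for illustration; not specified in the abstract)
GAMMA = 0.9   # discount factor
ALPHA = 0.1   # learning rate
TAU = 1.0     # temperature for Soft Q-learning

def q_learning_update(Q, s, a, r, s_next):
    """Standard Q-learning: bootstrap on the hard max over next-state actions."""
    target = r + GAMMA * np.max(Q[s_next])
    Q[s, a] += ALPHA * (target - Q[s, a])

def soft_q_learning_update(Q, s, a, r, s_next):
    """Soft Q-learning: replace the max with a temperature-scaled log-sum-exp,
    which keeps the implied policy stochastic (maximum-entropy RL)."""
    soft_value = TAU * np.log(np.sum(np.exp(Q[s_next] / TAU)))
    target = r + GAMMA * soft_value
    Q[s, a] += ALPHA * (target - Q[s, a])

if __name__ == "__main__":
    # Toy fault-classification setting (hypothetical): each state is a
    # discretised sensor reading, each action a predicted fault class,
    # and the reward is +1 for a correct prediction, -1 otherwise.
    rng = np.random.default_rng(0)
    n_states, n_classes = 16, 4
    Q = np.zeros((n_states, n_classes))
    true_class = rng.integers(0, n_classes, size=n_states)

    # Process a stream of samples, updating the policy as data arrives,
    # in the spirit of the online policy update the abstract describes.
    for _ in range(20000):
        s = int(rng.integers(n_states))
        a = int(rng.integers(n_classes))        # purely exploratory behaviour
        r = 1.0 if a == true_class[s] else -1.0
        s_next = int(rng.integers(n_states))    # next sample from the stream
        soft_q_learning_update(Q, s, a, r, s_next)

    acc = np.mean(Q.argmax(axis=1) == true_class)
    print(f"greedy-policy accuracy on synthetic data: {acc:.2f}")
```

The log-sum-exp backup is a smooth upper bound on the hard max, so the implied policy never fully collapses onto one class during training; this is the standard intuition for why soft updates can cope better with imbalanced fault data, though the paper's own explanation may differ.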