A discrete event simulator to implement deep reinforcement learning for the dynamic flexible job shop scheduling problem

Bibliographic Details
Published in: Simulation Modelling Practice and Theory, Vol. 134, p. 102948
Main Authors: Tiacci, Lorenzo; Rossi, Andrea
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.07.2024
ISSN: 1569-190X, 1878-1462
DOI: 10.1016/j.simpat.2024.102948

Summary:
•The importance of discrete event simulation for training and testing DRL techniques is discussed.
•The ineffectiveness of DRL implemented without a simulation environment is demonstrated.
•An object-oriented, agent-based discrete event simulator of the job shop is presented.
•The simulator is designed to be integrated with DRL agents.
•It is possible to define customized features, dispatching rules, and reward functions.

The job shop scheduling problem, which involves the routing and sequencing of jobs in a job shop context, is a relevant subject in industrial engineering. Approaches based on Deep Reinforcement Learning (DRL) are very promising for dealing with the variability of real working conditions due to dynamic events such as the arrival of new jobs and machine failures. Discrete Event Simulation (DES) is essential for training and testing DRL approaches, which are based on the interaction of an intelligent agent and the production system. Nonetheless, there are numerous papers in the literature in which DRL techniques, developed to solve the Dynamic Flexible Job Shop Problem (DFJSP), have been implemented and evaluated in the absence of a simulation environment. In this paper, the limitations of these techniques are highlighted, and a numerical experiment that demonstrates their ineffectiveness is presented. Furthermore, in order to provide the scientific community with a simulation tool designed to be used in conjunction with DRL techniques, an agent-based discrete event simulator is also presented.
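The integration pattern the summary describes, a discrete event simulator that pauses at each decision point and lets a DRL agent respond by choosing a dispatching rule, can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' simulator: it assumes a Gym-style reset/step interface, a single machine rather than a flexible shop, and a toy action space of two dispatching rules (FIFO and SPT). All class, method, and rule names are hypothetical.

```python
import heapq

class JobShopSimEnv:
    """Hypothetical single-machine discrete event simulator with a
    Gym-style interface. At each decision point the agent's action
    selects a dispatching rule; the reward is the negative makespan
    on completion (an assumption, not the paper's reward function)."""

    RULES = ("FIFO", "SPT")  # action is an index into this tuple

    def __init__(self, jobs):
        # jobs: list of per-job operation processing-time lists
        self.jobs = jobs

    def reset(self):
        self.clock = 0.0
        self.events = []              # (time, event_id, job_or_None) heap
        self.queue = []               # jobs waiting for the machine
        self.machine_busy = False
        self.next_op = [0] * len(self.jobs)
        self._eid = 0
        for j in range(len(self.jobs)):
            self._push(0.0, j)        # all jobs arrive at time 0
        self._advance()
        return self._observe()

    def _push(self, t, job):
        heapq.heappush(self.events, (t, self._eid, job))
        self._eid += 1

    def _advance(self):
        # Pop events until the machine is idle and at least one job
        # waits; also drain simultaneous events so the queue is
        # complete when the agent is asked to decide.
        while self.events and (self.machine_busy or not self.queue
                               or self.events[0][0] == self.clock):
            t, _, job = heapq.heappop(self.events)
            self.clock = t
            if job is None:           # machine finished an operation
                self.machine_busy = False
            else:                     # a job (re-)enters the queue
                self.queue.append(job)

    def _observe(self):
        # Toy state: current time and queue length.
        return (self.clock, len(self.queue))

    def step(self, action):
        if self.RULES[action] == "SPT":
            self.queue.sort(key=lambda j: self.jobs[j][self.next_op[j]])
        job = self.queue.pop(0)       # FIFO order unless sorted above
        dur = self.jobs[job][self.next_op[job]]
        self.next_op[job] += 1
        self.machine_busy = True
        self._push(self.clock + dur, None)        # machine-free event
        if self.next_op[job] < len(self.jobs[job]):
            self._push(self.clock + dur, job)     # job re-enters queue
        self._advance()
        done = not self.queue and not self.events and not self.machine_busy
        reward = -self.clock if done else 0.0     # -makespan at the end
        return self._observe(), reward, done
```

A DRL training loop would replace the fixed action below with a policy network's output; the point of the sketch is that the simulator, not the agent, owns the clock and decides when the agent is consulted.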