Full text available
Peer-reviewed · Open access
  • The use of reinforcement learning ...
    He, Zhiliang; Thürer, Matthias; Zhou, Wanling

    International Journal of Production Economics, August 2024, Volume 274
    Journal Article

    One of the main objectives of Material Flow Control (MFC) is to ensure delivery performance. Traditional MFC realizes this through independent decisions at two levels: order release and production authorization on the shop floor. Because these decisions are interconnected, this hierarchical decision-making can be improved by integration. This study introduces a new reinforcement learning method that combines and jointly optimizes several MFC decisions. It enhances delivery performance by enabling an agent to interact with the environment and learn the parameters of the decision model. Results from a make-to-order pure job shop simulation model demonstrate that the new approach outperforms existing MFC methods in most cases. This extends the existing literature on MFC, which remains entrenched in traditional decision methods, and the existing literature on reinforcement learning in the context of production planning and control, which remains largely focused on production scheduling. It has important implications for the future design of production planning and control systems and for practice, specifically in contexts where data is readily available or a digital shadow can be obtained.

    Highlights:
      • Transcends hierarchical material flow control.
      • Integrates release, authorization, and dispatching decisions.
      • Outlines a new material flow control mechanism that uses reinforcement learning.
      • Demonstrates that the new mechanism can outperform the state of the art.
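
    The abstract describes an agent that learns the parameters of integrated MFC decisions (order release, production authorization, dispatching) by interacting with a simulated job shop. The sketch below illustrates that general idea only; it is not the authors' method. It uses tabular Q-learning to jointly choose a release workload norm and a dispatching rule in a toy single-machine simulation, and the candidate norms, the backlog-based state encoding, and the tardiness-based reward are all assumptions made for illustration.

    ```python
    # Minimal illustrative sketch (NOT the paper's implementation): a tabular
    # Q-learning agent that jointly picks an order-release workload norm and a
    # dispatching rule in a toy single-machine "shop". All parameters, the state
    # encoding, and the reward (negative tardiness) are assumed for illustration.
    import random
    from collections import defaultdict

    NORMS = [4, 8, 12]            # candidate workload norms for order release (assumed)
    RULES = ["FIFO", "EDD"]       # candidate dispatching rules (assumed)
    ACTIONS = [(n, r) for n in NORMS for r in RULES]

    def simulate_period(norm, rule, backlog, rng):
        """One planning period: jobs arrive, are released up to `norm` units of
        load, processed under `rule`; reward is negative total tardiness."""
        backlog += [(rng.randint(1, 4), rng.randint(5, 15))   # (processing time, due date)
                    for _ in range(rng.randint(2, 5))]
        if rule == "EDD":
            backlog.sort(key=lambda job: job[1])              # earliest due date first
        released, load = [], 0
        while backlog and load + backlog[0][0] <= norm:       # release up to the norm
            job = backlog.pop(0)
            released.append(job)
            load += job[0]
        t, tardiness = 0, 0
        for proc, due in released:
            t += proc
            tardiness += max(0, t - due)
        state = min(len(backlog) // 3, 3)                     # coarse backlog-level state
        return -tardiness, state, backlog

    def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
        rng = random.Random(seed)
        Q = defaultdict(float)
        for _ in range(episodes):
            backlog, state = [], 0
            for _ in range(20):                               # 20 periods per episode
                if rng.random() < eps:                        # epsilon-greedy exploration
                    a = rng.randrange(len(ACTIONS))
                else:
                    a = max(range(len(ACTIONS)), key=lambda i: Q[(state, i)])
                reward, nxt, backlog = simulate_period(*ACTIONS[a], backlog, rng)
                best_next = max(Q[(nxt, i)] for i in range(len(ACTIONS)))
                Q[(state, a)] += alpha * (reward + gamma * best_next - Q[(state, a)])
                state = nxt
        return Q

    if __name__ == "__main__":
        Q = train()
        for s in range(4):
            best = max(range(len(ACTIONS)), key=lambda i: Q[(s, i)])
            print(f"backlog level {s}: norm={ACTIONS[best][0]}, rule={ACTIONS[best][1]}")
    ```

    The point of the sketch is only that a single learned policy can set the release parameter and the shop-floor dispatching choice together, instead of fixing them independently as in hierarchical MFC; the paper's actual environment, state and action spaces, and learning algorithm may differ.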