Full text
Peer-reviewed · Open access
  • Efficient Off-Policy Q-Lear...
    Lopez, Victor G.; Alsalti, Mohammad; Muller, Matthias A.

    IEEE Transactions on Automatic Control, 05/2023, Volume: 68, Issue: 5
    Journal Article

    This paper introduces and analyzes an improved Q-learning algorithm for discrete-time linear time-invariant systems. The proposed method does not require any knowledge of the system dynamics, and it enjoys significant efficiency advantages over other data-based optimal control methods in the literature. The algorithm can be executed fully off-line, since it does not require applying the current estimate of the optimal input to the system, as on-policy algorithms do. It is shown that a persistently exciting input, defined through an easily tested matrix rank condition, guarantees convergence of the algorithm. A data-based method is proposed to design the initial stabilizing feedback gain that the algorithm requires. Robustness of the algorithm in the presence of noisy measurements is analyzed. We compare the proposed algorithm in simulation to different direct and indirect data-based control design methods.
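
    As a rough illustration of the ingredients mentioned in the abstract, the Python sketch below checks a Willems-style persistence-of-excitation rank condition on the input data and runs an off-policy, least-squares Q-function iteration for a discrete-time LQR problem. The system matrices, the excitation order, the initial gain, and the policy-iteration loop are illustrative assumptions; this is not the authors' exact algorithm or notation, only a minimal example of the general technique.

    ```python
    # Hedged sketch: data-based, off-policy Q-learning for discrete-time LQR.
    # All matrices and parameters below are illustrative assumptions.
    import numpy as np

    def hankel(signal, depth):
        """Block-Hankel matrix with `depth` block rows from a (dim x T) signal."""
        dim, T = signal.shape
        cols = T - depth + 1
        return np.vstack([signal[:, i:i + cols] for i in range(depth)])

    def persistently_exciting(u, order):
        """Rank test: is the input persistently exciting of the given order?"""
        H = hankel(u, order)
        return np.linalg.matrix_rank(H) == H.shape[0]

    def qlearn_lqr(X, U, Qc, Rc, K0, iters=20):
        """Off-policy policy iteration on the Q-function matrix H.

        X: states (n x T+1), U: exploration inputs (m x T),
        K0: stabilizing gain with the convention u = -K x.
        """
        n, m, T = X.shape[0], U.shape[0], U.shape[1]

        def phi(z):
            # Quadratic basis for a symmetric matrix: z' H z = phi(z)' theta.
            outer = np.outer(z, z)
            idx = np.triu_indices(len(z))
            scale = np.where(idx[0] == idx[1], 1.0, 2.0)  # off-diagonals count twice
            return scale * outer[idx]

        K = K0
        for _ in range(iters):
            Phi, c = [], []
            for k in range(T):
                zk = np.concatenate([X[:, k], U[:, k]])
                xk1 = X[:, k + 1]
                zk1 = np.concatenate([xk1, -K @ xk1])     # target policy at next state
                Phi.append(phi(zk) - phi(zk1))            # off-policy Bellman residual basis
                c.append(X[:, k] @ Qc @ X[:, k] + U[:, k] @ Rc @ U[:, k])
            theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
            H = np.zeros((n + m, n + m))
            H[np.triu_indices(n + m)] = theta
            H = H + H.T - np.diag(np.diag(H))             # rebuild symmetric H
            Hux, Huu = H[n:, :n], H[n:, n:]
            K = np.linalg.solve(Huu, Hux)                 # policy improvement
        return K

    # Usage on an assumed second-order system with a known stabilizing K0.
    rng = np.random.default_rng(0)
    A = np.array([[1.1, 0.1], [0.0, 0.9]])
    B = np.array([[0.0], [1.0]])
    Qc, Rc = np.eye(2), np.eye(1)
    K0 = np.array([[0.6, 0.4]])
    T = 60
    U = rng.normal(size=(1, T))                # exploration input (not the learned policy)
    assert persistently_exciting(U, order=4)   # easily tested rank condition
    X = np.zeros((2, T + 1))
    X[:, 0] = rng.normal(size=2)
    for k in range(T):
        X[:, k + 1] = A @ X[:, k] + B @ U[:, k]
    K = qlearn_lqr(X, U, Qc, Rc, K0)
    print("learned feedback gain:", K)
    ```

    Because the Bellman equation for the Q-function of a fixed target gain holds for arbitrary applied inputs, the exploration data can be collected once and reused for every iteration, which is what makes such a scheme off-policy and fully off-line in spirit.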