Reinforcement Learning for Optimal Feedback Control [electronic resource] : A Lyapunov-Based Approach / by Rushikesh Kamalapurkar, Patrick Walters, Joel Rosenfeld, Warren Dixon.

By: Kamalapurkar, Rushikesh [author]
Contributor(s): Walters, Patrick [author] | Rosenfeld, Joel [author] | Dixon, Warren [author] | SpringerLink (Online service)
Material type: Text
Series: Communications and Control Engineering
Publisher: Cham : Springer International Publishing : Imprint: Springer, 2018
Edition: 1st ed. 2018
Description: XVI, 293 p. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783319783840
Subject(s): Control engineering | Calculus of variations | System theory | Electrical engineering | Control and Systems Theory | Calculus of Variations and Optimal Control; Optimization | Systems Theory, Control | Communications Engineering, Networks
Additional physical formats: Printed edition: No title; Printed edition: No title; Printed edition: No title
DDC classification: 629.8
LoC classification: TJ212-225
Online resources: E-book
Contents:
Chapter 1. Optimal control -- Chapter 2. Approximate dynamic programming -- Chapter 3. Excitation-based online approximate optimal control -- Chapter 4. Model-based reinforcement learning for approximate optimal control -- Chapter 5. Differential Graphical Games -- Chapter 6. Applications -- Chapter 7. Computational considerations -- Reference -- Index.
In: Springer Nature eBook
Summary: Reinforcement Learning for Optimal Feedback Control develops model-based and data-driven reinforcement learning methods for solving optimal control problems in nonlinear deterministic dynamical systems. To achieve learning under uncertainty, data-driven methods for identifying system models in real time are also developed. Through simulations and experiments, the book illustrates the advantages gained from the use of a model and from the use of previous experience in the form of recorded data. Its focus on deterministic systems allows for an in-depth Lyapunov-based analysis of the performance of the described methods during both the learning phase and execution. To yield an approximate optimal controller, the authors focus on theories and methods that fall under the umbrella of actor-critic methods for machine learning. They concentrate on establishing stability during both the learning and execution phases, using adaptive model-based and data-driven reinforcement learning that typically relies on instantaneous input-output measurements. This monograph offers academic researchers with backgrounds in diverse disciplines, from aerospace engineering to computer science, who are interested in optimal control, reinforcement learning, functional analysis, and function approximation theory, a solid introduction to the use of model-based methods. The thorough treatment of advanced control topics will also interest practitioners working in the chemical-process and power-supply industries.
Holdings
Item type: E-book
Current library: Electronic Library
Collection: Electronic Book Collection
Copy number: 1
Status: Not for loan

Multi-user access


UABC ; Temporary ; 01/01/2021-12/31/2023.
