Markov Decision Processes and the Belief-Desire-Intention Model [electronic resource] : Bridging the Gap for Autonomous Agents / by Gerardo I. Simari, Simon D. Parsons.
Material type: Text
Series: SpringerBriefs in Computer Science
Publisher: New York, NY : Springer New York, 2011
Description: VIII, 63 p. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9781461414728
Subject(s): Computer science | Artificial intelligence | Computer simulation | Computer Science | Artificial Intelligence (incl. Robotics) | Simulation and Modeling
Additional physical formats: Printed edition (no title)
DDC classification: 006.3
LoC classification: Q334-342; TJ210.2-211.495
Online resources: Electronic book

Item type | Current library | Collection | Call number | Copy number | Status | Due date | Barcode
---|---|---|---|---|---|---|---
Electronic book | Electronic Library | Electronic Books Collection | Q334-342 | 1 | Not for loan | | 372466-2001
Introduction -- Preliminary Concepts -- An Empirical Comparison of Models -- A Theoretical Comparison of Models -- Related Work -- Conclusions, Limitations, and Future Directions.
In this work, we provide a treatment of the relationship between two models that have been widely used in the implementation of autonomous agents: the Belief-Desire-Intention (BDI) model and Markov Decision Processes (MDPs). We start with an informal description of the relationship, identifying the common features of the two approaches and the differences between them. We then sharpen our understanding of these differences through an empirical analysis of the performance of both models on the TileWorld testbed. This allows us to show that even though the MDP model displays consistently better behavior than the BDI model for small worlds, this is not the case when the world becomes large and the MDP model cannot be solved exactly. Finally, we present a theoretical analysis of the relationship between the two approaches, identifying mappings that allow us to extract a set of intentions from a policy (a solution to an MDP), and to extract a policy from a set of intentions.
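To illustrate the kind of mapping the abstract describes, the sketch below (not taken from the book; the toy MDP, reward function, and helper names are our own assumptions) solves a four-state deterministic MDP by value iteration, derives the greedy policy, and then unrolls that policy from a start state into an explicit plan of actions, one simple way of reading a BDI-style intention off an MDP solution.

```python
# Illustrative sketch only: a tiny deterministic MDP solved by value
# iteration, with a BDI-style "intention" (a plan of actions) extracted
# by following the greedy policy from a start state.
GAMMA = 0.9
STATES = [0, 1, 2, 3]   # state 3 is the goal (terminal)
ACTIONS = [-1, +1]      # move left / move right on a line

def step(s, a):
    """Deterministic transition; reward 1.0 on entering the goal."""
    s2 = min(max(s + a, 0), 3)
    return s2, (1.0 if s2 == 3 else 0.0)

def value_iteration(tol=1e-6):
    """In-place (Gauss-Seidel) value iteration to convergence."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            if s == 3:  # terminal state accrues no further value
                continue
            best = max(r + GAMMA * V[s2]
                       for s2, r in (step(s, a) for a in ACTIONS))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

def greedy_policy(V):
    """Pick, in each non-terminal state, the action maximizing r + gamma*V."""
    def q(s, a):
        s2, r = step(s, a)
        return r + GAMMA * V[s2]
    return {s: max(ACTIONS, key=lambda a: q(s, a))
            for s in STATES if s != 3}

def intention_from_policy(policy, s, horizon=10):
    """Unroll the policy into an explicit action plan -- one way to view
    an intention extracted from an MDP solution."""
    plan = []
    for _ in range(horizon):
        if s == 3:
            break
        a = policy[s]
        plan.append(a)
        s, _ = step(s, a)
    return plan

V = value_iteration()
pi = greedy_policy(V)
plan = intention_from_policy(pi, 0)   # plan of "move right" actions to the goal
```

The reverse direction sketched in the book, from intentions back to a policy, would amount to treating each state-action pair on the plan as the policy's choice in that state; states off the plan would need some default.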