000 03514nam a22004335i 4500
001 u374623
003 SIRSI
005 20160812084243.0
007 cr nn 008mamaa
008 100709s2010 gw | s |||| 0|eng d
020 _a9783642139321
_9978-3-642-13932-1
040 _cMX-MeUAM
050 4 _aQ342
082 0 4 _a006.3
_223
100 1 _aWhiteson, Shimon.
_eauthor.
245 1 0 _aAdaptive Representations for Reinforcement Learning
_h[electronic resource] /
_cby Shimon Whiteson.
264 1 _aBerlin, Heidelberg :
_bSpringer Berlin Heidelberg,
_c2010.
300 _aXIII, 116 p.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aStudies in Computational Intelligence,
_x1860-949X ;
_v291
505 0 _aPart 1 Introduction -- Part 2 Reinforcement Learning -- Part 3 On-Line Evolutionary Computation -- Part 4 Evolutionary Function Approximation -- Part 5 Sample-Efficient Evolutionary Function Approximation -- Part 6 Automatic Feature Selection for Reinforcement Learning -- Part 7 Adaptive Tile Coding -- Part 8 Related Work -- Part 9 Conclusion -- Part 10 Statistical Significance.
520 _aThis book presents new algorithms for reinforcement learning, a form of machine learning in which an autonomous agent seeks a control policy for a sequential decision task. Since current methods typically rely on manually designed solution representations, agents that automatically adapt their own representations have the potential to dramatically improve performance. This book introduces two novel approaches for automatically discovering high-performing representations. The first approach synthesizes temporal difference methods, the traditional approach to reinforcement learning, with evolutionary methods, which can learn representations for a broad class of optimization problems. This synthesis is accomplished by customizing evolutionary methods to the on-line nature of reinforcement learning and using them to evolve representations for value function approximators. The second approach automatically learns representations based on piecewise-constant approximations of value functions. It begins with coarse representations and gradually refines them during learning, analyzing the current policy and value function to deduce the best refinements. This book also introduces a novel method for devising input representations. This method addresses the feature selection problem by extending an algorithm that evolves the topology and weights of neural networks so that it also evolves their inputs. In addition to introducing these new methods, this book presents extensive empirical results in multiple domains demonstrating that these techniques can substantially improve performance over methods with manual representations.
650 0 _aEngineering.
650 0 _aArtificial intelligence.
650 1 4 _aEngineering.
650 2 4 _aComputational Intelligence.
650 2 4 _aArtificial Intelligence (incl. Robotics).
710 2 _aSpringerLink (Online service)
773 0 _tSpringer eBooks
776 0 8 _iPrinted edition:
_z9783642139314
830 0 _aStudies in Computational Intelligence,
_x1860-949X ;
_v291
856 4 0 _zElectronic book
_uhttp://148.231.10.114:2048/login?url=http://link.springer.com/book/10.1007/978-3-642-13932-1
596 _a19
942 _cLIBRO_ELEC
999 _c202503
_d202503