Reinforcement Learning and Dynamic Programming Using Function Approximators

1st Edition

By Lucian Busoniu, Robert Babuska, Bart De Schutter, Damien Ernst

CRC Press

280 pages | 74 B/W Illus.

Purchasing Options ($ = USD)

Hardback: 9781439821084
pub: 2010-04-29
$120.00

eBook (VitalSource): 9781439821091
pub: 2010-04-29
from $28.98

Description

From household appliances to applications in robotics, engineered systems involving complex dynamics can only be as effective as the algorithms that control them. While Dynamic Programming (DP) has provided researchers with a way to optimally solve decision and control problems involving complex dynamic systems, its practical value was limited by algorithms that lacked the capacity to scale up to realistic problems.

However, in recent years, dramatic developments in Reinforcement Learning (RL), the model-free counterpart of DP, have changed our understanding of what is possible. These advances have produced reliable methods that can be applied even when a mathematical model of the system is unavailable, allowing researchers to solve challenging control problems in engineering as well as in a variety of other disciplines, including economics, medicine, and artificial intelligence.
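
To make the model-free idea concrete: where DP backs values up through a known transition model, a method such as Q-learning improves its estimates from observed transitions alone. The sketch below is a minimal tabular illustration, not code from the book; the environment object env with reset() and step() methods is a hypothetical placeholder.

import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1):
    # Tabular Q-learning: learns Q(s, a) purely from sampled transitions,
    # with no access to a transition model (hypothetical env; illustration only).
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy exploration
            a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
            s2, r, done = env.step(a)  # one sampled transition from the system
            # temporal-difference update toward r + gamma * max_a' Q(s2, a')
            target = r + gamma * (0.0 if done else Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q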

Reinforcement Learning and Dynamic Programming Using Function Approximators provides a comprehensive and unparalleled exploration of the field of RL and DP. With a focus on continuous-variable problems, this seminal text details essential developments that have substantially altered the field over the past decade. In its pages, pioneering experts provide a concise introduction to classical RL and DP, followed by an extensive presentation of the state-of-the-art and novel methods in RL and DP with approximation. Combining algorithm development with theoretical guarantees, they elaborate on their work with illustrative examples and insightful comparisons. Three individual chapters are dedicated to representative algorithms from each of the major classes of techniques: value iteration, policy iteration, and policy search. The features and performance of these algorithms are highlighted in extensive experimental studies on a range of control applications.
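
For a flavor of the simplest of these classes, the following sketch runs value iteration on a tiny discrete Markov decision process. The transition and reward arrays are made-up illustrative data; the book's focus is the harder continuous-variable setting, where such tables give way to function approximators.

import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
# P[a, s, s2]: probability of moving from state s to s2 under action a (made-up data)
P = np.array([[[0.8, 0.2, 0.0],
               [0.0, 0.8, 0.2],
               [0.1, 0.0, 0.9]],
              [[0.2, 0.8, 0.0],
               [0.0, 0.2, 0.8],
               [0.9, 0.0, 0.1]]])
# R[a, s]: expected immediate reward for action a in state s (made-up data)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

Q = np.zeros((n_actions, n_states))
for _ in range(1000):
    # Bellman optimality backup: Q(s, a) <- R(s, a) + gamma * E[max_a' Q(s', a')]
    Q_new = R + gamma * (P @ Q.max(axis=0))
    if np.abs(Q_new - Q).max() < 1e-8:  # stop once the backup has converged
        break
    Q = Q_new

print("Q*:\n", Q)
print("greedy policy (best action per state):", Q.argmax(axis=0))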

The recent development of applications involving complex systems has led to a surge of interest in RL and DP methods and the subsequent need for a quality resource on the subject. For graduate students and others new to the field, this book offers a thorough introduction to both the basics and emerging methods. And for those researchers and practitioners working in the fields of optimal and adaptive control, machine learning, artificial intelligence, and operations research, this resource offers a combination of practical algorithms, theoretical analysis, and comprehensive examples that they will be able to adapt and apply to their own work.

Access the authors' website at www.dcsc.tudelft.nl/rlbook/ for additional material, including computer code used in the studies and information concerning new developments.

Table of Contents

1 Introduction

The dynamic programming and reinforcement learning problem

Approximation in dynamic programming and reinforcement learning

About this book

2 An introduction to dynamic programming and reinforcement learning

Introduction

Markov decision processes

Value iteration

Policy iteration

Policy search

Summary and discussion

3 Dynamic programming and reinforcement learning in large and continuous spaces

Introduction

The need for approximation in large and continuous spaces

Approximation architectures

Approximate value iteration

Approximate policy iteration

Finding value function approximators automatically

Approximate policy search

Comparison of approximate value iteration, policy iteration, and policy search

Summary and discussion

4 Approximate value iteration with a fuzzy representation

Introduction

Fuzzy Q-iteration

Analysis of fuzzy Q-iteration

Optimizing the membership functions

Experimental study

Summary and discussion

5 Approximate policy iteration for online learning and continuous-action control

Introduction

A recapitulation of least-squares policy iteration

Online least-squares policy iteration

Online LSPI with prior knowledge

LSPI with continuous-action, polynomial approximation

Experimental study

Summary and discussion

6 Approximate policy search with cross-entropy optimization of basis functions

Introduction

Cross-entropy optimization

Cross-entropy policy search

Experimental study

Summary and discussion

Appendix A Extremely randomized trees

Structure of the approximator

Building and using a tree

Appendix B The cross-entropy method

Rare-event simulation using the cross-entropy method

Cross-entropy optimization

Symbols and abbreviations

Bibliography

List of algorithms

Index

About the Authors

Robert Babuska, Lucian Busoniu, and Bart De Schutter are with Delft University of Technology. Damien Ernst is with the University of Liege.

About the Series

Automation and Control Engineering


Subject Categories

BISAC Subject Codes/Headings:
COM037000  COMPUTERS / Machine Theory
TEC007000  TECHNOLOGY & ENGINEERING / Electrical
TEC008000  TECHNOLOGY & ENGINEERING / Electronics / General