Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis and probability theory. Extensive applications to Markov decision processes are presented.
This volume is intended for mathematicians, engineers and computer scientists who work on learning processes in numerical analysis and who are involved with optimization, optimal control, decision analysis and machine learning.
1. Introduction

PART I Mathematical background:
2. Real analysis and linear algebra
3. Background – measure theory
4. Background – probability theory
5. Background – stochastic processes
6. Functional analysis
7. Fixed point equations
8. The distribution of a maximum

PART II General theory of approximate iterative algorithms:
9. Background – linear convergence
10. A general theory of approximate iterative algorithms (AIA)
11. Selection of approximation schedules for coarse-to-fine AIAs

PART III Application to Markov decision processes:
12. Markov decision processes (MDP) – background
13. Markov decision processes – value iteration
14. Model approximation in dynamic programming – general theory
15. Sampling based approximation methods
16. Approximate value iteration by truncation
17. Grid approximations of MDPs with continuous state/action spaces
18. Adaptive control of MDPs
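The material of Parts II and III centers on iterating a contraction mapping when each iteration can only be evaluated approximately. A minimal sketch of that idea, using a hypothetical two-state MDP (not taken from the book) and the classical bound that a per-iteration error of at most ε keeps the iterates within ε/(1−γ) of the true fixed point:

```python
import numpy as np

# Hypothetical toy MDP: 2 states, 2 actions, discount factor gamma.
gamma = 0.9
# P[a][s, s'] = transition probabilities; R[a][s] = expected rewards.
P = [np.array([[0.8, 0.2], [0.3, 0.7]]),
     np.array([[0.5, 0.5], [0.9, 0.1]])]
R = [np.array([1.0, 0.0]), np.array([0.5, 0.8])]

def bellman(v):
    """Exact Bellman operator: a gamma-contraction in the sup norm."""
    return np.max([R[a] + gamma * P[a] @ v for a in range(2)], axis=0)

def approx_bellman(v, eps, rng):
    """Approximate operator: exact operator plus a perturbation bounded by eps,
    standing in for estimation/simulation/approximation error."""
    return bellman(v) + rng.uniform(-eps, eps, size=2)

rng = np.random.default_rng(0)
eps = 0.01
v_exact = np.zeros(2)   # converges to the true fixed point v*
v_approx = np.zeros(2)  # approximate value iteration
for _ in range(500):
    v_exact = bellman(v_exact)
    v_approx = approx_bellman(v_approx, eps, rng)

# Classical AIA error bound under uniformly bounded error:
#   ||v_approx - v*||_inf <= eps / (1 - gamma)   (here 0.01 / 0.1 = 0.1)
err = np.max(np.abs(v_approx - v_exact))
print(err, err <= eps / (1 - gamma))
```

The point of the sketch is the shape of the analysis, not the toy numbers: the contraction property transfers a uniform bound on per-step approximation error into a bound on the limiting error of the algorithm.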