1st Edition

AlphaGo Simplified: Rule-Based AI and Deep Learning in Everyday Games

By Mark Liu, Copyright 2025
    410 Pages, 21 Color Illustrations
    Published by Chapman & Hall


    May 11, 1997, was a watershed moment in the history of artificial intelligence (AI): the IBM supercomputer chess engine Deep Blue beat the world chess champion, Garry Kasparov. It was the first time a machine had defeated a reigning world champion in a chess match played under standard tournament conditions. Fast forward nearly 19 years to March 9, 2016, when DeepMind’s AlphaGo beat the world Go champion Lee Sedol. AI again stole the spotlight and generated a media frenzy. This time, a new type of AI algorithm, machine learning (ML), was the driving force behind the game strategies.

     

    What exactly is ML? How is it related to AI? Why is deep learning (DL) so popular these days? This book explains how traditional rule-based AI and ML work and how they can be implemented in everyday games such as Last Coin Standing, Tic Tac Toe, and Connect Four. The rules of these three games are easy to implement. As a result, readers will learn rule-based AI, deep reinforcement learning, and, more importantly, how to combine the two to create powerful game strategies (the whole is indeed greater than the sum of its parts) without getting bogged down in complicated game rules.

     

    Implementing rule-based AI and ML in these straightforward games is quick and not computationally intensive. Consequently, game strategies can be trained in mere minutes or hours without GPUs or supercomputing facilities, showcasing AI's ability to achieve superhuman performance in these games. More importantly, readers will gain a thorough understanding of the principles behind rule-based AI, such as the MiniMax algorithm, alpha-beta pruning, and Monte Carlo Tree Search (MCTS), and learn how to integrate them with cutting-edge ML techniques such as convolutional neural networks and deep reinforcement learning, so they can apply these methods in their own business fields and tackle real-world challenges.
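
    As an illustration of the rule-based techniques covered in Section I, here is a minimal sketch of MiniMax search with alpha-beta pruning for Tic Tac Toe. It is not taken from the book; the flat nine-cell board representation and the function names winner and minimax are assumptions made for this example.

# A minimal sketch of MiniMax with alpha-beta pruning for Tic Tac Toe.
# The board is a flat list of nine cells holding "X", "O", or " ".
def winner(board):
    """Return "X" or "O" if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player, alpha=-2, beta=2):
    """Return (score, move) from X's point of view: +1 win, -1 loss, 0 draw."""
    win = winner(board)
    if win == "X":
        return 1, None
    if win == "O":
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:                               # board full: draw
        return 0, None
    best_move = None
    if player == "X":                           # maximizing player
        best = -2
        for m in moves:
            board[m] = "X"
            score, _ = minimax(board, "O", alpha, beta)
            board[m] = " "
            if score > best:
                best, best_move = score, m
            alpha = max(alpha, best)
            if alpha >= beta:                   # prune: O will avoid this branch
                break
        return best, best_move
    else:                                       # minimizing player
        best = 2
        for m in moves:
            board[m] = "O"
            score, _ = minimax(board, "X", alpha, beta)
            board[m] = " "
            if score < best:
                best, best_move = score, m
            beta = min(beta, best)
            if alpha >= beta:                   # prune: X will avoid this branch
                break
        return best, best_move

if __name__ == "__main__":
    score, move = minimax([" "] * 9, "X")
    print(score, move)                          # 0 and a best opening move

    Run from an empty board, the search reports a score of 0, confirming that perfect play in Tic Tac Toe ends in a draw; the alpha >= beta test is the pruning step that Chapter 6 covers.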

     

    Written clearly and built from the ground up, this book appeals to general readers and industry professionals who want to learn about rule-based AI and deep reinforcement learning, as well as to students and educators in computer science and programming courses.

     

    List of Figures

    Preface

    Acknowledgments

    Section I Rule-Based AI

    Chapter 1 Rule-Based AI in the Coin Game

    Chapter 2 Look-Ahead Search in Tic Tac Toe

    Chapter 3 Planning Three Steps Ahead in Connect Four

    Chapter 4 Recursion and MiniMax Tree Search

    Chapter 5 Depth Pruning in MiniMax

    Chapter 6 Alpha-Beta Pruning

    Chapter 7 Position Evaluation in MiniMax

    Chapter 8 Monte Carlo Tree Search

    Section II Deep Learning

    Chapter 9 Deep Learning in the Coin Game

    Chapter 10 Policy Networks in Tic Tac Toe

    Chapter 11 A Policy Network in Connect Four

    Section III Reinforcement Learning

    Chapter 12 Tabular Q-Learning in the Coin Game

    Chapter 13 Self-Play Deep Reinforcement Learning

    Chapter 14 Vectorization to Speed Up Deep Reinforcement Learning

    Chapter 15 A Value Network in Connect Four

    Section IV AlphaGo Algorithms

    Chapter 16 Implement AlphaGo in the Coin Game

    Chapter 17 AlphaGo in Tic Tac Toe and Connect Four

    Chapter 18 Hyperparameter Tuning in AlphaGo

    Chapter 19 The Actor-Critic Method and AlphaZero

    Chapter 20 Iterative Self-Play and AlphaZero in Tic Tac Toe

    Chapter 21 AlphaZero in Unsolved Games

    Bibliography

    Biography

    Mark H. Liu is an Associate Professor of Finance and the founding director of the MS Finance program at the University of Kentucky. He obtained his Ph.D. in finance from Boston College in 2004 and his M.A. in economics from Western University in Canada in 1998. Dr. Liu has more than 20 years of coding experience and is the author of two books: Make Python Talk (No Starch Press, 2021) and Machine Learning, Animated (CRC Press, 2023).