Free book about deep-learning approaches to chess, as used by engines like AlphaZero, Leela Chess Zero and Stockfish NNUE

Overview

Neural Networks For Chess

Free Book

  • Grab your free PDF copy HERE
  • Buy a printed copy HERE or HERE

Donations via PayPal are welcome.

Contents

AlphaZero, Leela Chess Zero and Stockfish NNUE revolutionized computer chess. This book gives a complete introduction to the technical inner workings of such engines.

The book is split into four chapters:

  1. The first chapter introduces neural networks and covers all the basic building blocks used to construct deep networks such as those used by AlphaZero. Contents include the perceptron, back-propagation and gradient descent, classification, regression, the multilayer perceptron, vectorization techniques, convolutional networks, squeeze-and-excitation networks, fully connected networks, batch normalization and rectified linear units, residual layers, and overfitting and underfitting. (A minimal sketch of a single neuron trained by gradient descent follows this list.)

  2. The second chapter introduces classical search techniques used for chess engines as well as those used by AlphaZero. Contents include minimax, alpha-beta search, and Monte Carlo tree search. (A short alpha-beta sketch also follows this list.)

  3. The third chapter shows how modern chess engines are designed. Aside from the ground-breaking AlphaGo, AlphaGo Zero and AlphaZero, we cover Leela Chess Zero, Fat Fritz, Fat Fritz 2 and efficiently updatable neural networks (NNUE), as well as Maia.

  4. The fourth chapter is about implementing a miniaturized AlphaZero. Hexapawn, a minimalistic version of chess, is used as the running example. Hexapawn is first solved by minimax search, and training positions for supervised learning are generated. Then, as a comparison, an AlphaZero-like training loop is implemented, in which training is done via self-play combined with reinforcement learning. Finally, AlphaZero-like training and supervised training are compared.
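
To give a flavour of the first chapter's building blocks, here is a minimal, self-contained sketch (not taken from the book) of a single sigmoid neuron trained by gradient descent on a toy AND problem; all names and numbers are purely illustrative.

    # A single sigmoid neuron trained by gradient descent on the AND function
    # (illustrative only, not the book's code).
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # inputs
    y = np.array([0.0, 0.0, 0.0, 1.0])                              # AND labels

    w = rng.normal(size=2)   # weights
    b = 0.0                  # bias
    lr = 0.5                 # learning rate

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(2000):
        p = sigmoid(X @ w + b)        # forward pass
        grad_z = (p - y) / len(y)     # gradient of the cross-entropy loss w.r.t. the pre-activation
        w -= lr * (X.T @ grad_z)      # gradient-descent step for the weights
        b -= lr * grad_z.sum()        # ... and for the bias

    print(np.round(sigmoid(X @ w + b), 2))  # tends toward the AND truth table [0, 0, 0, 1]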
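
The search techniques from the second chapter fit in a few lines as well. The sketch below is a complete minimax search with alpha-beta pruning for a tiny Nim-like game; it is illustrative only and does not use the book's code or API.

    # Minimax with alpha-beta pruning on a tiny Nim-like game: players
    # alternately remove 1-3 stones, and whoever takes the last stone wins.
    import math

    def alphabeta(stones, maximizing, alpha, beta):
        if stones == 0:
            # The player to move has no stones left: the previous player took
            # the last stone and won.
            return -1 if maximizing else 1
        best = -math.inf if maximizing else math.inf
        for take in (1, 2, 3):
            if take > stones:
                break
            value = alphabeta(stones - take, not maximizing, alpha, beta)
            if maximizing:
                best = max(best, value)
                alpha = max(alpha, best)
            else:
                best = min(best, value)
                beta = min(beta, best)
            if alpha >= beta:        # prune: the opponent will never allow this line
                break
        return best

    # Positions with a multiple of 4 stones are losses for the player to move.
    for n in range(1, 9):
        print(n, alphabeta(n, True, -math.inf, math.inf))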

Source Code

Just clone this repository or browse the files directly; it contains the complete source code for all examples in the book.

About

During COVID I worked from home a lot and saved approximately 1.5 hours of commuting time each day. I decided to use that time to do something useful (?) and wrote a book about computer chess. In the end I chose to release the book for free.

Profits

To be completely transparent, here is what I make from every paper copy sold on Amazon. The book retails for $16.95 (about 15 Euro).

  • printing costs $4.04
  • Amazon takes $6.78
  • my royalties are $6.13

Errata

If you find mistakes, please report them here - your help is appreciated!

Comments
  • 'Board' object has no attribute 'outcome'

    I just executed python mcts.py and received an error message:

        34 0
        Traceback (most recent call last):
          File "mcts.py", line 134, in <module>
            payout = simulate(node)
          File "mcts.py", line 63, in simulate
            while(board.outcome(claim_draw = True) == None):
        AttributeError: 'Board' object has no attribute 'outcome'

    opened by barvinog 5
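
    A likely cause, assuming an older python-chess package is installed: Board.outcome() exists only in newer python-chess releases, so an outdated installation raises exactly this AttributeError. Upgrading the package (pip install -U python-chess) should resolve it; alternatively, a version-agnostic game-over test such as the sketch below could stand in for the loop condition in simulate(). This is a sketch of a workaround, not the book's code.

        # Version-agnostic game-over check plus a random playout, assuming
        # the error stems from a python-chess release without Board.outcome().
        import random
        import chess

        def game_over(board: chess.Board) -> bool:
            if hasattr(board, "outcome"):                      # newer python-chess
                return board.outcome(claim_draw=True) is not None
            return board.is_game_over(claim_draw=True)         # older releases

        board = chess.Board()
        while not game_over(board):                            # random playout
            board.push(random.choice(list(board.legal_moves)))
        print(board.result(claim_draw=True))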
  • Invalid Reduction Key auto.

    Thank you for the source code of Chapter 5. I executed python mnx_generateTrainingData.py - OK. Then python sup_network.py - OK.

    Then I executed python sup_eval.py and got the error:

        Traceback (most recent call last):
          File "sup_eval.py", line 6, in <module>
            model = keras.models.load_model("supervised_model.keras")
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/saving.py", line 492, in load_wrapper
            return load_function(*args, **kwargs)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/saving.py", line 584, in load_model
            model = _deserialize_model(h5dict, custom_objects, compile)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/saving.py", line 369, in _deserialize_model
            sample_weight_mode=sample_weight_mode)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py", line 75, in symbolic_fn_wrapper
            return func(*args, **kwargs)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/training.py", line 229, in compile
            self.total_loss = self._prepare_total_loss(masks)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/engine/training.py", line 692, in _prepare_total_loss
            y_true, y_pred, sample_weight=sample_weight)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/losses.py", line 73, in __call__
            losses, sample_weight, reduction=self.reduction)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/utils/losses_utils.py", line 156, in compute_weighted_loss
            Reduction.validate(reduction)
          File "/home/barvinog/anaconda3/lib/python3.7/site-packages/keras/utils/losses_utils.py", line 35, in validate
            raise ValueError('Invalid Reduction Key %s.' % key)
        ValueError: Invalid Reduction Key auto.

    opened by barvinog 2
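
    A likely cause, assuming the model file was written with tf.keras while sup_eval.py loads it with the standalone keras package: standalone Keras 2.x does not recognize the loss-reduction key 'auto' that tf.keras writes into the saved loss configuration. Loading the model with tf.keras, or keeping the saving and loading environments identical, should avoid the error. The snippet below is a sketch of that workaround, not the book's code.

        # Load the model with tf.keras instead of the standalone keras
        # package, assuming it was saved by tf.keras (which writes
        # reduction='auto' into the loss config).
        from tensorflow import keras

        model = keras.models.load_model("supervised_model.keras")
        model.summary()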
  • Chapter 2 convolution.py

    Hello Dominik, I'm a Python novice, but an experienced chess player and, long ago, a developer of software for infinite-dimensional optimization. I've installed the latest Python on a 64-core Ryzen Threadripper with two NVIDIA 3090 graphics cards. I'm studying your very helpful overview of modern chess-engine programming and started with Chapter 2, where all examples except convolution.py work fine. I installed the module scikit-image because skimage didn't load correctly. Then (without changing the source of convolution.py) I get the following warning:

        PS C:\Users\diete\Downloads\neural_network_chess-1.3\chapter_02> python.exe .\convolution.py
        (640, 480)
        Lossy conversion from float64 to uint8. Range [-377.0, 433.0]. Convert image to uint8 prior to saving to suppress this warning.
        PS C:\Users\diete\Downloads\neural_network_chess-1.3\chapter_02>

    and after a few seconds Python exits without any further output. Help with this problem is kindly appreciated. Dieter

    opened by d-kraft 1
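
    For reference, this warning most likely comes from skimage.io.imsave being handed a float64 convolution result whose values fall outside [0, 1] (here roughly -377 to 433); skimage then clips and converts on its own and warns about it. If the script simply writes an output image and stops, exiting without further console output is probably the expected behaviour. Below is a sketch of one way to silence the warning; the variable names are illustrative, not taken from convolution.py.

        # Rescale a float64 result into 0..255 and convert to uint8 before
        # saving, so skimage.io.imsave does not need a lossy conversion.
        import numpy as np
        from skimage import io

        def to_uint8(img: np.ndarray) -> np.ndarray:
            img = img - img.min()              # shift the minimum to 0
            if img.max() > 0:
                img = img / img.max()          # scale into [0, 1]
            return (img * 255).astype(np.uint8)

        result = np.random.uniform(-377.0, 433.0, size=(480, 640))  # stand-in data
        io.imsave("convolved.png", to_uint8(result))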
Releases
  • v1.5 (latest)

Owner
Dominik Klein (random code snippets, including the chess program Jerry)