DecisionHoldem

Overview

An imperfect-information game is a game with asymmetric information. Compared with perfect-information games, imperfect-information games are more common in real life. Artificial intelligence for imperfect-information games such as poker has made significant progress in recent years. The success of superhuman poker AIs such as Libratus and DeepStack has drawn researchers' attention to poker research. However, the lack of open-source code limits the development of Texas Hold'em AI to some extent.

This project introduces DecisionHoldem, a high-level AI for heads-up no-limit Texas Hold'em that uses safer depth-limited solving with diverse opponent ranges to reduce the exploitability of its strategy. DecisionHoldem consists of two main parts: the blueprint strategy and real-time search.

For the blueprint strategy, DecisionHoldem first applies hand abstraction and action abstraction to obtain an abstracted game. We then ran the Linear CFR algorithm on the abstracted game tree to compute the blueprint strategy, training on a 48-core workstation for 3-4 days for a total of about 200 million iterations.
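
For intuition, the following is a minimal sketch of a Linear CFR update on a single information set. It is illustrative only and not the project's implementation; the Infoset structure and function names are hypothetical.

// Illustrative Linear CFR update for one information set (a minimal sketch,
// not the project's code in Multi_Blureprint.h / BlueprintMCCFR.cpp).
#include <algorithm>
#include <vector>

struct Infoset {                       // hypothetical structure for illustration
    std::vector<double> regret;        // cumulative counterfactual regret per action
    std::vector<double> strategy_sum;  // weighted average-strategy accumulator
};

// Regret matching: the current strategy is proportional to positive regrets.
std::vector<double> current_strategy(const Infoset& I) {
    std::vector<double> sigma(I.regret.size());
    double pos_sum = 0.0;
    for (double r : I.regret) pos_sum += std::max(r, 0.0);
    for (size_t a = 0; a < sigma.size(); ++a)
        sigma[a] = pos_sum > 0 ? std::max(I.regret[a], 0.0) / pos_sum
                               : 1.0 / sigma.size();       // uniform fallback
    return sigma;
}

// Linear CFR weights the contribution of iteration t by t, which is equivalent
// to discounting the accumulated regrets and average strategy by t / (t + 1)
// after each iteration.
void linear_cfr_update(Infoset& I, const std::vector<double>& action_value,
                       double node_value, const std::vector<double>& sigma,
                       double reach_prob, int t) {
    const double discount = static_cast<double>(t) / (t + 1);
    for (size_t a = 0; a < I.regret.size(); ++a) {
        I.regret[a] += action_value[a] - node_value;   // instantaneous regret
        I.strategy_sum[a] += reach_prob * sigma[a];    // average-strategy weight
        I.regret[a] *= discount;                       // linear discounting
        I.strategy_sum[a] *= discount;
    }
}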

For real-time search, we propose a depth-limited subgame solving algorithm that is safer than Modicum's by taking more possible ranges of opponent private hands into account at off-tree nodes. This significantly improves playing strength by reducing the exploitability of the strategy. The details of the algorithm will be presented in forthcoming articles.
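
The exact algorithm is left to the paper. Purely as a conceptual illustration, and not the paper's method, one conservative way to value a depth-limit leaf is to evaluate it under several candidate opponent ranges and back up the worst case, so the resulting strategy is not tailored to a single assumed range. All names below are hypothetical.

// Conceptual illustration only (not the paper's algorithm): value an off-tree
// leaf under several candidate opponent ranges and keep the most pessimistic
// estimate for the searching player.
#include <algorithm>
#include <functional>
#include <limits>
#include <vector>

using Range = std::vector<double>;  // hypothetical: probability per opponent hand

double safe_leaf_value(const std::vector<Range>& candidate_ranges,
                       const std::function<double(const Range&)>& leaf_value) {
    double worst = std::numeric_limits<double>::infinity();
    for (const Range& range : candidate_ranges)
        worst = std::min(worst, leaf_value(range));  // worst case over ranges
    return worst;                                    // backed up during subgame solving
}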

To evaluate the performance of DecisionHoldem, we played it against Slumbot and OpenStackTwo. Slumbot is the champion of the 2018 Annual Computer Poker Competition and the only publicly available high-level poker AI. Over about 20,000 games against Slumbot, DecisionHoldem's average profit exceeded 730 mbb/h (milli-big-blinds per hand), and it ranked first on the leaderboard as of November 26, 2021 (DecisionHoldem is listed as zqbAgent[2,3]). OpenStackTwo, built into the OpenHoldem Texas Hold'em confrontation platform, is a reproduction of DeepStack. Over about 2,000 games against OpenStackTwo[1], DecisionHoldem's average profit exceeded 700 mbb/h.

To promote the development of artificial intelligence for imperfect-information games, we have open-sourced the relevant code of DecisionHoldem together with tools for playing against Slumbot, OpenHoldem, and human players[5]. We also provide a simple Leduc poker program that helps in understanding the algorithm framework and its mechanism.


Blueprint Strategy

Requirements

  • A C++ compiler with C++11 support
  • GraphViz (for game tree visualization)

Installation

  1. Clone the repository:
$ git clone https://github.com/AI-Decision/DecisionHoldem.git
  2. Copy the following files to DecisionHoldem/PokerAI/cluster:
sevencards_strength.bin
preflop_hand_cluster.bin
flop_hand_cluster.bin
turn_hand_cluster.bin
river_hand_cluster.bin
blueprint_strategy.dat

These data can be obtained through Baidu Netdisk.

Link: https://pan.baidu.com/s/157n-H1ECjEryAx0Z03p2_w
Extraction code: q1pv

Training Blueprint Strategy

  • Compile and Run:
$ cd DecisionHoldem/PokerAI
$ g++ Main.cpp -o Main.o -std=c++11 -mcmodel=large -lpthread
$ ./Main.o 0
  • When training finishes, the blueprint strategy "blueprint_strategy.dat" is saved in DecisionHoldem/PokerAI/cluster.

Evaluation for Blueprint Strategy

  • Best Response:
$ cd DecisionHoldem/PokerAI
$ g++ Main.cpp -o Main.o -std=c++11 -mcmodel=large -lpthread
$ ./Main.o 1
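
Running Main.o with argument 1 evaluates the blueprint by computing a best response (see Exploitability.h). As a rough sketch of the recursion involved, shown here on a toy perfect-information tree for brevity (a real imperfect-information best response additionally aggregates counterfactual values over information sets and reach probabilities), the best responder maximizes at its own nodes and takes an expectation under the fixed strategy elsewhere. The Node structure below is hypothetical.

// Illustrative best-response recursion (not the project's Exploitability.h).
// A full imperfect-information best response also groups nodes by information
// set and weights them by opponent and chance reach probabilities.
#include <algorithm>
#include <vector>

struct Node {                      // hypothetical structure for illustration
    bool terminal = false;
    double payoff = 0.0;           // payoff to the best responder at terminals
    int player = 0;                // player to act at this node
    std::vector<Node*> children;
    std::vector<double> strategy;  // fixed strategy's action probabilities
};

double best_response_value(const Node* node, int br_player) {
    if (node->terminal) return node->payoff;
    if (node->player == br_player) {                 // best responder: maximize
        double best = -1e300;
        for (const Node* child : node->children)
            best = std::max(best, best_response_value(child, br_player));
        return best;
    }
    double value = 0.0;                              // fixed strategy: expectation
    for (size_t a = 0; a < node->children.size(); ++a)
        value += node->strategy[a] * best_response_value(node->children[a], br_player);
    return value;
}

Exploitability is, roughly, how much such a best responder gains against the blueprint, commonly averaged over both seats and reported in mbb/h.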

Interface For Holdem Game

AlascasiaHoldem.so and blueprint.so provide interfaces for the agent to play against other agents or humans in real game scenarios.

  • AlascasiaHoldem.so
    It plays using real-time search.
  • Blueprint.so
    It only uses the blueprint strategy to play.

Human Against Agent

The GUI application is based on PyPokerGUI.

  • Run:
$ cd DecisionHoldem/PokerAI/
$ python DecisionHoldem/pypokergui/server/poker.py 8000

Note that AlascasiaHoldem.so must be in the directory "DecisionHoldem/PokerAI/".

Result

Open localhost:8000 in a browser to play against the agent through the GUI.

Slumbot Against Agent

https://www.slumbot.com/#
As of November 26, 2021, DecisionHoldem (registered as zqbAgent) ranked first on the leaderboard.

  • Run:
$ cd DecisionHoldem/PokerAI/
$ python DecisionHoldem/pypokergui/play_with_slumbot.py


OpenStackTwo Against Agent

http://holdem.ia.ac.cn/#/battle

  • Run:
$ cd DecisionHoldem/PokerAI/
$ python DecisionHoldem/pypokergui/play_with_ia_v4.py 888891 2 Bot 2000 OpenStackTwo

The Agent_against_OpenStackTwo file records the information for each of the 2,000 games, including our agent's action probabilities, the opponent's actions, and the game state.

PokerAI Project Frameworks

├── Poker            # game tree code
│   ├── Node.h              # data structure of every node in the game tree
│   ├── Bulid_Tree.h        # traverses every possible hole card, community card, and legal action to build the game tree
│   ├── Exploitability.h    # computes the exploitability of a game tree policy
│   ├── Save_load.h         # saves a game tree policy to a file and loads a file to build a game tree
│   └── Visualize_Tree.h    # visualizes the game tree
│
├── util            # utility code
│   ├── Engine.h            # computes game results (who wins and how many chips they receive) and gets the cluster of a player's hand
│   ├── Exploitability.h    # computes the best-response strategy
│   ├── ThreadPool.h        # multithreading control
│   └── Randint.h           # class for generating random numbers
│
├── Poker           # the foundation classes of the poker game
│   ├── Card.h              # card class; card ids range from 0 to 51
│   ├── Deck.h              # deck class containing 52 cards
│   ├── Player.h            # player class; attributes include initial chips, bet chips, and small/big blind
│   ├── Table.h             # attributes include players, pot, and deck
│   └── State.h             # game state; contains each player's infoset and legal actions
│
├── Depth_limit_Search.h # the real-time search algorithm for each subgame
├── Multi_Blureprint.h   # the blueprint MCCFR algorithm, multithreaded
└── BlueprintMCCFR.cpp   # the blueprint MCCFR algorithm, single-threaded

Details of BlueprintMCCFR.h

blueprint_cfr function
  • MCCFR algorithm for training the blueprint strategy.
blueprint_cfrp function
  • MCCFR algorithm with pruning for training the blueprint strategy.
dfs_discount function
  • Discounts the accumulated regret values.
update_strategy function
  • Updates the average strategy of the blueprint.
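
blueprint_cfrp corresponds to the pruned variant. As a rough illustration of regret-based pruning in MCCFR (the threshold constant and names below are hypothetical, not the project's), actions whose cumulative regret has fallen far below zero are skipped during traversal:

// Illustrative regret-based pruning (not the project's blueprint_cfrp):
// actions with strongly negative cumulative regret are skipped this iteration,
// saving traversal time on clearly dominated actions.
#include <vector>

constexpr double kPruneThreshold = -3.0e8;  // hypothetical threshold

std::vector<bool> actions_to_explore(const std::vector<double>& cumulative_regret) {
    std::vector<bool> explore(cumulative_regret.size());
    for (size_t a = 0; a < cumulative_regret.size(); ++a)
        explore[a] = cumulative_regret[a] > kPruneThreshold;  // prune the rest
    return explore;
}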

Visualize Game Tree

  • After running visualizationsearch(root, "blueprint_subnode.stgy"), a 'blueprint_subnode.stgy' file is generated in the current folder.
$ cd GraphViz/bin
$ dot -Tpng blueprint_subnode.stgy > temp.png

Game tree example

(example image of a visualized game tree)

Related projects

The GUI is based on PyPokerGUI: https://github.com/ishikota/PyPokerGUI
Demo project: https://github.com/zqbAse/PokerAI_Sim

Note

[1] www.holdem.ia.ac.cn
[2] www.slumbot.com
[3] https://github.com/ericgjackson/slumbot2017/issues/11
[4] Development Environment: A workstation with an Intel(R) Xeon(R) Gold 6240R CPU and 512GB of RAM.
[5] Currently some source codes only provide compiled files, and they will be open sourced in the near future.

Authors

The project leader is Junge Zhang, and the main contributors are Dongdong Bai and Qibin Zhou. Kaiqi Huang co-supervises the project. In recent years, the team has devoted itself to reinforcement learning, multi-agent systems, and decision-making intelligence.

If you use DecisionHoldem in your research, please cite the following paper.

Qibin Zhou, Dongdong Bai, Junge Zhang, Fuqing Duan, Kaiqi Huang. DecisionHoldem: Safe Depth-Limited Solving With Diverse Opponents for Imperfect-Information Games

License

GNU Affero General Public License v3.0
