Hand Cricket

Overview

This is a computer-vision-based implementation of the popular childhood game 'Hand Cricket/Odd or Even' in Python. Behind the game is a CNN model trained to identify the hand signs for the numbers 0, 1, 2, 3, 4, 5 and 6. For those who have never played this game, the rules are explained below.

The Game in action

hand-cricket.mov

Installation

  • You need Python (3.6) & git (to clone this repo)
  • git clone git@github.com:abhinavnayak11/Hand-Cricket.git . : Clone this repo
  • cd path/to/Hand-Cricket : cd into the project folder
  • conda env create -f environment.yml : Create a virtual env with all the dependencies
  • conda activate comp-vision : Activate the virtual env
  • python src/hand-cricket.py : Run the script

Game rules

Hand signs

  • You can play the numbers 0, 1, 2, 3, 4, 5 and 6. Their hand signs are shown here

Toss

  • You can choose either odd or even (say you choose odd)
  • Both players play a number (say the players play 3 & 6). Add those numbers (3 + 6 = 9).
  • Check whether the sum is odd or even (9 is odd).
  • If the result is the same as what you chose, you have won the toss, else you have lost (9 is odd, you chose odd, hence you win). A minimal sketch of this logic is shown below.
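
A minimal Python sketch of the toss logic described above. The function name and structure are illustrative only, not the actual implementation in hand-cricket.py:

```python
def play_toss(your_choice: str, your_number: int, opponent_number: int) -> bool:
    """Return True if you win the toss; your_choice is 'odd' or 'even'."""
    total = your_number + opponent_number      # e.g. 3 + 6 = 9
    parity = 'odd' if total % 2 else 'even'    # 9 is odd
    return parity == your_choice               # you chose odd, so you win

# Example from the rules above:
print(play_toss('odd', 3, 6))  # True
```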

The Game

  • The person who wins the toss is the batsman, the other player is the bowler. (In the next version of the game, the toss winner will be allowed to choose between batting and bowling.)
  • Scoring Runs:
    • Both players play a number.
    • The batsman's number is added to his score only when the two numbers are different.
    • 0 has a special power: if the batsman plays 0 and the bowler plays any number other than 0, the bowler's number is added to the batsman's score.
  • Getting out:
    • The batsman gets out when both players play the same number, even if both numbers are 0.
  • Winning/Losing:
    • After both players have finished their innings, the player with the higher score wins the game. A sketch of one innings under these rules is shown below.
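
A minimal Python sketch of one innings under these rules. The names and structure are illustrative only, not the actual implementation in hand-cricket.py:

```python
def play_innings(batsman_numbers, bowler_numbers):
    """Score one innings; each argument is the sequence of numbers (0-6) played."""
    score = 0
    for bat, bowl in zip(batsman_numbers, bowler_numbers):
        if bat == bowl:                        # same number (even 0 and 0): out
            break
        score += bowl if bat == 0 else bat     # 0 scores the bowler's number
    return score

# Batsman plays 3, 0, 4, 2 and the bowler plays 6, 5, 1, 2:
# runs are 3 + 5 + 4, then out on the matching 2s.
print(play_innings([3, 0, 4, 2], [6, 5, 1, 2]))  # 12
```

Both players bat once; the higher innings total wins.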

Game code : hand-cricket.py


Project Details

  1. Data Collection :
    • After failing to find a suitable dataset, I created my own dataset using my phone camera.
    • The dataset contains a total of 1848 images. To ensure generality (i.e. to prevent overfitting to one type of hand in one type of environment), images were taken of 4 persons, in 6 different lighting conditions and against 3 different backgrounds.
    • Samples of images after augmentation are shown here: images
    • The data is uploaded at: github | kaggle. Data collection code: collect-data.py
  2. Data preprocessing :
    • A PyTorch Dataset was created to handle the preprocessing of the image dataset (code : dataset.py).
    • Images were augmented before training. The following augmentations were used: random rotation, random horizontal flip and normalization. All images were resized to 128x128. A sketch of this pipeline is shown after this list.
    • Images were divided into a training and a validation set. The training set was used to train the model, whereas the validation set was used to validate the model's performance.
  3. Model training :
    • Different pre-trained models (resnet18, densenet121 etc., pre-trained on the ImageNet dataset) from the PyTorch library were trained on this dataset. All layers except the last 2 were frozen before training, so the pre-trained model extracts useful features while the last 2 layers are fine-tuned on my dataset. A minimal fine-tuning sketch is shown after this list.
    • The learning rate was chosen by trial and error and was different for each model.
    • Of all the models trained, densenet121 performed the best, with a validation accuracy of 0.994.
    • Training the model : train.py, engine.py, training-notebook
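
A minimal sketch of the preprocessing and fine-tuning steps described above. The transform parameters, the ImageNet normalization statistics and the choice to train only the replaced classifier head are assumptions made for illustration; the exact settings live in dataset.py, train.py and engine.py.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Augmentations described above (rotation angle and normalization stats are assumed).
train_transforms = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.RandomRotation(15),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pre-trained DenseNet-121; freeze the feature extractor.
model = models.densenet121(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier with a 7-way head for the hand signs 0-6;
# only these parameters receive gradients.
model.classifier = nn.Linear(model.classifier.in_features, 7)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```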

Future Scope

  • Although this was a fun application, the dataset could also be used in applications such as sign language recognition.


License: MIT

Owner
Abhinav R Nayak
Aspiring data scientist