An Ensemble of CNN (Python 3.5.1 Tensorflow 1.3 numpy 1.13)

Overview

An Ensemble of CNN

Machine Learning project 2017

Read me

NOTE: All commands should be run inside the tensorflow environment

Dependencies

Python 3.5.1, TensorFlow 1.3, numpy 1.13

Although the model might work with previous versions of the above libraries, these are the versions that were used in development. Link to the dataset: https://www.kaggle.com/xainano/handwrittenmathsymbols
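A quick way to confirm that the installed versions match is to print them from inside the tensorflow environment. A minimal check, assuming the standard package names:

# Minimal version check, run inside the tensorflow environment.
import sys
import numpy as np
import tensorflow as tf

print('Python    :', sys.version.split()[0])   # expected 3.5.1
print('TensorFlow:', tf.__version__)           # expected 1.3
print('numpy     :', np.__version__)           # expected 1.13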

Preparation

Do the following steps only if the folders MathExprJpeg/train_data and MathExprJpeg/test_data and the files x_data.npy, y_data.npy and labels.npy do not exist; otherwise go directly to the Running main section.

In order to run the model, the training and testing folders must be created. If the test_data and train_data folders do not exist in the MathExprJpeg folder, create them by running the script create_train_test_data.py like this:

python create_train_test_data.py
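The exact behaviour is defined in create_train_test_data.py itself; as a rough, hypothetical illustration of the kind of split it performs, a sketch like the following copies a fixed fraction of each symbol class into MathExprJpeg/train_data and MathExprJpeg/test_data (the source folder name and the 80/20 ratio are assumptions):

# Hypothetical sketch of a per-class train/test split; the real logic is in create_train_test_data.py.
import os
import random
import shutil

SOURCE_DIR = 'MathExprJpeg/extracted_images'   # assumed location of the Kaggle images
TRAIN_DIR = 'MathExprJpeg/train_data'
TEST_DIR = 'MathExprJpeg/test_data'
TRAIN_FRACTION = 0.8                           # assumed split ratio

for symbol in os.listdir(SOURCE_DIR):
    images = os.listdir(os.path.join(SOURCE_DIR, symbol))
    random.shuffle(images)
    split = int(len(images) * TRAIN_FRACTION)
    for target, names in ((TRAIN_DIR, images[:split]), (TEST_DIR, images[split:])):
        os.makedirs(os.path.join(target, symbol), exist_ok=True)
        for name in names:
            shutil.copy(os.path.join(SOURCE_DIR, symbol, name),
                        os.path.join(target, symbol, name))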

Then the data files x_data.npy, y_data.npy and labels.npy must be created in order to speed up training. If they do not exist, create them by running the script create_datafiles.py like this:

python create_datafiles.py
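The .npy files simply cache the training images and labels as NumPy arrays so they do not have to be decoded from JPEG on every run. A rough, hypothetical sketch of how such files could be produced (the image size, greyscale conversion, one-hot encoding and the use of PIL are assumptions; the authoritative logic is in create_datafiles.py):

# Hypothetical sketch of building x_data.npy, y_data.npy and labels.npy from train_data.
import os
import numpy as np
from PIL import Image

TRAIN_DIR = 'MathExprJpeg/train_data'
IMG_SIZE = 48                                   # assumed input resolution

labels = sorted(os.listdir(TRAIN_DIR))          # one class per sub-folder
x_list, y_list = [], []
for index, label in enumerate(labels):
    class_dir = os.path.join(TRAIN_DIR, label)
    for name in os.listdir(class_dir):
        img = Image.open(os.path.join(class_dir, name)).convert('L')
        img = img.resize((IMG_SIZE, IMG_SIZE))
        x_list.append(np.asarray(img, dtype=np.float32) / 255.0)
        one_hot = np.zeros(len(labels), dtype=np.float32)
        one_hot[index] = 1.0
        y_list.append(one_hot)

np.save('x_data.npy', np.array(x_list))
np.save('y_data.npy', np.array(y_list))
np.save('labels.npy', np.array(labels))
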
Running main

After the preparation steps are done, set the preferred modes in cnn_math_main.py file. This is done by changing the parameters at the top of the file:

# set if data should be read in advance
fileread = True
ensemble_mode = False 

fileread = True means that the data will be read from the previously created x_data.npy and y_data.npy files. Setting this to True is highly recommended. ensemble_mode = False means that the model will not be run in ensemble mode. This is recommended, as the ensemble mode is performance heavy and cannot be guaranteed to work in the latest releases.
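In practice the flag only decides where the arrays come from; a hedged sketch of the kind of branch it controls (the fallback helper name is hypothetical, not part of the repository):

# Hypothetical sketch of the effect of the fileread flag.
import numpy as np

if fileread:
    # Fast path: load the cached arrays created by create_datafiles.py.
    x_data = np.load('x_data.npy')
    y_data = np.load('y_data.npy')
    labels = np.load('labels.npy')
else:
    # Slow path: decode the JPEGs in MathExprJpeg/train_data on the fly.
    # build_arrays_from_folder is a hypothetical helper, not part of the repo.
    x_data, y_data, labels = build_arrays_from_folder('MathExprJpeg/train_data')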

It is recommended to change the name of the logging file for each run:

writer = tf.summary.FileWriter('./logs/cnn_math_logs_true_2ep_r1')
writer.add_graph(sess.graph)
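
One way to avoid editing the name by hand is to append a timestamp so every run writes to its own log directory. A small sketch, assuming the same writer setup as above (the naming scheme is not part of the original code):

# Hypothetical alternative: timestamped log directory so runs never overwrite each other.
import time
import tensorflow as tf

run_name = time.strftime('cnn_math_logs_%Y%m%d_%H%M%S')
writer = tf.summary.FileWriter('./logs/' + run_name)
writer.add_graph(sess.graph)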

Also set the preferred values for training_epochs and batch_size:

training_epochs = 40
batch_size = 20
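
batch_size controls how many samples are fed to each training step, so the number of steps per epoch follows directly from the dataset size. A schematic illustration (the actual training loop lives in cnn_math_main.py):

# Schematic mini-batch loop for one epoch; the real training step is in cnn_math_main.py.
batches_per_epoch = len(x_data) // batch_size   # e.g. 10000 samples / 20 = 500 steps

for step in range(batches_per_epoch):
    start = step * batch_size
    x_batch = x_data[start:start + batch_size]
    y_batch = y_data[start:start + batch_size]
    # ...feed x_batch and y_batch to the training op here...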

When all of the above has been done, the model can be run with the command:

python cnn_math_main.py

The model has been known to occasionally fail with errors while reading files; the source of these errors is unknown. If such an error occurs, remove the generated folders and files and recreate them by running the following commands:

rm -r MathExprJpeg/train_data
rm -r MathExprJpeg/test_data
rm x_data.npy
rm y_data.npy
rm labels.npy

python create_train_test_data.py
python create_datafiles.py

Then try to run cnn_math_main.py again. Start TensorBoard with:

tensorboard --logdir=./logs

to see the graphs for the scalars, the images being processed, etc.
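
The scalars and images shown in TensorBoard come from tf.summary ops written through the FileWriter configured earlier. A hedged sketch of the typical TensorFlow 1.x pattern (the tensor names here are assumptions; the real ones are defined in cnn_math_main.py):

# Typical TF 1.x summary pattern; tensor names are assumed, not taken from the repo.
tf.summary.scalar('loss', loss)                            # assumed loss tensor
tf.summary.scalar('accuracy', accuracy)                    # assumed accuracy tensor
tf.summary.image('input_images', x_image, max_outputs=3)   # assumed image batch tensor
merged_summary = tf.summary.merge_all()

# Inside the training loop:
summary, _ = sess.run([merged_summary, train_op], feed_dict=feed)
writer.add_summary(summary, global_step=step)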

The model is implemented as a class in cnn_model.py. If this file is tampered with, the model may break.

Happy predicting
