Supporting code for the Neograd algorithm

Neograd

This repo supports the paper Neograd: Gradient Descent with a Near-Ideal Learning Rate, which introduces the "Neograd" algorithm. The paper and associated code are by Michael F. Zimmer; the paper has been submitted to JMLR.

Getting Started

Download the code. Paths within the program are relative.

Prerequisites

Python 3
Jupyter notebook

Installing

Unzip/clone the repo. You should see this directory structure:

neograd/
    libs/
    notebooks/
    figs/

These names are self-explanatory. Only "notebooks" can be renamed without interfering with the relative paths.

Running Notebooks

After cd-ing into the "notebooks" directory, open a notebook in Jupyter and execute the cells. If you uncomment certain lines (the savefig commands), figures will be saved for you; some of these are the same figures that appear in the aforementioned paper.

Descriptions of notebooks

These experiment notebooks evaluate the algorithms against the named cost function (a minimal worked example follows the list):
EXPT_2Dshell
EXPT_Beale
EXPT_double
EXPT_quartic
EXPT_sigmoid-well
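
For orientation, here is a minimal, self-contained illustration of the kind of experiment these notebooks run: plain gradient descent on the Beale cost function (the one evaluated in EXPT_Beale). This is only a sketch; the repo's own cost/gradient and algorithm code live in libs/costgrad_vec and libs/algos_vec and are organized differently, and none of the names below are from the repo.

```python
import numpy as np

def beale_cost_grad(p):
    """Cost and gradient of the Beale function at parameter vector p."""
    x, y = p
    t1 = 1.5 - x + x * y
    t2 = 2.25 - x + x * y**2
    t3 = 2.625 - x + x * y**3
    cost = t1**2 + t2**2 + t3**2
    dcost_dx = 2 * (t1 * (y - 1) + t2 * (y**2 - 1) + t3 * (y**3 - 1))
    dcost_dy = 2 * x * (t1 + 2 * t2 * y + 3 * t3 * y**2)
    return cost, np.array([dcost_dx, dcost_dy])

p = np.array([1.0, 1.0])   # starting parameter vector
alpha = 0.01               # fixed learning rate (Neograd instead adapts alpha)
for _ in range(2000):
    cost, grad = beale_cost_grad(p)
    p = p - alpha * grad
print(p, cost)             # Beale's global minimum is at (3, 0.5)
```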

These notebooks contain additional tests:
EXPT_hybrid
EXPT_manual
EXPT_momentum

Descriptions of libraries

algos_vec
Functions that are central to the GD family and Neograd family.

common
Functions for computing rho and alpha, and for tracking the results of a run.

common_vec
Functions used by algos_vec that aren't central to the algorithms. These functions assume the parameter vector is a numpy array.

costgrad_vec
Aggregates all the functions needed to compute the cost and gradient of the specific cost functions examined in the paper.

params
Contains all global parameters (not to be confused with the parameter vector being optimized). Also present is a function that returns a "good choice" of alpha for each algorithm/cost-function combination, as determined by trial and error.
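
A hedged sketch of that lookup pattern follows; the actual function name, signature, and hand-tuned values in the repo differ, and the numbers below are placeholders, not values from the paper.

```python
# Hypothetical alpha-lookup table, keyed by (algorithm, cost function).
_GOOD_ALPHA = {
    ("GD",      "Beale"): 1e-3,   # placeholder value
    ("Neograd", "Beale"): 1e-2,   # placeholder value
}

def good_alpha(algo, costfcn):
    """Return a trial-and-error starting alpha for an algorithm/cost pair."""
    return _GOOD_ALPHA[(algo, costfcn)]
```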

plotting
The plotting functions are passed the dictionaries of results returned by the optimization runs.
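
A minimal sketch of that pattern (the repo's own plotting functions differ, and the "cost" key here is illustrative; see "A few details" below for the results dictionary):

```python
import matplotlib.pyplot as plt

def plot_cost(results):
    """Plot one statistic from a results dictionary."""
    plt.semilogy(results["cost"])   # log scale: the cost spans many decades
    plt.xlabel("iteration")
    plt.ylabel("cost")
    plt.show()
```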

A few details

"p" represents the parameter vector in the repo; note this differs from "theta" which is used in the paper.

Statistics during a run are accumulated in a dictionary of lists. The keys are the names of the statistics, and the values are lists. Before entering the main loop, the names/keys must be declared; this is done in the function "init_results". After each iteration, a value is appended to each list; this is done in the function "update_results". Both functions are in the "common" library.
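
A minimal sketch of this pattern, assuming illustrative statistic names ("cost", "alpha"); the actual signatures in the "common" library may differ:

```python
def init_results(names):
    """Declare the statistics to track: one empty list per key."""
    return {name: [] for name in names}

def update_results(results, **stats):
    """Append this iteration's value to each statistic's list."""
    for name, value in stats.items():
        results[name].append(value)

results = init_results(["cost", "alpha"])   # before the main loop
for it in range(3):                         # stand-in for the main loop
    update_results(results, cost=1.0 / (it + 1), alpha=0.01)
print(results)   # {'cost': [1.0, 0.5, 0.333...], 'alpha': [0.01, 0.01, 0.01]}
```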

If you set the total number of iterations ("num") too high, you may encounter underflow errors and their ramifications. This is because the Neograd algorithm drives the error down so far that it bumps up against machine precision. There are a number of sophisticated ways to handle this, but for the purposes here it is enough to simply stop the optimization before it becomes an issue.
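
One simple way to do that, as a sketch (the threshold and names below are illustrative, not from the repo):

```python
import numpy as np

EPS = np.finfo(float).eps          # ~2.2e-16 for float64

cost = 1.0
for it in range(10000):
    cost *= 0.9                    # stand-in for one optimization step
    if cost < 100 * EPS:           # stop well before machine precision
        print(f"stopping at iteration {it}: cost={cost:.2e}")
        break
```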

The code on GitHub also supports an alternative definition of rho, which is discussed in an appendix of the paper. To use it, simply change the parameter "g_rhotype" from "new" to "original".
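
For example (assuming, per the library descriptions above, that this global lives in the params library):

```python
# In libs/params.py: select the alternative rho definition from the
# paper's appendix ("new" is the default).
g_rhotype = "original"
```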

Author

Michael F. Zimmer

License

This project is licensed under the MIT license.
