Disturbing Target Values for Neural Network regularization: attacking the loss layer to prevent overfitting

Overview

This repository implements regularization methods that disturb target values at the loss layer: DisturbLabel and Directional DisturbLabel for classification, and DisturbValue and DisturbError for regression.

1. Classification Task

PyTorch implementation of DisturbLabel: Regularizing CNN on the Loss Layer [CVPR 2016], extended with the Directional DisturbLabel method.

The classification code is built on top of the https://github.com/amirhfarzaneh/disturblabel-pytorch/blob/master/README.md project and uses the ResNet-18 implementation from https://github.com/huyvnphan/PyTorch_CIFAR10.

Directional DisturbLabel

  # Directional DisturbLabel: disturb a label only when the (L2-normalized)
  # predicted probability of the true class exceeds 0.5, i.e. when the model
  # is already confidently correct on that sample.
  if args.mode == 'ddl' or args.mode == 'ddldr':
      out = F.softmax(output, dim=1)       # class probabilities
      norm = torch.norm(out, dim=1)        # L2 norm of each probability vector
      out = out / norm[:, None]            # L2-normalize the probabilities
      idx = []                             # indices of confident, correct samples
      for i in range(len(out)):
          if out[i, target[i]] > .5:
              idx.append(i)

      if len(idx) > 0:
          target[idx] = disturb(target[idx]).to(device)  # disturb only those labels
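For reference, here is a minimal sketch of what the disturb call used above could look like for vanilla DisturbLabel, which replaces a label with a class drawn uniformly at random with probability alpha/100. This is an illustrative assumption; the actual implementation in main_ddl.py may differ in details such as the sampling distribution.

    import torch

    def disturb(targets, alpha=20, num_classes=10):
        # With probability alpha/100, replace each label with a class drawn
        # uniformly at random (the true class may be re-drawn).
        mask = torch.rand(len(targets), device=targets.device) < alpha / 100.0
        random_labels = torch.randint(0, num_classes, (len(targets),), device=targets.device)
        return torch.where(mask, random_labels, targets)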

Usage

python main_ddl.py --mode=dl --alpha=20

Most important arguments

--dataset - which dataset to use

Possible values:

value dataset
MNIST MNIST
FMNIST Fashion MNIST
CIFAR10 CIFAR-10
CIFAR100 CIFAR-100
ART Art Images: Drawing/Painting/Sculptures/Engravings
INTEL Intel Image Classification

Default: MNIST

--mode - regularization method applied

Possible values:

value method
noreg Without any regularization
dl Vanilla DisturbLabel
ddl Directional DisturbLabel
dropout Dropout
dldr DisturbLabel+Dropout
ddldr Directional DisturbLabel+Dropout

Default: ddl

--alpha - alpha for vanilla DisturbLabel and Directional DisturbLabel

Possible values: int from 0 to 100. Default: 20

--epochs - number of training epochs

Default: 100
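For example, to train with Directional DisturbLabel on CIFAR-10 for 100 epochs, the arguments above could be combined as follows (an illustrative invocation; adjust values as needed):

    python main_ddl.py --dataset=CIFAR10 --mode=ddl --alpha=20 --epochs=100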

2. Regression Task

DisturbValue

def noise_generator(x, alpha):
    # Draw tiny Gaussian noise (std 1e-8) for every target, then zero it out
    # at roughly a (1 - alpha) fraction of randomly chosen indices, so only
    # about an alpha fraction of the targets is actually disturbed.
    noise = torch.normal(0, 1e-8, size=(len(x), 1))
    noise[torch.randint(0, len(x), (int(len(x)*(1-alpha)),))] = 0

    return noise
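A hedged sketch of how noise_generator might be applied inside a training step, adding the generated noise to the regression targets before the loss is computed (tensor names here are illustrative, not necessarily those used in main_new.py):

    # x: (batch, n_features) inputs, y: (batch, 1) targets, alpha around 0.05-0.12
    y_disturbed = y + noise_generator(y, alpha=0.1).to(y.device)
    loss = torch.nn.functional.mse_loss(model(x), y_disturbed)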

DisturbError

def disturberror(outputs, values):
    # If the prediction error e = values - outputs falls within (-epsilon, epsilon),
    # shift the corresponding target by a quarter of that error.
    epsilon = 1e-8
    e = values - outputs
    for i in range(len(e)):
        if (e[i] < epsilon) & (e[i] >= 0):
            values[i] = values[i] + e[i] / 4
        elif (e[i] > -epsilon) & (e[i] < 0):
            values[i] = values[i] - e[i] / 4

    return values
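Similarly, a hedged sketch of applying disturberror to the targets using the current predictions before computing the loss (again with illustrative names; the predictions are detached so that the target adjustment does not enter the gradient):

    outputs = model(x)                                    # (batch, 1) predictions
    y_adjusted = disturberror(outputs.detach(), y.clone())
    loss = torch.nn.functional.mse_loss(outputs, y_adjusted)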

Datasets

  1. Boston: 506 instances, 13 features
  2. Bike Sharing: 731 instances, 13 features
  3. Air Quality(AQ): 9357 instances, 10 features
  4. make_regression(MR): 5000 instances, 30 features (synthetic regression data from sklearn's make_regression)
  5. Housing Price - Kaggle(HP): 1460 instances, 81 features
  6. Student Performance (SP): 649 instances, 13 features (20 categorical features were dropped)
  7. Superconductivity Dataset (SD): 21263 instances, 81 features
  8. Communities & Crime (CC): 1994 instances, 100 features
  9. Energy Prediction (EP): 19735 instances, 27 features

Experiment Setting

Model: MLP with 3 hidden layers (see the sketch below)

Results: averaged over 20 runs

Hyperparameters: selected via grid search
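A minimal sketch of a 3-hidden-layer MLP in the spirit of the setting above (layer widths and activations are assumptions, not taken from the repository):

    import torch.nn as nn

    class MLP(nn.Module):
        def __init__(self, in_features, hidden=64):
            super().__init__()
            # Three hidden layers; widths and activation are illustrative choices.
            self.net = nn.Sequential(
                nn.Linear(in_features, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x):
            return self.net(x)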

Usage

python main_new.py --de y --dataset "bike" --dv_annealing y --epoch 100 --T 80
python main_new.py --de y --dv y --dataset "bike" --epoch 100
python main_new.py --de y --l2 y --dataset "air" --epoch 100
python main_new.py --dv y --dv_annealing y --dataset "air" --epoch 100  # for the annealing setting, dv should be "y"

--dataset: 'bike', 'air', 'boston', 'housing', 'make_sklearn', 'superconduct', 'energy', 'crime', 'students'
--dropout, --dv (DisturbValue), --de (DisturbError), --l2, --dv_annealing: (string) y / n
--lr: (float)
--batch_size, --epoch, --T (cosine annealing period): (int)
--dv_annealing defaults: alpha_min = 0.05, alpha_max = 0.12, T_i = 80 (see the annealing sketch below)
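As referenced above, a hedged sketch of how the DisturbValue alpha could be annealed with a cosine schedule between alpha_min and alpha_max over a period of T_i epochs; the exact schedule in main_new.py may differ:

    import math

    def annealed_alpha(epoch, alpha_min=0.05, alpha_max=0.12, T_i=80):
        # Cosine schedule: starts at alpha_max, decays to alpha_min over T_i
        # epochs, then repeats (an assumption about the annealing shape).
        return alpha_min + 0.5 * (alpha_max - alpha_min) * (1 + math.cos(math.pi * (epoch % T_i) / T_i))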
Owner

Yongho Kim
Research Assistant