Disturbing Target Values for Neural Network regularization: attacking the loss layer to prevent overfitting

Overview

This repository implements target-disturbing regularization methods: DisturbLabel and Directional DisturbLabel for classification, and DisturbValue and DisturbError for regression.

1. Classification Task

PyTorch implementation of DisturbLabel: Regularizing CNN on the Loss Layer (CVPR 2016), extended with the Directional DisturbLabel method.

This classification code is built on top of the https://github.com/amirhfarzaneh/disturblabel-pytorch/blob/master/README.md project and uses the ResNet-18 implementation from https://github.com/huyvnphan/PyTorch_CIFAR10.

Directional DisturbLabel

  # Directional DisturbLabel: disturb labels only for samples the model
  # already predicts confidently for the true class.
  if args.mode == 'ddl' or args.mode == 'ddldr':
      out = F.softmax(output, dim=1)
      # L2-normalize the softmax output per sample
      norm = torch.norm(out, dim=1)
      out = out / norm[:, None]
      # collect indices whose normalized score for the true class exceeds 0.5
      idx = []
      for i in range(len(out)):
          if out[i, target[i]] > .5:
              idx.append(i)

      if len(idx) > 0:
          target[idx] = disturb(target[idx]).to(device)
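
The disturb call above is the vanilla DisturbLabel step. A minimal sketch of such a helper is given below, assuming the standard DisturbLabel formulation (each label is replaced, with probability alpha/100, by a class drawn uniformly at random); the function signature and defaults are illustrative and may differ from this repository's code.

import torch

def disturb(target, alpha=20, num_classes=10):
    # With probability alpha/100, replace each label by a class drawn
    # uniformly at random from all num_classes classes (possibly the same one).
    target = target.clone().cpu()
    mask = torch.rand(len(target)) < alpha / 100.0
    target[mask] = torch.randint(0, num_classes, (int(mask.sum().item()),))
    return target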

Usage

python main_ddl.py --mode=dl --alpha=20

Most important arguments

--dataset - which dataset to use

Possible values:

value     dataset
MNIST     MNIST
FMNIST    Fashion MNIST
CIFAR10   CIFAR-10
CIFAR100  CIFAR-100
ART       Art Images: Drawing/Painting/Sculptures/Engravings
INTEL     Intel Image Classification

Default: MNIST

--mode - regularization method applied

Possible values:

value    method
noreg    Without any regularization
dl       Vanilla DisturbLabel
ddl      Directional DisturbLabel
dropout  Dropout
dldr     DisturbLabel + Dropout
ddldl    Directional DisturbLabel + Dropout

Default: ddl

--alpha - alpha for vanilla DisturbLabel and Directional DisturbLabel

Possible values: int from 0 to 100. Default: 20

--epochs - number of training epochs

Default: 100

2. Regression Task

DisturbValue

import torch

def noise_generator(x, alpha):
    # Draw small Gaussian noise (std 1e-8) for every target, then zero it out
    # at roughly a (1 - alpha) fraction of randomly chosen positions, so only
    # about an alpha fraction of the targets is actually disturbed.
    noise = torch.normal(0, 1e-8, size=(len(x), 1))
    noise[torch.randint(0, len(x), (int(len(x) * (1 - alpha)),))] = 0

    return noise
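
During training, the generated noise would be added to the regression targets before the loss is computed. A minimal sketch of that step, assuming an MSE loss and batch tensors named x and y (illustrative names, not necessarily the repository's training loop):

import torch.nn.functional as F

# y: regression targets of shape (batch_size, 1)
y_disturbed = y + noise_generator(y, alpha).to(y.device)
loss = F.mse_loss(model(x), y_disturbed)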

DisturbError

def disturberror(outputs, values):
    # If a prediction is within epsilon of its target, nudge the target by a
    # quarter of the residual so the model does not fit that point exactly.
    epsilon = 1e-8
    e = values - outputs
    for i in range(len(e)):
        if (e[i] < epsilon) & (e[i] >= 0):
            values[i] = values[i] + e[i] / 4
        elif (e[i] > -epsilon) & (e[i] < 0):
            values[i] = values[i] - e[i] / 4

    return values
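
DisturbError would be applied to the targets inside each training step, after the forward pass and before the loss. A minimal sketch, again with illustrative names and an assumed MSE loss:

import torch.nn.functional as F

outputs = model(x)  # predictions for the batch
y_disturbed = disturberror(outputs.detach(), y.clone())
loss = F.mse_loss(outputs, y_disturbed)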

Datasets

  1. Boston: 506 instances, 13 features
  2. Bike Sharing: 731 instances, 13 features
  3. Air Quality (AQ): 9357 instances, 10 features
  4. make_regression (MR): 5000 instances, 30 features (random sample for regression)
  5. Housing Price - Kaggle (HP): 1460 instances, 81 features
  6. Student Performance (SP): 649 instances, 13 features (20 categorical features were dropped)
  7. Superconductivity Dataset (SD): 21263 instances, 81 features
  8. Communities & Crime (CC): 1994 instances, 100 features
  9. Energy Prediction (EP): 19735 instances, 27 features

Experiment Setting

Model: an MLP with 3 hidden layers (sketched below)

Results: averaged over 20 runs

Hyperparameters: selected via grid search
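
A minimal sketch of a 3-hidden-layer MLP of the kind described above; the layer widths and activation are assumptions, since the source does not specify them.

import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_features, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),   # single regression output
        )

    def forward(self, x):
        return self.net(x)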

Usage

python main_new.py --de y --dataset "bike" --dv_annealing y --epoch 100 --T 80
python main_new.py --de y --dv y --dataset "bike" --epoch 100
python main_new.py --de y --l2 y --dataset "air" --epoch 100
python main_new.py --dv y --dv_annealing y --dataset "air" --epoch 100  # for annealing, dv should be set to "y"

--dataset: 'bike', 'air', 'boston', 'housing', 'make_sklearn', 'superconduct', 'energy', 'crime', 'students'
--dropout, --dv (DisturbValue), --de (DisturbError), --l2, --dv_annealing: (string) y / n
--lr: (float) learning rate
--batch_size, --epoch, --T (cosine annealing period): (int)
--dv_annealing defaults: alpha_min = 0.05, alpha_max = 0.12, T_i = 80 (cosine schedule sketched below)
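
With --dv_annealing y, the DisturbValue ratio alpha would follow a cosine schedule between alpha_min and alpha_max with period T_i. A sketch of such a schedule under the defaults above; the exact formula in main_new.py may differ.

import math

def annealed_alpha(epoch, alpha_min=0.05, alpha_max=0.12, T_i=80):
    # Cosine annealing: start near alpha_max and decay toward alpha_min
    # over T_i epochs, then restart.
    return alpha_min + 0.5 * (alpha_max - alpha_min) * (1 + math.cos(math.pi * (epoch % T_i) / T_i))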
Owner

Yongho Kim, Research Assistant