Reverse engineer your PyTorch vision models, in style


🔍 Rover

Reverse engineer your CNNs, in style


Rover will help you break down your CNN and visualize the features from within the model. No need to write weirdly abstract code to visualize your model's features anymore.

💻 Usage

git clone https://github.com/Mayukhdeb/rover.git; cd rover

install requirements:

pip install -r requirements.txt

write a script (e.g. your_script.py) containing:

from rover import core
from rover.default_models import models_dict

core.run(models_dict = models_dict)

and then run the script with streamlit as:

$ streamlit run your_script.py

if everything goes right, you'll see something like:

You can now view your Streamlit app in your browser.

  Local URL: http://localhost:8501

🧙 Custom models

rover supports pretty much any PyTorch model that takes an input of shape [N, 3, H, W] (even segmentation models, VAEs, and all that fancy stuff), as long as the input uses ImageNet normalization.

import torchvision.models as models 
from rover import core

model = models.resnet34(pretrained=True)  ## or any other model (need not be from torchvision.models)

models_dict = {
    'my model': model,  ## add in any number of models :)
}

core.run(
    models_dict = models_dict
)
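
For example, here's a minimal sketch that puts a classifier and a segmentation model into the same dictionary (the model choices and dictionary keys are just illustrative; anything that accepts [N, 3, H, W] ImageNet-normalized inputs should work):

import torchvision.models as models 
from rover import core

models_dict = {
    ## any model taking [N, 3, H, W] ImageNet-normalized inputs works
    'resnet34': models.resnet34(pretrained=True),
    'fcn_resnet50 (segmentation)': models.segmentation.fcn_resnet50(pretrained=True),
}

core.run(models_dict = models_dict)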

🖼️ Channel objective

Optimizes a single channel from one of the selected layers; roughly, this amounts to the custom-objective sketch shown after the list below.

  • layer index: specifies which layer you want to use out of the layers selected.
  • channel index: specifies the exact channel which needs to be visualized.
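
Conceptually, picking a layer index i and a channel index c amounts to something like the custom objective sketched below (placeholder indices; the built-in implementation may differ, and the next section explains how objectives are written):

def custom_func(layer_outputs):
    ## sketch of the channel objective: maximize the mean activation
    ## of channel c of selected layer i (placeholder indices)
    i, c = 0, 12
    loss = layer_outputs[i][c].mean()
    return -loss  ## the returned value gets minimized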

🧙‍♂️ Writing your own objective

This is for the smarties who like to write their own objective function. The only constraint is that the function should be named custom_func.

Here's an example:

def custom_func(layer_outputs):
    '''
    layer_outputs is a list containing 
    the outputs (torch.tensor) of each layer you selected

    In this example we'll try to optimize the following:
    * the entire first layer -> layer_outputs[0].mean()
    * 20th channel of the 2nd layer -> layer_outputs[1][20].mean()
    '''
    loss = layer_outputs[0].mean() + layer_outputs[1][20].mean()
    return -loss
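
The example returns -loss because the returned value is minimized, so activations you want to maximize go in with a negative sign. As another sketch, you could maximize the first selected layer while suppressing a specific channel of the second (the channel index and weight below are placeholders):

def custom_func(layer_outputs):
    ## maximize the mean of the first selected layer while suppressing
    ## channel 7 of the second selected layer (placeholder values)
    loss = layer_outputs[0].mean() - 0.5 * layer_outputs[1][7].mean()
    return -loss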

Running on Google Colab

Check out this notebook. I'll also include the instructions here just in case.

Clone the repo + install dependencies

!git clone https://github.com/Mayukhdeb/rover.git
!pip install torch-dreams --quiet
!pip install streamlit --quiet

Navigate into the repo

import os 
os.chdir('rover')

Write your script from a cell using the %%writefile magic. Here I wrote it into test.py:

%%writefile  test.py

from rover import core
from rover.default_models import models_dict

core.run(models_dict = models_dict)

Run script on a thread

import threading

proc = threading.Thread(target=os.system, args=('streamlit run test.py',))
proc.start()

Download ngrok:

!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip -o ngrok-stable-linux-amd64.zip

Start an ngrok tunnel to the streamlit port (8501)

get_ipython().system_raw('./ngrok http 8501 &')

Get your URL where rover is hosted

!curl -s http://localhost:4040/api/tunnels | python3 -c \
    "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"

💻 Args

  • width (int, optional): Width of the image to be optimized
  • height (int, optional): Height of the image to be optimized
  • iters (int, optional): Number of iterations; higher values give stronger visualizations
  • lr (float, optional): Learning rate
  • rotate (deg) (int, optional): Maximum rotation (in degrees) applied by the default transforms
  • scale max (float, optional): Maximum image scaling factor
  • scale min (float, optional): Minimum image scaling factor
  • translate (x) (float, optional): Maximum translation factor in the x direction
  • translate (y) (float, optional): Maximum translation factor in the y direction
  • weight decay (float, optional): Weight decay for the default optimizer; helps prevent high-frequency noise
  • gradient clip (float, optional): Maximum value of the gradient norm

Run locally

Clone the repo

git clone https://github.com/Mayukhdeb/rover.git

install requirements

pip install -r requirements.txt

showtime

streamlit run test.py