Tzer: TVM implementation of "Coverage-Guided Tensor Compiler Fuzzing with Joint IR-Pass Mutation" (OOPSLA'22).


Coverage-Guided Tensor Compiler Fuzzing with Joint IR-Pass Mutation

This is the source code repo for "Coverage-Guided Tensor Compiler Fuzzing with Joint IR-Pass Mutation" (Conditionally accepted by OOPSLA'22).

Artifact

Please check here for detailed documentation of the artifact prepared for OOPSLA'22.

Reproduce Bugs

As of submission, Tzer had detected 40 bugs in TVM, with 30 confirmed and 24 fixed (merged into the latest branch). Due to the anonymous review policy of OOPSLA, links to the actual bug reports will be provided after the review process.

We provide strong reproducibility for our work. To reproduce all bugs, all you need is a single click on Open In Colab in your browser. Since some bugs require complex GPU settings to trigger, the bugs are packaged in a Google Colab environment to minimize hardware and software setup effort (no GPU required, just a browser!).

Quick Start

You can easily start using Tzer with Docker:

docker run --rm -it tzerbot/oopsla

# Inside the image.
cd tzer
python3 src/main_tir.py --fuzz-time 10     --report-folder ten-minute-fuzz
#                       run for 10 min.    bugs in folder `ten-minute-fuzz`


Report folder contents:
  • cov_by_time.txt: a CSV file whose columns are "time" (in seconds) and edge coverage;
  • ${BUG_TYPE}_${BUG_ID}.error_message.txt: error message snapshot of failures;
  • ${BUG_TYPE}_${BUG_ID}.ctx: context data to reproduce bugs (stored as a Pickle; see config.py and the sketch after this list);
  • meta.txt: metadata including the git version of TVM and the experiment time;
  • tir_by_time.pickle: generated <F, P> (i.e., TIR and passes) files (if TIR_REC=1 is set);
  • valid_seed_new_cov_count.txt: the number of generated valid tests with new coverage.
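For reference, here is a minimal Python sketch of inspecting these files offline. It assumes the two-column CSV layout and plain-pickle format described above (the exact .ctx schema lives in config.py) and reuses the ten-minute-fuzz folder from the quick-start command:

# A sketch of inspecting a report folder; assumptions as stated above.
import csv
import glob
import pickle

with open("ten-minute-fuzz/cov_by_time.txt") as f:
    trend = [(float(t), int(cov)) for t, cov in csv.reader(f)]
print(f"final edge coverage: {trend[-1][1]} at {trend[-1][0]:.0f}s")

for path in glob.glob("ten-minute-fuzz/*.ctx"):
    with open(path, "rb") as f:
        ctx = pickle.load(f)  # bug-reproducing context (unpickling may need the tzer sources on PYTHONPATH)
    print(path, type(ctx).__name__)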
Main command-line options

Command-line options (appended to the command):

  • --fuzz-time: time budget for fuzzing (in minutes);
  • --tolerance: the parameter $N_{max}$ in the paper (controls the interleaving of IR and pass mutation);
  • --report-folder: path to store results (e.g., the coverage trend);

Environment variables controlling the algorithm options (prepended to the command; see the sketch after this list):

  • PASS=1 to enable pass mutation;
  • NO_SEEDS=1 to disable initial seeds (start from an empty function);
  • NO_COV=1 to disable the coverage feedback;
  • TIR_REC=1 to record generated TIR files (for evaluating the non-coverage version).
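As an illustration, these options can also be composed programmatically; the Python sketch below is equivalent to running PASS=1 LOW=1 python3 src/main_tir.py ... from a shell (the flag values are illustrative):

# A sketch of launching a fuzzing run with env-var algorithm options.
import os
import subprocess

env = dict(os.environ, PASS="1", LOW="1")  # pass mutation + domain-specific IR mutation
subprocess.run(
    ["python3", "src/main_tir.py",
     "--fuzz-time", "10",             # minutes
     "--report-folder", "pass-fuzz",  # illustrative folder name
     "--tolerance", "4"],             # N_max from the paper
    env=env,
    check=True,
)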
Reproduce the ablation study:
# (1): General IR Mutation (No Coverage)*
TVM_HOME=$TVM_NO_COV_HOME PYTHONPATH=$TVM_HOME/python TIR_REC=1 NO_COV=1 python3 src/main_tir.py --fuzz-time 240 --report-folder ablation-1
python3 src/get_cov.py --folders ablation-1 # Evaluate samples on instrumented TVM to get coverage results.

# (2): (1) + Coverage Guidance
python3 src/main_tir.py --fuzz-time 240 --report-folder ablation-2

# (3): (2) + Domain-Specific IR Mutation
LOW=1 python3 src/main_tir.py --fuzz-time 240 --report-folder ablation-3

# (4): (3) + Random Pass Mutation
PASS=1 RANDOM_PASS=1 LOW=1 python3 src/main_tir.py --fuzz-time 240 --report-folder ablation-4

# (5): (3) + Evolutionary IR-Pass Mutation
# aka: the best Tzer! Please use this command if you want to compare Tzer with your own system~
PASS=1 LOW=1 python3 src/main_tir.py --fuzz-time 240 --report-folder ablation-5 --tolerance 4

Note that fuzzing is performance-sensitive: to obtain reliable results, the evaluation should be conducted in a "clean" environment (e.g., close as many irrelevant processes as possible). To determine how "clean" your environment is, you can log the load average of your Linux system (see the sketch below); the expected load average is around 1 or lower (as in our experiments).
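For instance, a tiny sketch that logs the load average once per minute during an evaluation run (Unix only; the 1.5 warning threshold is an arbitrary choice):

# A sketch of logging the system load average during evaluation.
import os
import time

while True:
    one, five, fifteen = os.getloadavg()  # the same numbers `uptime` reports
    print(f"load avg: {one:.2f} (1m) / {five:.2f} (5m) / {fifteen:.2f} (15m)")
    if one > 1.5:  # arbitrary threshold; tune for your machine
        print("warning: the environment may be too busy for reliable fuzzing")
    time.sleep(60)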

Installation

Expected requirements:
  • Hardware: 8GB RAM; 256GB storage; x86 CPU; a good network connection to GitHub; Docker (for Docker-based installation)
  • Software: Linux (tested under Manjaro and Ubuntu 20.04; other Linux distributions should also work)

We provide 3 methods for installing Tzer:

Docker Hub (recommended, out-of-the-box!)

Directly run Tzer in a pre-built container image! Make sure you have Docker installed.

docker run --rm -it tzerbot/oopsla
Docker Build (10~20 min., for customized development)

Build Tzer in a Docker environment! Make sure you have Docker installed.

  1. git clone https://github.com/Tzer-AnonBot/tzer.git && cd tzer
  2. docker build --tag tzer-oopsla:eval .
  3. docker run --rm -it tzer-oopsla:eval
Manual Build (20~30 min., for customized development and native performance)
Build Tzer natively on your Linux:

Prepare dependencies:

# Arch Linux / Manjaro
sudo pacman -Syy
sudo pacman -S compiler-rt llvm llvm-libs clang cmake git python3
# Ubuntu
sudo apt update
sudo apt install -y libfuzzer-12-dev # If you fail, try "libfuzzer-11-dev", "-10-dev", ...
sudo apt install -y clang cmake git python3

Build TVM and Tzer:

git clone https://github.com/Tzer-AnonBot/tzer.git
cd tzer/tvm_cov_patch

# Build TVM with instrumentation
bash ./build_tvm.sh # If this fails, check the script for step-by-step instructions.
cd ..
# On success:
# - TVM with coverage is installed under `tvm_cov_patch/tvm`
# - TVM without coverage is under `tvm_cov_patch/tvm-no-cov`

# Install Python dependencies
python3 -m pip install -r requirements.txt

# Set up the TVM_HOME and PYTHONPATH env vars before using TVM and Tzer.
export TVM_HOME=$(realpath tvm_cov_patch/tvm)
export TVM_NO_COV_HOME=$(realpath tvm_cov_patch/tvm-no-cov)
export PYTHONPATH=$TVM_HOME/python
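To verify the setup, a quick sanity check (assuming the instrumented build succeeded and the variables above are exported):

# The instrumented TVM should import and expose the memcov coverage API.
import tvm
from tvm.contrib import coverage

print(tvm.__file__)          # should point into tvm_cov_patch/tvm
print(coverage.get_total())  # total number of instrumented CFG edges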

Extend Tzer

We implemented many reusable functionalities for future and open research! To easily implement other coverage-guided fuzzing algorithms for TVM, first install TVM with memcov by applying tvm_cov_patch/memcov4tvm.patch to TVM (see tvm_cov_patch/build_tvm.sh); you can then query the current coverage of TVM:

from tvm.contrib import coverage

print(coverage.get_now())   # number of CFG edges visited so far
print(coverage.get_total()) # total number of CFG edges

coverage.push() # store the current coverage snapshot on a stack and reset the hitmap (useful for multi-process scenarios)
coverage.pop()  # merge the top snapshot from the stack

Usage of the push-pop combo: sometimes the target program may crash, but we don't want the fuzzer itself to be brought down by the failure. You can therefore set up a "safe guard" as follows (see the sketch after these steps):

  1. push: save the current coverage snapshot and reset the coverage hitmap;
  2. spawn a sub-process to compile the target IR & passes with TVM;
  3. pop: merge the snapshot of the sub-process with the last stored snapshot (top of the stack) to obtain the complete coverage.

The latency of this combo is optimized to ~1ms thanks to bit-level optimizations.
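Putting the steps together, here is a minimal sketch of such a safe guard. It assumes fork-based multiprocessing (the Linux default) so that the child process writes to the same memcov hitmap as the parent; compile_candidate is a hypothetical user function that compiles one IR module with a pass sequence:

# A minimal sketch of the push-pop safe guard; assumptions as stated above.
import multiprocessing as mp
from tvm.contrib import coverage

def guarded_compile(compile_candidate, ir_mod, passes, timeout_s=60):
    coverage.push()  # 1. snapshot the current coverage and reset the hitmap
    proc = mp.Process(target=compile_candidate, args=(ir_mod, passes))
    proc.start()     # 2. compile in a sub-process so a crash cannot kill the fuzzer
    proc.join(timeout_s)
    if proc.is_alive():
        proc.terminate()  # treat hangs as failures; the fuzzer keeps running
        proc.join()
    coverage.pop()   # 3. merge the sub-process coverage with the stored snapshot
    return proc.exitcode  # non-zero exit codes indicate crashes/failures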

Cite Us

Please cite our paper if you find our contributions helpful. :-)

@inproceedings{tzer-2022,
  title={Coverage-Guided Tensor Compiler Fuzzing with Joint IR-Pass Mutation},
  author={Liu, Jiawei and Wei, Yuxiang and Yang, Sen and Deng, Yinlin and Zhang, Lingming},
  booktitle={Proceedings of the ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages, and Applications},
  year={2022}
}