T-Fuzz: a fuzzing tool based on program transformation

Overview

T-Fuzz consists of 2 components:

  • Fuzzing tool (TFuzz): a fuzzing tool based on program transformation
  • Crash Analyzer (CrashAnalyzer): a tool that verifies whether crashes found in transformed programs are true bugs in the original program or not (coming soon).

OS support

The current version has been tested only on Ubuntu 16.04; please use this OS when trying to run the code.

Prerequisite

The T-Fuzz system is built on several open-source tools.

Installing radare2

$ git clone https://github.com/radare/radare2.git
$ cd radare2
$ ./sys/install.sh
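
If the build completed without errors, a quick sanity check is to print the installed radare2 version (r2 is radare2's command-line front end):

$ r2 -v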

Installing python libraries

Installing some dependent libraries

Note: to use apt-get build-dep, you need to uncomment the deb-src lines in your apt source file (/etc/apt/sources.list) and run apt-get update.
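
If you prefer to enable the deb-src entries from the command line, here is a minimal sketch, assuming the stock Ubuntu 16.04 /etc/apt/sources.list where the deb-src lines are commented out with a leading "# ":

$ sudo sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list
$ sudo apt-get update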

$ sudo apt-get install build-essential gcc-multilib libtool automake autoconf bison debootstrap debian-archive-keyring
$ sudo apt-get build-dep qemu-system
$ sudo apt-get install libacl1-dev

Installing pip and setting up virtualenv & virtualenvwrapper

$ sudo apt-get install python-pip python-virtualenv
$ pip install virtualenvwrapper

Add the following lines to your shell rc file (~/.bashrc or ~/.zshrc).

export WORKON_HOME=$HOME/.virtual_envs
source /usr/local/bin/virtualenvwrapper.sh
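
One way to append these lines and reload the configuration without opening an editor, assuming bash and ~/.bashrc (adjust for zsh):

$ cat >> ~/.bashrc <<'EOF'
export WORKON_HOME=$HOME/.virtual_envs
source /usr/local/bin/virtualenvwrapper.sh
EOF
$ source ~/.bashrc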

Creating a Python virtual environment

$ mkvirtualenv tfuzz-env

Installing dependent libraries

These commands will install all the dependent Python libraries for you.

$ workon tfuzz-env
$ pip install -r req.txt
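
As an optional sanity check (not part of the official setup), you can confirm the environment is usable by importing angr, one of the libraries T-Fuzz relies on (it shows up in the CrashAnalyzer logs further below):

$ workon tfuzz-env
$ python -c 'import angr; print(angr.__version__)'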

Fuzzing target programs with T-Fuzz

$ ./TFuzz --program <target_program> --work_dir <work_dir> --target_opts <target_program_opts>
Where

  • <target_program>: the path to the target program to fuzz
  • <work_dir>: the directory to save the results
  • <target_program_opts>: the options to pass to the target program; like AFL, use @@ as a placeholder for the files to mutate

Examples

  1. Fuzzing base64 with T-Fuzz
$ ./TFuzz  --program  target_programs/base64  --work_dir workdir_base64 --target_opts "-d @@"
  2. Fuzzing uniq with T-Fuzz
$ ./TFuzz  --program  target_programs/uniq  --work_dir workdir_uniq --target_opts "@@"
  3. Fuzzing md5sum with T-Fuzz
$ ./TFuzz  --program  target_programs/md5sum  --work_dir workdir_md5sum --target_opts "-c @@"
  4. Fuzzing who with T-Fuzz
$ ./TFuzz  --program  target_programs/who  --work_dir workdir_who --target_opts "@@"

Using CrashAnalyzer to verify crashes

The T-Fuzz CrashAnalyzer has been packaged in a Docker image; however, it does not yet work on all the binaries we tested, and we are still investigating the cause.

Here is how:

Run the following commands to pull and start our Docker image:

$ [sudo] docker pull tfuzz/tfuzz-test
$ [sudo] docker run  --security-opt seccomp:unconfined -it tfuzz/tfuzz-test  /usr/bin/zsh 
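
Anything generated inside the container is lost when it exits. If you want recovered inputs to survive on the host, a hedged variant of the same command mounts a host directory into the container (the tfuzz-out and /root/out paths are only examples, not something the image requires):

$ mkdir -p tfuzz-out
$ [sudo] docker run --security-opt seccomp:unconfined -v $(pwd)/tfuzz-out:/root/out -it tfuzz/tfuzz-test /usr/bin/zsh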

In the container:

There are 3 directories:

  • release: contains the code and the built LAVA-M binaries
  • results: contains some results we found on the LAVA-M dataset
  • radare2: a program used by T-Fuzz

Currently, T-Fuzz may not work because the tracer occasionally crashes, and the CrashAnalyzer cannot handle all results; some cases, however, can be recovered.

For example:

To verify bugs in base64, first go to release and check out the ca_base64 branch:

$ cd release
$ git checkout ca_base64
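
ca_base64 is the branch used in this walkthrough; to see which other verification branches the release repository ships (we assume they follow the same ca_<program> naming, but check for yourself), list them first:

$ git branch -a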

Then we use a transformed program to recover the crash in the original program:

  1. Choose a transformed program and run it on the input found by a fuzzer:
$ cd ~
$ ./results/ca_base64/554/base64_tfuzz_28/base64_tfuzz_28 -d ./results/ca_base64/554/crashing_inputs_from/results_saved_0_from
[1]    131 segmentation fault (core dumped)  ./results/ca_base64/554/base64_tfuzz_28/base64_tfuzz_28 -d
  1. Recover an input from this transformed program and crashing input
). Re-hooking. WARNING | 2018-12-04 04:28:23,228 | angr.project | Address is already hooked, during hook(0x90dd000, ). Re-hooking. WARNING | 2018-12-04 04:28:23,229 | angr.simos.linux | Tracer has been heavily tested only for CGC. If you find it buggy for Linux binaries, we are sorry! Adding = 65) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_0_0_8 <= 90)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_0_0_8 >= 97) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_0_0_8 <= 122)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_0_0_8 >= 48) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_0_0_8 <= 57)), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_0_0_8 == 43), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_0_0_8 == 47))> Adding = 65) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_1_1_8 <= 90)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_1_1_8 >= 97) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_1_1_8 <= 122)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_1_1_8 >= 48) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_1_1_8 <= 57)), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_1_1_8 == 43), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_1_1_8 == 47))> Adding = 65) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_2_2_8 <= 90)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_2_2_8 >= 97) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_2_2_8 <= 122)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_2_2_8 >= 48) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_2_2_8 <= 57)), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_2_2_8 == 43), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_2_2_8 == 47))> Adding = 65) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_3_3_8 <= 90)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_3_3_8 >= 97) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_3_3_8 <= 122)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_3_3_8 >= 48) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_3_3_8 <= 57)), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_3_3_8 == 43), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_3_3_8 == 47))> results saved to /root/base64_result/recover_0 ">
$ ./release/CrashAnalyzer  --tprogram ./results/ca_base64/554/base64_tfuzz_28/base64_tfuzz_28 --target_opts "-d @@" --crash_input ./results/ca_base64/554/crashing_inputs_from/results_saved_0_from --result_dir base64_result --save_to recover
WARNING | 2018-12-04 04:28:22,350 | angr.analyses.disassembly_utils | Your verison of capstone does not support MIPS instruction groups.
Trying /root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from
WARNING | 2018-12-04 04:28:23,228 | angr.project | Address is already hooked, during hook(0x9021cd0, 
        
         ). Re-hooking.
WARNING | 2018-12-04 04:28:23,228 | angr.project | Address is already hooked, during hook(0x90dd000, 
         
          ). Re-hooking.
WARNING | 2018-12-04 04:28:23,229 | angr.simos.linux | Tracer has been heavily tested only for CGC. If you find it buggy for Linux binaries, we are sorry!
Adding 
          
           = 65) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_0_0_8 <= 90)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_0_0_8 >= 97) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_0_0_8 <= 122)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_0_0_8 >= 48) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_0_0_8 <= 57)), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_0_0_8 == 43), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_0_0_8 == 47))>
Adding 
           
            = 65) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_1_1_8 <= 90)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_1_1_8 >= 97) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_1_1_8 <= 122)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_1_1_8 >= 48) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_1_1_8 <= 57)), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_1_1_8 == 43), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_1_1_8 == 47))>
Adding 
            
             = 65) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_2_2_8 <= 90)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_2_2_8 >= 97) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_2_2_8 <= 122)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_2_2_8 >= 48) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_2_2_8 <= 57)), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_2_2_8 == 43), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_2_2_8 == 47))> Adding 
             
              = 65) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_3_3_8 <= 90)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_3_3_8 >= 97) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_3_3_8 <= 122)), ((file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_3_3_8 >= 48) && (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_3_3_8 <= 57)), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_3_3_8 == 43), (file_/root/results/ca_base64/554/crashing_inputs_from/results_saved_0_from_9_3_3_8 == 47))> results saved to /root/base64_result/recover_0 
             
            
           
          
         
        

Then /root/base64_result/recover_0 is generated; we can use it to trigger a crash in the original program.

  3. Verify the input by running the generated input on the original program:
$ ./results/base64 -d base64_result/recover_0 
Successfully triggered bug 554, crashing now!
Successfully triggered bug 554, crashing now!
Successfully triggered bug 554, crashing now!
[1]    177 segmentation fault (core dumped)  ./results/base64 -d base64_result/recover_0
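
If you prefer a programmatic check over reading the shell's message, the exit status of the run is informative: a process killed by SIGSEGV exits with status 128 + 11 = 139 (a generic shell sketch, not specific to T-Fuzz):

$ ./results/base64 -d base64_result/recover_0; echo "exit status: $?"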