Overview

Angora

Angora is a mutation-based, coverage-guided fuzzer. Its main goal is to increase branch coverage by solving path constraints without symbolic execution.

Published Work

arXiv: Angora: Efficient Fuzzing by Principled Search, IEEE S&P 2018.

Building Angora

Build Requirements

  • Linux-amd64 (Tested on Ubuntu 16.04/18.04 and Debian Buster)
  • Rust stable (>= 1.31), which can be obtained via rustup
  • LLVM 4.0.0 - 7.1.0: install with PREFIX=/path-to-install ./build/install_llvm.sh
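
For example, a possible invocation of the install script; /opt/llvm is only an illustrative prefix, and it is assumed the script places clang directly under the given PREFIX:

PREFIX=/opt/llvm ./build/install_llvm.sh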

Environment Variables

Append the following entries to your shell configuration file (e.g., ~/.bashrc or ~/.zshrc).

export PATH=/path-to-clang/bin:$PATH
export LD_LIBRARY_PATH=/path-to-clang/lib:$LD_LIBRARY_PATH
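
To confirm the intended clang is picked up first on PATH, a quick check with ordinary shell commands (output depends on your installation):

which clang
clang --version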

Fuzzer Compilation

The build script resolves most dependencies and sets up the runtime environment.

./build/build.sh

System Configuration

As with AFL, the kernel must write crashes as plain core files rather than piping them to an external handler.

echo core | sudo tee /proc/sys/kernel/core_pattern
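
To verify the setting took effect, the file should contain just the word core:

cat /proc/sys/kernel/core_pattern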

Test

Verify that Angora was built successfully.

cd /path-to-angora/tests
./test.sh mini

Running Angora

Build Target Program

Angora compiles the target program into two separate binaries, each with its own instrumentation. Using an autoconf-based program as an example, the required steps are shown below, followed by a minimal single-file sketch.

# Use the instrumenting compilers
CC=/path/to/angora/bin/angora-clang \
CXX=/path/to/angora/bin/angora-clang++ \
LD=/path/to/angora/bin/angora-clang \
PREFIX=/path/to/target/directory \
./configure --disable-shared

# Build with taint tracking support 
USE_TRACK=1 make -j
make install

# Save the compiled target binary into a new directory
# and rename it with .taint postfix, such as uniq.taint

# Build with light instrumentation support
make clean
USE_FAST=1 make -j
make install

# Save the compiled binary into the directory previously
# created and rename it with .fast postfix, such as uniq.fast

If this approach fails to build your target, try wllvm or gllvm as described in Build a target program.

We have also implemented taint analysis with libdft64 as an alternative to DFSan (see Use libdft64 for taint tracking).

Fuzzing

./angora_fuzzer -i input -o output -t path/to/taint/program -- path/to/fast/program [argv]
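
As a concrete, illustrative invocation for the uniq.taint / uniq.fast binaries built above, assuming the AFL-style @@ placeholder marks where the input file path is substituted (omit it for programs that read from stdin):

./angora_fuzzer -i input -o output -t ./uniq.taint -- ./uniq.fast @@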

For more information, please refer to the documentation under the docs/ directory.

Comments
  • Unable to compile lavam programs correctly

    Hello Angora authors,

    I'm trying to reproduce the lavam evaluation within Magma's infrastructure. However, I think I am encountering the following two issues. Could you help me check whether I'm doing anything wrong?

    Thank you in advance!

    The two issues are as follows:

    1. Angora cannot find any bugs, while AFLplusplus easily discovers some within a few minutes. In the log files Angora reports: "Multiple inconsistent warnings. It caused by the fast and track programs has different behaviors. If most constraints are inconsistent, ensure they are compiled with the same environment. Otherwise, please report us."
    2. For the who target, AFLplusplus can only find fewer than 20 bugs after running for 5 hours; for the other targets it finds the numbers of bugs reported in your paper.

    You can find the scripts I use to compile and run the fuzzing campaigns here. Basically, the lavam programs are compiled with fuzzers/aflplusplus/instrument.sh and fuzzers/angora/instrument.sh, which set up some configuration and execute targets/lavam/build.sh.
    In targets/lavam/LAVAM you can find the patched source code following your instructions.

    To launch the fuzzing campaigns, cd into tools/captain and run ./run.sh run_lavamrc.
    run_lavamrc is the config file for the campaign. It creates a working directory in ~/lavam-results, builds Docker containers, and starts fuzzing with fuzzers/aflplusplus/run.sh and fuzzers/angora/run.sh. The fuzzing results are stored in ~/lavam-results/ar as tarballs.

    Please do let me know if you need any additional information.

    Spencer

    opened by spencerwuwu 1
  • Fix up compiler warnings

    • Correct signedness for c-strings in angora-clang
    • Const-correctness throughout
    • Move #[link] attribute to extern block

    Fixes all warnings emitted by clang version 14.

    opened by bossmc 0
  • Upgrade to GitHub-native Dependabot

    Dependabot Preview will be shut down on August 3rd, 2021. In order to keep getting Dependabot updates, please merge this PR and migrate to GitHub-native Dependabot before then.

    Dependabot has been fully integrated into GitHub, so you no longer have to install and manage a separate app. This pull request migrates your configuration from Dependabot.com to a config file, using the new syntax. When merged, we'll swap out dependabot-preview (me) for a new dependabot app, and you'll be all set!

    With this change, you'll now use the Dependabot page in GitHub, rather than the Dependabot dashboard, to monitor your version updates, and you'll configure Dependabot through the new config file rather than a UI.

    If you've got any questions or feedback for us, please let us know by creating an issue in the dependabot/dependabot-core repository.

    Learn more about migrating to GitHub-native Dependabot

    Please note that regular @dependabot commands do not work on this pull request.

    dependencies 
    opened by dependabot-preview[bot] 1
  • Angora compile IR

    Does Angora support compiling from an LLVM- or BAP-derived intermediate representation?

    I am trying to analyze a pre-compiled binary but couldn't figure out how:

     INFO  angora::fuzz_main > CommandOpt { mode: LLVM, id: 0, main: ("/input/azorult2", []), track: ("/input/azorult2", []), tmp_dir: "./output/bar/tmp", out_file: "./output/bar/tmp/cur_input", forksrv_socket_path: "./output/bar/tmp/forksrv_socket", track_path: "./output/bar/tmp/track", is_stdin: true, search_method: Gd, mem_limit: 200, time_limit: 1, is_raw: true, uses_asan: false, ld_library: "$LD_LIBRARY_PATH:/clang+llvm/lib", enable_afl: true, enable_exploitation: true }
    thread 'main' panicked at 'The program is not complied by Angora', fuzzer/src/check_dep.rs:55:9
    
    opened by aug2uag 1
  • Update rand requirement from 0.7 to 0.8

    Updates the requirements on rand to permit the latest version.

    Changelog

    Sourced from rand's changelog.

    [0.8.0] - 2020-12-18

    Platform support

    • The minimum supported Rust version is now 1.36 (#1011)
    • getrandom updated to v0.2 (#1041)
    • Remove wasm-bindgen and stdweb feature flags. For details of WASM support, see the getrandom documentation. (#948)
    • ReadRng::next_u32 and next_u64 now use little-Endian conversion instead of native-Endian, affecting results on Big-Endian platforms (#1061)
    • The nightly feature no longer implies the simd_support feature (#1048)
    • Fix simd_support feature to work on current nightlies (#1056)

    Rngs

    • ThreadRng is no longer Copy to enable safe usage within thread-local destructors (#1035)
    • gen_range(a, b) was replaced with gen_range(a..b). gen_range(a..=b) is also supported. Note that a and b can no longer be references or SIMD types. (#744, #1003)
    • Replace AsByteSliceMut with Fill and add support for [bool], [char], [f32], [f64] (#940)
    • Restrict rand::rngs::adapter to std (#1027; see also #928)
    • StdRng: add new std_rng feature flag (enabled by default, but might need to be used if disabling default crate features) (#948)
    • StdRng: Switch from ChaCha20 to ChaCha12 for better performance (#1028)
    • SmallRng: Replace PCG algorithm with xoshiro{128,256}++ (#1038)

    Sequences

    • Add IteratorRandom::choose_stable as an alternative to choose which does not depend on size hints (#1057)
    • Improve accuracy and performance of IteratorRandom::choose (#1059)
    • Implement IntoIterator for IndexVec, replacing the into_iter method (#1007)
    • Add value stability tests for seq module (#933)

    Misc

    • Support PartialEq and Eq for StdRng, SmallRng and StepRng (#979)
    • Added a serde1 feature and added Serialize/Deserialize to UniformInt and WeightedIndex (#974)
    • Drop some unsafe code (#962, #963, #1011)
    • Reduce packaged crate size (#983)
    • Migrate to GitHub Actions from Travis+AppVeyor (#1073)

    Distributions

    • Alphanumeric samples bytes instead of chars (#935)
    • Uniform now supports char, enabling rng.gen_range('A'..='Z') (#1068)
    • Add UniformSampler::sample_single_inclusive (#1003)

    Weighted sampling

    • Implement weighted sampling without replacement (#976, #1013)
    • rand::distributions::alias_method::WeightedIndex was moved to rand_distr::WeightedAliasIndex. The simpler alternative rand::distribution::WeightedIndex remains. (#945)
    • Improve treatment of rounding errors in WeightedIndex::update_weights (#956)
    • WeightedIndex: return error on NaN instead of panic (#1005)

    Documentation

    • Document types supported by random (#994)

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
    • @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

    Additionally, you can set the following in your Dependabot dashboard:

    • Update frequency (including time of day and day of week)
    • Pull request limits (per update run and/or open at any time)
    • Automerge options (never/patch/minor, and dev/runtime dependencies)
    • Out-of-range updates (receive only lockfile updates, if desired)
    • Security updates (receive only security updates, if desired)
    dependencies 
    opened by dependabot-preview[bot] 0
  • showmap: added tool for displaying coverage data

    Analogous to afl-showmap. Logs code coverage information to a file (in the same format as afl-showmap).

    This is my first time writing Rust, so I hope that it's okay!

    opened by adrianherrera 0
Releases (1.3.0)
  • 1.3.0 (Apr 13, 2022)

    • Support LLVM 11/12
    • Tested with Rust 1.6.* and Ubuntu 20.04
    • Fix issues
      • getc model
      • https://github.com/AngoraFuzzer/Angora/commit/b31af93bb7401a296af0ddaa7b80eaaed7f73415
      • https://github.com/AngoraFuzzer/Angora/issues/86
    • New PRs
  • 1.2.2 (Jul 17, 2019)

    • Implementation of the never-zero counter. The idea is from Marc and Heiko in AFLplusplus: https://github.com/vanhauser-thc/AFLplusplus/blob/master/llvm_mode/README.neverzero

    • Add inst_ratio (issue #67)

    • Fix ASan compatibility: do not instrument functions whose names start with "asan.module"

  • 1.2.1 (Jun 14, 2019)

  • 1.2.0 (May 23, 2019)
