Manifold-SCA

Research Artifact of USENIX Security 2022 Paper: Automated Side Channel Analysis of Media Software with Manifold Learning

The repo is organized as:

📂manifold-sca
 ┣ 📂vulnerability
 ┃ ┣ 📂contribution
 ┃ ┣ 📜{dataset}-{program}-count.json
 ┃ ┗ 📜{program}.dis
 ┣ 📂code
 ┃ ┣ 📂SCA
 ┃ ┣ 📂tools
 ┃ ┗ 📂pp
 ┣ 📂audio
 ┗ 📂output

Code

We release our code in the code folder. The implementation of our framework is in code/SCA, and the tools we use to process input/output data are in code/tools. To launch Prime+Probe, use the code in code/pp.

Attack

To prepare the training data for learning the data manifold, you first need to instrument the binary with the released pintool code/tools/pinatrace.cpp. When the binary processes a media input, this produces a sequence of instruction address: accessed address records. You then need to fold the sequence of accessed addresses into a matrix and convert the matrix into the appropriate format (e.g., a tensor or a numpy array).
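
As a rough illustration of this preprocessing (not the released tooling), the sketch below assumes the pintool writes one instruction address: accessed address record per line, keeps only the accessed addresses, and folds them into a 256x256 numpy matrix; the record format and the matrix size are assumptions.

```python
import numpy as np

# Hypothetical preprocessing sketch; the "insn_addr: mem_addr" line format and the
# 256x256 matrix size are assumptions, not the released tooling.
def trace_to_matrix(trace_path, side=256):
    accessed = []
    with open(trace_path) as f:
        for line in f:
            parts = line.strip().split(":")
            if len(parts) != 2:
                continue  # skip header/footer lines emitted by the pintool
            accessed.append(int(parts[1].strip(), 16))  # keep the accessed address only
    flat = np.zeros(side * side, dtype=np.float64)
    n = min(len(accessed), side * side)
    flat[:n] = accessed[:n]          # truncate or zero-pad to side*side records
    return flat.reshape(side, side)  # fold into a matrix

matrix = trace_to_matrix("trace.out")
# Convert further as needed, e.g. torch.from_numpy(matrix).float() for a tensor.
```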

We release the scripts for training the framework in code/SCA. Before training, you first need to customize the data paths in each script. Training runs for 100 epochs and takes less than 24 hours on one Nvidia GeForce RTX 2080 GPU.

Localize

Recall that we localize vulnerabilities by pinpointing the records in a trace that contribute most to reconstructing the media data. To perform localization, you therefore first need to train the framework as described above.

After training the framework, run code/localize.py and code/pinpoint.py to localize records in a side channel trace. Note that this step yields several accessed addresses together with their indexes in the trace. You then need to recover the corresponding instruction addresses from the instrumentation output generated when preparing the training data, for example as sketched below.
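
A minimal sketch of this last step follows; the helper names are ours, and we assume the pintool output keeps one instruction address: accessed address record per line, in the same order as the folded trace.

```python
# Hypothetical mapping from pinpointed trace indexes back to instruction addresses.
# Assumes the pintool output has one "insn_addr: mem_addr" record per line, in the
# same order as the trace fed to the framework.
def load_insn_addrs(trace_path):
    insn_addrs = []
    with open(trace_path) as f:
        for line in f:
            parts = line.strip().split(":")
            if len(parts) == 2:
                insn_addrs.append(parts[0].strip())
    return insn_addrs

def indexes_to_insns(trace_path, pinpointed_indexes):
    insn_addrs = load_insn_addrs(trace_path)
    return [insn_addrs[i] for i in pinpointed_indexes if i < len(insn_addrs)]

# Example: indexes_to_insns("trace.out", [12, 873, 4096])
```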

We release the localized vulnerabilities in the vulnerability folder. In vulnerability/contribution, we list the instruction addresses of the records that contribute most to the reconstruction of media data. We further map the pinpointed instructions back to their enclosing functions; these functions are regarded as side-channel-vulnerable. The results are listed in {dataset}-{program}-count.json, where a higher count indicates a higher likelihood of being vulnerable.
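
Assuming each {dataset}-{program}-count.json maps a function name to its count (an assumption about the file layout), the most likely vulnerable functions can be ranked with a few lines of Python; the concrete file name below is a placeholder.

```python
import json

# Placeholder file name; substitute the actual {dataset}-{program}-count.json.
with open("vulnerability/dataset-program-count.json") as f:
    counts = json.load(f)

# Print the ten functions with the highest counts (most likely vulnerable).
for func, count in sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{count:6d}  {func}")
```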

Although each program is evaluated on different datasets, we still observe that highly consistent vulnerabilities are localized within the same program.

Prime+Probe

We use Mastik to launch Prime+Probe on the L1 cache of an Intel Xeon CPU and an AMD Ryzen CPU. We release our scripts in code/pp.

The experiment is launched on Linux. You first need to install taskset and cpuset.

We assume the victim and the spy run on the same CPU core and that no other process is running on that core. To isolate a CPU core, run sudo cset shield --cpu {cpu_id}.

Then run sudo cset shield --exec python run_pp.py -- {cpu_id} {segment_id}. Note that we separate the media data into several segments to speed up the side channel collection. code/pp/run_pp.py runs code/pp/pp_audio.py with taskset. code/pp/pp_audio.py is the coordinator: it runs the spy and the victim on the same CPU core simultaneously and saves the collected cache set accesses.
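
The sketch below illustrates this coordination pattern only; it is not the released code/pp/pp_audio.py, and the spy/victim commands, paths, and file names are placeholders.

```python
import subprocess

def run_pair(cpu_id, victim_cmd, spy_cmd, out_path):
    # Run the spy and the victim simultaneously, both pinned to the same
    # (shielded) CPU core with taskset.
    with open(out_path, "w") as out:
        spy = subprocess.Popen(["taskset", "-c", str(cpu_id), *spy_cmd], stdout=out)
        victim = subprocess.Popen(["taskset", "-c", str(cpu_id), *victim_cmd])
        victim.wait()    # let the victim finish processing the media input
        spy.terminate()  # stop probing; the collected cache-set accesses are in out_path
        spy.wait()

# Placeholders: "./spy" (a Mastik-based L1 Prime+Probe binary) and "./victim input.wav".
run_pair(0, ["./victim", "input.wav"], ["./spy"], "cache_trace.out")
```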

Audio

For result verification, we upload all 2,552 audios reconstructed by our framework under Prime+Probe to audio/sc09-pp. Each audio is named {Number}_{hash}_{index}.wav, where {Number} is the content of the corresponding reference input; e.g., for the reconstructed audio One_94de6a6a_nohash_1.wav, the number spoken in the reference input is "one". As reported in the paper, most (~80%) of the audios have contents (i.e., numbers) consistent with their reference inputs.

Output

We upload the media data reconstructed by our framework to the output folder.

Owner
Yuanyuan Yuan