Myo Keylogging

This is the source code for our paper My(o) Armband Leaks Passwords: An EMG and IMU Based Keylogging Side-Channel Attack by Matthias Gazzari, Annemarie Mattmann, Max Maass and Matthias Hollick in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 5, Issue 4, 2021.

We include the software used for recording the dataset (record folder), the software for training and running the neural networks (ml folder), and the software for analyzing the results (analysis folder). The scripts folder provides helper scripts for automating batches of hyperparameter optimization, model fitting, analyses and more. The results folder includes a pickled version of the predictions of our models, on which analyses can be run, e.g. to reproduce the results of the paper.
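
For orientation, the repository layout described above looks roughly as follows (an illustrative sketch; the files inside the folders are omitted):

myo-keylogging/
    record/      - software for recording the dataset
    ml/          - software for training and running the neural networks
    analysis/    - software for analyzing the results
    scripts/     - helper scripts for automating batch jobs
    results/     - pickled model predictions and analysis output
    train-data/  - training records (imported during installation)
    test-data/   - test records (imported during installation)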

Installation

To install the project, first clone the repository and change directory into the fresh clone:

git clone https://github.com/seemoo-lab/myo-keylogging.git
cd myo-keylogging

You can use a Python virtual environment (or any other virtual environment of your choice):

mkvirtualenv myo --system-site-packages
workon myo
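
Alternatively, a minimal sketch using the standard venv module (any virtual environment works; the name .venv is just an example):

python3 -m venv .venv
source .venv/bin/activate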

To make sure you have the newest versions of pip and setuptools, run an upgrade:

pip install --upgrade pip setuptools

To install the requirements, run:

pip install -r requirements.txt

Finally, import the training and test data into the project. The top level folder should include a folder train-data with all the records for training the models and a folder test-data with all the records for testing the models.

wget https://zenodo.org/record/5594651/files/myo-keylogging-dataset.zip
unzip myo-keylogging-dataset.zip
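
As a quick sanity check (assuming the archive unpacks both folders directly into the top level directory), you can list a few of the records:

ls train-data | head
ls test-data | head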

Using the record library, you can extend this dataset.

Rerun of Results

To reproduce our results from the provided predictions of our models, go to the top level directory and run:

./scripts/create_results.sh

This will recreate all performance value files and plots in the subfolders of the results folder as used in the paper.

Run the following to list the fastest and slowest typists, in order to determine their class imbalance in the results/train-data-skew.csv and results/test-data-skew.csv files:

python -m analysis exp_key_data
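
If you want to inspect the class skew files directly, here is a minimal Python sketch (assuming pandas is installed; the exact column layout of the CSV files may differ):

import pandas as pd

# load the class skew files produced for the training and test data
train_skew = pd.read_csv("results/train-data-skew.csv")
test_skew = pd.read_csv("results/test-data-skew.csv")
print(train_skew.head())
print(test_skew.head())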

To recreate the provided predictions and class skew files, execute the following from the top level directory:

./scripts/create_models.sh
./scripts/create_predictions.sh
./scripts/create_class_skew_files.sh

This will fit the models with the current choice of hyperparameters and run each model on the test dataset to create the required predictions for analysis. Additionally, the class skew files will be recreated.

To run the hyperparameter optimization, either run the run_shallow_hpo.sh script or, when on a SLURM cluster, the slurm_run_shallow_hpo.sh script:

sbatch scripts/slurm_run_shallow_hpo.sh
./scripts/run_shallow_hpo.sh

Afterwards you can use the merge_shallow_hpo_runs.py script to combine the results for easier evaluation of the hyperparameters.
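
To see which arguments the script expects (e.g. the paths of the individual HPO result files; this assumes the script provides a standard --help), run:

python scripts/merge_shallow_hpo_runs.py --help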

Fit Models

In order to fit and analyze your own models, go to the top level directory and run any of:

python -m ml crnn
python -m ml resnet
python -m ml resnet11
python -m ml wavenet

This will fit the respective model with the default parameters and in binary mode for keystroke detection. In order to fit multiclass models for keystroke identification, use the encoding parameter, e.g.:

python -m ml crnn --encoding "multiclass"

In order to test specific sensors, ignore the others (note that quaternions are ignored by default). For example, to use only EMG data with a CRNN model, run:

python -m ml crnn --ignore "quat" "acc" "gyro"

To run a hyperparameter optimization, run e.g.:

python -m ml crnn --func shallow_hpo --step 5

To gain more information on possible parameters, run e.g.:

python -m ml crnn --help

Some parameters for the neural networks are fixed in the code.

Analyze Models

In order to analyze your models, run apply_models to create the predictions as pickled files. On these you can run further analyses found in the analysis folder.

To run apply_models on a binary model, do:

python -m analysis apply_models --model_path results/<PATH_TO_MODEL> --encoding binary --data_path test-data/ --save_path results/<PATH_TO_PKL> --save_only --basenames <YOUR MODELS>

To run a multiclass model, do:

python -m analysis apply_models --model_path results/<PATH_TO_MODEL> --encoding multiclass --data_path test-data/ --save_path results/<PATH_TO_PKL> --save_only --basenames <YOUR MODELS>

To chain a binary and multiclass model, do e.g.:

python -m analysis apply_models --model_path results/<PATH_TO_MODEL> --encoding chain --data_path test-data/ --save_path results/<PATH_TO_PKL> --save_only --basenames <YOUR MODELS> --tolerance 10

Further parameters of interest for analyses are filters on the users (--users known or --users unknown) and on the data (--data known or --data unknown), which restrict the evaluation to users who are (or are not) part of the training data, or to data that was typed by all (or by no) other users, respectively.
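
For example, to evaluate a binary model only on users who are not part of the training data (a sketch combining the filter above with the apply_models call shown earlier):

python -m analysis apply_models --model_path results/<PATH_TO_MODEL> --encoding binary --data_path test-data/ --save_path results/<PATH_TO_PKL> --save_only --basenames <YOUR MODELS> --users unknown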

For more information, run:

python -m analysis apply_models --help

To later recreate model performance results and plots, run:

python -m analysis apply_models --encoding <ENCODING> --load_results results/<PATH_TO_PKL> --save_path results/<PATH_TO_PKL> --save_only

with the appropriate encoding of the model used to create the pickled results.

To run further analyses on the generated predictions, create or choose your analysis from the analysis folder and run:

python -m analysis <ANALYSIS_NAME>

Refer to the help for further information:

python -m analysis <ANALYSIS_NAME> --help

Record Data

In order to record your own data(set), switch to the record folder. To record sensor data with our recording software, you will need one or two Myo armbands connected to your computer. Then, you can start a training data recording, e.g.:

python tasks.py -s 42 -l german record touch_typing --left_tty <TTY_LEFT_MYO> --left_mac <MAC_LEFT_MYO> --right_tty <TTY_RIGHT_MYO> --right_mac <MAC_RIGHT_MYO> --kb_model TADA68_DE

for a German recording with seed 42, a touch typist and a TADA68 German physical keyboard layout, or

python tasks.py -s 42 -l english record hybrid_typing --left_tty <TTY_LEFT_MYO> --left_mac <MAC_LEFT_MYO> --right_tty <TTY_RIGHT_MYO> --right_mac <MAC_RIGHT_MYO> --kb_model TADA68_US

for an English recording with seed 42, a hybrid typist and a TADA68 English physical keyboard layout.

In order to start a test data recording, simply run passwords.py instead of tasks.py.

After recording training data, please execute the following script to complete the metadata:

python update_text_meta.py -p ../train-data/

After recording test data, please execute the following two scripts to complete the metadata:

python update_pw_meta.py -p ../test-data/
python update_cuts.py -p ../test-data/

For further information, check:

python tasks.py --help
python passwords.py --help

Note that the recording software includes text extracts as outlined in the acknowledgments below.

Links

Myo Keylogging Dataset: https://zenodo.org/record/5594651

Acknowledgments

This work includes the following external materials to be found in the record folder:

  1. Various texts from Wikipedia available under the CC-BY-SA 3.0 license.
  2. The EFF's New Wordlists for Random Passphrases available under the CC-BY 3.0 license.
  3. An extract of the Top 1000 most common passwords by Daniel Miessler, Jason Haddix, and g0tmi1k available under the MIT license.

License

This software is licensed under the GPLv3 license; please also refer to the LICENSE file.
