
Overview

Installation | Requirements | Usage | Contribution | Getting Help

Sensor Motion

[Badges: Python version | PyPI | GitHub issues | Read the Docs documentation status | Gitter chat]

Python package for analyzing sensor-collected human motion data (e.g. physical activity levels, gait dynamics).

Dedicated accelerometer devices, such as those made by Actigraph, usually bundle software for the analysis of the sensor data. In my work I often collect sensor data from smartphones and have not been able to find any comparable analysis software.

This Python package allows the user to extract human motion data, such as gait/walking dynamics, directly from accelerometer signals. Additionally, the package allows for the calculation of physical activity (PA) or moderate-to-vigorous physical activity (MVPA) counts, similar to activity count data offered by companies like Actigraph.

Installation

You can install this package using pip:

pip install sensormotion

Requirements

This package has the following dependencies, most of which are common Python packages (a combined pip command for the Python packages is shown after this list):

  • Python 3.x
    • The easiest way to install Python is using the Anaconda distribution, as it also includes the other dependencies listed below
    • Python 2.x has not been tested, so backwards compatibility is not guaranteed
  • numpy
    • Included with Anaconda. Otherwise, install using pip (pip install numpy)
  • scipy
    • Included with Anaconda. Otherwise, install using pip (pip install scipy)
  • matplotlib
    • Included with Anaconda. Otherwise, install using pip (pip install matplotlib)
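If you are not using Anaconda, the Python packages above can also be installed together with a single pip command (assuming pip points at your Python 3 environment):

pip install numpy scipy matplotlib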

Usage

Here is a brief example of extracting step-based metrics from raw vertical acceleration data:

Import the package:

import sensormotion as sm
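
If you do not have recorded data at hand, a synthetic signal is enough to try out the rest of this example. The sketch below is purely illustrative (the sample rate, duration, signal shape, and noise level are arbitrary assumptions) and simply creates the vertical acceleration signal x and time signal t, with time in milliseconds:

import numpy as np

sample_rate = 100  # Hz, matching the sample_rate passed to the filter below
duration = 10      # seconds of simulated walking

t = np.arange(0, duration, 1 / sample_rate) * 1000        # time stamps in milliseconds
# ~2 Hz oscillation (roughly two steps per second) plus some noise
x = np.sin(2 * np.pi * 2 * (t / 1000)) + 0.3 * np.random.randn(len(t))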

If you have a vertical acceleration signal x and its corresponding time signal t, we can begin by filtering the signal with a low-pass filter:

b, a = sm.signal.build_filter(frequency=10,
                              sample_rate=100,
                              filter_type='low',
                              filter_order=4)

x_filtered = sm.signal.filter_signal(b, a, signal=x)

[Image: images/filter.png]
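
If you want to visually compare the raw and filtered signals yourself, a quick matplotlib sketch (illustrative only, not part of the package) would look like this:

import matplotlib.pyplot as plt

plt.plot(t, x, label='raw', alpha=0.5)      # original vertical acceleration
plt.plot(t, x_filtered, label='filtered')   # low-pass filtered signal
plt.xlabel('Time (ms)')
plt.ylabel('Vertical acceleration')
plt.legend()
plt.show()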

Next, we can detect the peaks (or valleys) in the filtered signal, which gives us the time and value of each detection. Optionally, we can include a plot of the signal and detected peaks/valleys:

peak_times, peak_values = sm.peak.find_peaks(time=t, signal=x_filtered,
                                             peak_type='valley',
                                             min_val=0.6, min_dist=30,
                                             plot=True)

[Image: images/peak_detection.png]
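
As a quick, purely illustrative sanity check, the returned arrays can be used to count the detections and the time span they cover (in this sketch each detected peak/valley is treated as one step, and the peak times are assumed to be in milliseconds):

n_steps = len(peak_times)                          # one detection per step (assumption)
span_s = (peak_times[-1] - peak_times[0]) / 1000   # time span covered, in seconds
print(n_steps, 'steps detected over', span_s, 'seconds')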

From the detected peaks, we can then calculate step metrics like cadence and step time:

cadence = sm.gait.cadence(time=t, peak_times=peak_times, time_units='ms')
step_mean, step_sd, step_cov = sm.gait.step_time(peak_times=peak_times)
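
These functions return plain numbers, so they can be printed or stored directly; the labels below are illustrative (the step-time statistics inherit the units of peak_times, here milliseconds):

print('Cadence:', cadence)
print('Mean step time (ms):', step_mean)
print('Step time SD (ms):', step_sd)
print('Step time coefficient of variation:', step_cov)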

Physical activity counts and intensities can also be calculated from the acceleration data:

x_counts = sm.pa.convert_counts(x, time, integrate='simpson')
y_counts = sm.pa.convert_counts(y, time, integrate='simpson')
z_counts = sm.pa.convert_counts(z, time, integrate='simpson')
vm = sm.signal.vector_magnitude(x_counts, y_counts, z_counts)
categories, time_spent = sm.pa.cut_points(vm, set_name='butte_preschoolers', n_axis=3)

[Image: images/pa_counts.png]
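
The exact structure of the values returned by cut_points is best checked in the documentation; as a purely illustrative sketch, they can simply be inspected directly:

print(categories)   # activity intensity categories returned by cut_points (structure assumed)
print(time_spent)   # time spent at each intensity level (assumed)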

For a more in-depth walkthrough and more workflow examples, please take a look at the tutorial.

I would also recommend looking over the documentation to see other functionalities of the package.

Contribution

I work on this package in my spare time, on an "as needed" basis for my research projects. However, pull requests for bug fixes and new features are always welcome!

Please see the develop branch for the development version of the package, and check out the issues page for bug reports and feature requests.

Getting Help

  • You can find the full documentation for the package here.

  • Python's built-in help function will show documentation for any module or function: help(sm.gait.step_time)

  • You're encouraged to post questions, bug reports, or feature requests as an issue.

  • Alternatively, ask questions on Gitter.

Comments
  • Question

    I am using the sensormotion.py package to find peaks for one of my applications. I want to know how the normalized min_val (0-1) in peak.find_peaks relates to the minimum detectable peak value.

    opened by vivekmahadev 2
  • I need help using this library!

    Hi,

    I'm very interested in using this library in my project. I have a 2-minute walking test recorded at 100 Hz, with data collected from the accelerometer, gyroscope, and magnetometer of an iPhone 6.

    I'm trying to use the library with my data, but I couldn't understand some things. For example, in the function sm.peak.find_peaks(ac_lags, ac, peak_type='peak', min_val=0.6, min_dist=32, plot=True), what are suitable values for the min_val and min_dist parameters? Are they problem dependent? I have tried many values and the step estimation is not correct.

    Please, could you help me?

    Best regards

    opened by ogreyesp 1
  • sm.gait.step_regularity IndexError

    step_reg, stride_reg = sm.gait.step_regularity(ac_peak_values)
      File ".../python3.6/site-packages/sensormotion-1.1.0-py3.6.egg/sensormotion/gait.py", line 128, in step_regularity
        ac_d2 = peaks_half[2]  # second dominant period i.e. a stride (left-left)
    IndexError: index 2 is out of bounds for axis 0 with size 2

    opened by jiakang 1
  • Example: Importing from a live CSV file?

    opened by RandoSY 1
  • Question about step regularity

    Hey, I'm using your package right now to generate features for a dataset. I have looked at the paper by Moe-Nilssen et al. and tried to follow the steps for calculating step and stride regularity. However, I wonder why you still do the following calculation at the end:

    step_reg = ac_d1 / ac_lag0
    stride_reg = ac_d2 / ac_lag0

    Can you help me with this?

    opened by vanessabin 1
Releases: 1.1.4

Owner: Simon Ho (Data Science | Machine Learning | Statistics | Gaming)