Sample data associated with the Aurora-BP study

Overview

The Aurora-BP Study and Dataset

This repository contains sample code, sample data, and explanatory information for working with the Aurora-BP dataset, released alongside the publication of the Aurora-BP study: Mieloszyk, Rebecca, et al. "A Comparison of Wearable Tonometry, Photoplethysmography, and Electrocardiography for Cuffless Measurement of Blood Pressure in an Ambulatory Setting." IEEE Journal of Biomedical and Health Informatics (2022). The dataset includes de-identified participant information, raw sensor data aligned with each measurement, and a wide variety of features derived from the sensor data. The publication of this dataset, together with the characterization of multiple feature groups across a broad population and multiple settings, is intended to aid future cardiovascular research.

Note that the data contained in this repository represent a very small sample of the full dataset, meant only to illustrate the structure of the files and allow testing with the sample code. For access to the full dataset, see the Data Use Application section below.
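As a quick way to see how the sample files are laid out, the minimal sketch below reads each tab-separated file in the sample directory and prints its columns. The assumption that the files are TSVs is for illustration only; the docs folder contains the authoritative data file descriptions.

```python
# Minimal sketch for inspecting the structure of the sample files.
# Assumes tab-separated files in the repository's `sample` directory;
# see docs/ for the authoritative data file descriptions.
from pathlib import Path

import pandas as pd

sample_dir = Path("sample")

for tsv_path in sorted(sample_dir.glob("*.tsv")):
    df = pd.read_csv(tsv_path, sep="\t")
    print(f"{tsv_path.name}: {len(df)} rows")
    print(f"  columns: {list(df.columns)}")
```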

Navigation:

  • docs:
    • Data file descriptions, a detailed overview of the Aurora-BP Study protocol, and supplemental results not included in the Aurora-BP Study publication
  • notebooks:
    • Sample Jupyter notebooks and environment files for basic analyses using Aurora-BP Study data
  • sample:
    • Example data files used to run the sample Jupyter notebooks and to give researchers a direct look at the data format before applying for full data access (a brief example follows this list)
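To illustrate the kind of basic analysis the sample notebooks perform, the sketch below aligns one raw sensor recording with its measurement window and plots it. Every file and column name here (measurements.tsv, example_waveform.tsv, t, t_start, t_end, optical) is a hypothetical placeholder rather than the dataset's actual schema; adapt it to the real file descriptions in docs.

```python
# Hedged sketch: align one raw sensor recording with its measurement window
# and plot it. All file and column names below are hypothetical placeholders;
# consult docs/ for the actual Aurora-BP file descriptions before adapting.
import matplotlib.pyplot as plt
import pandas as pd

measurements = pd.read_csv("sample/measurements.tsv", sep="\t")    # assumed file
waveform = pd.read_csv("sample/example_waveform.tsv", sep="\t")    # assumed file

row = measurements.iloc[0]
# Keep only samples inside the measurement window (assumes a shared time
# column `t` in seconds and window bounds `t_start` / `t_end`).
mask = (waveform["t"] >= row["t_start"]) & (waveform["t"] <= row["t_end"])
segment = waveform.loc[mask]

plt.plot(segment["t"], segment["optical"], label="optical (PPG) channel")  # assumed column
plt.xlabel("time (s)")
plt.ylabel("amplitude (a.u.)")
plt.legend()
plt.tight_layout()
plt.show()
```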

Citation

If you use this repository, part or all of the full dataset, and/or our paper as part of your research, please refer to the dataset as the Aurora-BP dataset and cite the publication as below:

Mieloszyk, Rebecca, et al. "A Comparison of Wearable Tonometry, Photoplethysmography, and Electrocardiography for Cuffless Measurement of Blood Pressure in an Ambulatory Setting." IEEE Journal of Biomedical and Health Informatics (2022).


Data Access

Data Access Committee

Requests for data access are reviewed by the Data Access Committee. During review, the submitting investigator and primary investigator may be contacted for verification. The information you will need to gather to submit a Data Use Application, as well as a link to the form, is listed below. For additional questions regarding data access, contact: [email protected]


Data Use Application

Full data files are stored separately from this repo within an Azure data lake. To gain access to these data files, a data use application (detailed below and on the data lake landing page) must be submitted. Any researcher may submit a data use application, which includes:

  • Principal investigator information
    • Academic credentials, affiliation, contact information, curriculum vitae, signature attesting accuracy of data use application
  • Additional investigator information
    • Academic credentials, affiliation, contact information
  • Research proposal
  • Acknowledgement that the investigators will comply with the data use agreement. Key points are listed below:
    • No sharing of data with anyone other than the approved principal investigator (PI) and other specified investigators; new investigators must be reviewed
    • No data use outside of stated proposal scope
    • No joining of data with other data sources
    • No attempt to identify participants, contact participants, or reconstruct PII
    • Storage with appropriate access control and best practices
    • You may publish or present papers or articles on your results from using the data, provided that no confidential Microsoft information and no personal information are included in any such publication or presentation
    • Any publication or presentation resulting from use of the data should include reference to the Aurora-BP Study, with full reference to the source publication when appropriate
    • Aurora-BP Study authors and Microsoft are under no obligation to provide any support or additional materials related to the use of these data
    • Aurora-BP Study authors and Microsoft are not liable for any losses, damages, or harms of any kind in connection to the use of these data
    • Aurora-BP Study authors and Microsoft are not responsible or liable for the accuracy, usefulness or availability of these data
    • The principal investigator will provide a signature attesting that they have read, understood, and accepted the data use agreement