VR Viewport Pose Model for Quantifying and Exploiting Frame Correlations

Overview

This repository introduces the collected VRViewportPose dataset and provides the code for the IEEE INFOCOM 2022 paper: "VR Viewport Pose Model for Quantifying and Exploiting Frame Correlations" by Ying Chen, Hojung Kwon, Hazer Inaltekin, and Maria Gorlatova.

Outline

I. VRViewportPose Dataset

1. Data Collection

We conducted an IRB-approved data collection of the viewport pose in 3 VR games and across 3 different types of VR user interfaces, with over 5.5 hours of user data in total.

A. Stimuli

We collected the viewport pose for desktop, headset, and phone-based virtual reality (VR), using open-source VR games of different scene complexities from the Unity Asset Store: 1 indoor scenario (Office [1]) and 2 outdoor scenarios (Viking Village [2], Lite [3]). In desktop VR, rotational and translational movements are made using the mouse and the up arrow key. The poses in headset VR are collected with a standalone Oculus Quest 2 [4], where rotational and translational movements are made by moving the head and by using the controller thumbstick. The poses in phone-based VR are collected with a Google Pixel 2 XL and a Nokia 7.1 running Android 9, where rotational and translational movements are made by moving the motion-sensor-equipped phone and by tapping on the screen with one finger.

Figure 1: Open-source VR games used for the data collection: (a) Office; (b) Viking Village; (c) Lite.

B. Procedure

The data collection, conducted under COVID-19 restrictions, involved unaided and Zoom-supported remote data collection with distributed desktop and phone-based VR apps, and a small number of socially distanced in-lab experiments for headset and phone-based VR. We recorded the viewport poses of 20 participants (9 male, 11 female, age 20-48) in desktop VR, 5 participants (2 male, 3 female, age 23-33) in headset VR, and 5 participants (3 male, 2 female, age 23-33) in phone-based VR. The participants were seated in front of a PC (desktop VR), wore the headset while standing (headset VR), or held a phone in landscape mode while standing (phone-based VR). For desktop and phone-based VR, each participant explored Viking Village, Lite, and Office for 5, 5, and 2 minutes, respectively. For headset VR, the participants explored each game for only 2 minutes to avoid simulator sickness.

Considering the device computation capability and the screen refresh rate, the timestamp and viewport pose of each participant are recorded at a target frame rate of 60 Hz, 72 Hz, and 60 Hz for desktop, headset, and phone-based VR, respectively. For each frame, we record the timestamp, the x, y, z positions, and the roll, pitch, and yaw Euler orientation angles. For the Euler orientation angles β, γ, α, the intrinsic rotation order is adopted, i.e., the viewport is rotated α degrees around the z-axis, then β degrees around the x-axis, and then γ degrees around the y-axis. We randomize the initial viewport position in the VR games over the whole bounding area. We fix the initial polar angle of the viewport to 90 degrees and uniformly randomize the initial azimuth angle on [-180, 180) degrees.
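For readers who want to reproduce the rotation convention programmatically, the short sketch below builds the intrinsic z-x-y rotation with SciPy and recovers the viewing direction of one recorded frame. It is only an illustration of the convention described above; treating Unity's +z axis as the camera's forward direction is an assumption, not part of the dataset documentation.

# Minimal sketch (not part of the released scripts): apply the intrinsic
# z-x-y rotation described above and recover the viewing direction.
# Assumes angles are in degrees and the camera looks along the +z axis.
import numpy as np
from scipy.spatial.transform import Rotation as R

def viewing_direction(alpha_deg, beta_deg, gamma_deg):
    """Unit viewing direction for one recorded frame.

    alpha, beta, gamma are the Euler angles around the z-, x-, and y-axes;
    uppercase 'ZXY' in SciPy denotes intrinsic rotations applied in that order.
    """
    rot = R.from_euler('ZXY', [alpha_deg, beta_deg, gamma_deg], degrees=True)
    return rot.apply([0.0, 0.0, 1.0])  # rotate the assumed forward axis

print(viewing_direction(30.0, 10.0, -5.0))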

2. Download the Dataset

The dataset can be downloaded here.

A. The structure of the dataset

The dataset follows the hierarchical file structure shown below:

VR_Pose
└───data_Desktop
│   │
│   └───Office_Desktop_1.txt
│   └───VikingVillage_Desktop_1.txt
│   └───Lite_Desktop_1.txt
│   └───Office_Desktop_2.txt
│   └───VikingVillage_Desktop_2.txt
│   └───Lite_Desktop_2.txt
│   ...
│
└───data_Oculus
│   │
│   └───Office_Oculus_1.txt
│   └───VikingVillage_Oculus_1.txt
│   └───Lite_Oculus_1.txt
│   └───Office_Oculus_2.txt
│   └───VikingVillage_Oculus_2.txt
│   └───Lite_Oculus_2.txt
│   ...
│
└───data_Phone
...

There are 3 sub-folders corresponding to the different VR interfaces. The data_Desktop subfolder contains 60 TXT files, corresponding to 20 participants each experiencing 3 VR games. The data_Oculus and data_Phone subfolders each contain 15 TXT files, corresponding to 5 participants each experiencing 3 VR games. In total, there are over 5.5 hours of user data.
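As a convenience for working with the extracted files, the hedged sketch below iterates over the per-participant TXT files of one interface and loads them with pandas. The whitespace delimiter and the column order (timestamp, x, y, z, roll, pitch, yaw) follow the Procedure description above but are assumptions here; check them against the actual files.

# Hedged loading sketch: the column names and the whitespace delimiter are
# assumptions based on the Procedure section, not a documented file format.
import glob
import os
import pandas as pd

COLUMNS = ["timestamp", "x", "y", "z", "roll", "pitch", "yaw"]  # assumed order

def load_poses(dataset_root, interface="data_Desktop"):
    """Return {file name: DataFrame} for every pose file of one interface."""
    traces = {}
    for path in sorted(glob.glob(os.path.join(dataset_root, interface, "*.txt"))):
        df = pd.read_csv(path, sep=r"\s+", header=None, names=COLUMNS)
        traces[os.path.basename(path)] = df
    return traces

traces = load_poses("VR_Pose")
print(len(traces), "files loaded")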

3. Extract the Orientation and Position Models

OrientationModel.py and PositionModel.py are used to extract the orientation and position models of the VR viewport pose, respectively. Before running the scripts in this repository, download the repository and install the necessary tools and libraries on your computer, including scipy, numpy, pandas, fitter, and matplotlib.

A. Orientation model

Data processing

We convert the recorded Euler angles (α, β, γ) to the polar angle θ and the azimuth angle ϕ. After applying the rotation matrix R of the intrinsic z-x-y rotation to the camera's forward axis, we have

[sinθsinϕ, cosθ, sinθcosϕ]ᵀ = R[0, 0, 1]ᵀ = [cosαsinγ+sinαsinβcosγ, sinαsinγ-cosαsinβcosγ, cosβcosγ]ᵀ.

From the above equation, θ is calculated as θ = arccos(sinαsinγ-cosαsinβcosγ), and ϕ is given by ϕ = arctan((cosαsinγ+sinαsinβcosγ)/(cosβcosγ)).
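As a quick sanity check of these formulas, the hedged NumPy sketch below computes θ and ϕ directly from one set of Euler angles; it illustrates the closed-form expressions above and is not a copy of the released OrientationModel.py code.

# Illustration of the closed-form conversion above (not the released script).
import numpy as np

def euler_to_polar_azimuth(alpha_deg, beta_deg, gamma_deg):
    """Return (theta, phi) in degrees from the z-, x-, y-axis Euler angles."""
    a, b, g = np.radians([alpha_deg, beta_deg, gamma_deg])
    theta = np.arccos(np.sin(a) * np.sin(g) - np.cos(a) * np.sin(b) * np.cos(g))
    # arctan2 keeps the azimuth in the correct quadrant
    phi = np.arctan2(np.cos(a) * np.sin(g) + np.sin(a) * np.sin(b) * np.cos(g),
                     np.cos(b) * np.cos(g))
    return np.degrees(theta), np.degrees(phi)

print(euler_to_polar_azimuth(30.0, 10.0, -5.0))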

After we obtain the polar and azimuth angles, we fit the polar angle, polar angle change, and azimuth angle change to a set of statistical models and mixed models (of two statistical models).
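For readers who want to experiment with the distribution fitting outside the released script, the sketch below uses the fitter package (listed as a dependency above) to compare a few candidate distributions on the azimuth angle change. The candidate list and the synthetic sample data are illustrative assumptions, not the set of models used in the paper.

# Hedged fitting sketch using the fitter dependency; the candidate
# distributions and the synthetic data here are illustrative only.
import numpy as np
from fitter import Fitter

# Stand-in for the azimuth angle changes extracted from the dataset.
azimuth_change = np.random.laplace(loc=0.0, scale=2.0, size=10_000)

f = Fitter(azimuth_change, distributions=["norm", "laplace", "logistic", "cauchy"])
f.fit()
print(f.get_best(method="sumsquare_error"))  # best-fitting distribution and parameters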

Orientation model script

The orientation model script is provided at https://github.com/VRViewportPose/VRViewportPose/blob/main/OrientationModel.py. To obtain the orientation model, follow the procedure below:

a. Download and extract the VR viewport pose dataset.

b. Change the filePath variable in OrientationModel.py to the file location of the pose dataset.

c. You can directly run OrientationModel.py (python .\OrientationModel.py). It will automatically run the pipeline.

d. The generated EPS images named "polar_fit_our_dataset.eps", "polar_change.eps", "azimuth_change.eps", and "ACF_our_dataset.eps" will be saved in a folder. "polar_fit_our_dataset.eps", "polar_change.eps", and "azimuth_change.eps" show the distributions of the experimental data for the polar angle, polar angle change, and azimuth angle change fitted by different statistical distributions, respectively. "ACF_our_dataset.eps" shows the autocorrelation function (ACF) of polar and azimuth angle samples that are Δt seconds apart (see the sketch after this list for one way such an ACF can be computed).
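The sketch below shows one way to compute the lag-Δt autocorrelation of an angle trace with NumPy. It mirrors the quantity plotted in "ACF_our_dataset.eps" only in spirit; the exact estimator used by OrientationModel.py is not restated here.

# Illustrative lag-based autocorrelation of an angle trace (not the exact
# estimator used in OrientationModel.py).
import numpy as np

def acf_at_lag(samples, lag):
    """Sample autocorrelation between x[t] and x[t + lag]."""
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

frame_rate = 60.0                      # Hz, desktop/phone target frame rate
delta_t = 0.5                          # seconds between compared samples
lag = int(round(delta_t * frame_rate))

polar_angles = np.cumsum(np.random.normal(0, 0.5, 5000)) + 90.0  # stand-in trace
print(acf_at_lag(polar_angles, lag))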

B. Position model

Data processing

We apply the standard angle model proposed in [5] to extract flights from the trajectories. An example of the collected trajectory for one user in Lite and the extracted flights is shown below.
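In addition to the extracted-flight example referenced above, the sketch below gives a simplified, hedged reading of angle-based flight extraction on a 2D trajectory: consecutive movement samples are merged into one flight until the walking direction turns sharply or the user pauses. The thresholds and the exact rules of the angle model in [5], as used by PositionModel.py, may differ; treat this only as an illustration.

# Simplified, hedged interpretation of angle-based flight extraction from a
# 2D trajectory: a flight ends when the walking direction turns by more than
# angle_thresh_deg or the user pauses (per-frame step shorter than pause_thresh).
# Thresholds are illustrative; they are not the values used in the paper.
import numpy as np

def extract_flights(xy, angle_thresh_deg=30.0, pause_thresh=0.01):
    """Return a list of (start_index, end_index) flight segments."""
    xy = np.asarray(xy, dtype=float)
    steps = np.diff(xy, axis=0)                # per-frame displacement vectors
    lengths = np.linalg.norm(steps, axis=1)
    flights, start = [], None
    for i in range(len(steps)):
        if lengths[i] < pause_thresh:          # pause: close any open flight
            if start is not None:
                flights.append((start, i))
                start = None
            continue
        if start is None:                      # first moving step after a pause
            start = i
            continue
        prev = steps[i - 1]
        cos_turn = np.dot(prev, steps[i]) / (np.linalg.norm(prev) * lengths[i] + 1e-12)
        turn = np.degrees(np.arccos(np.clip(cos_turn, -1.0, 1.0)))
        if turn > angle_thresh_deg:            # sharp turn: current flight ends
            flights.append((start, i))
            start = i
    if start is not None:                      # close the last open flight
        flights.append((start, len(steps)))
    return flights

# Example with a synthetic L-shaped walk: the single turn splits it into two flights.
path = np.array([[0, 0], [1, 0], [2, 0], [3, 0], [3, 1], [3, 2], [3, 3]], dtype=float)
print(extract_flights(path))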

Position model script

The position model script is provided at https://github.com/VRViewportPose/VRViewportPose/blob/main/PositionModel.py. To obtain the position model, follow the procedure below:

a. Download and extract the VR viewport pose dataset.

b. Change the filePath variable in PositionModel.py to the file location of the pose dataset.

c. You can directly run PositionModel.py (python .\PositionModel.py). It will automatically run the pipeline.

d. The generated EPS images named "flight_sample.eps", "flight.eps", "pausetime_distribution.eps", and "correlation.eps" will be saved in a folder. "flight_sample.eps" shows an example of the collected trajectories and the corresponding flights. "flight.eps" and "pausetime_distribution.eps" show distributions of the flight time and the pause duration for collected samples, respectively. "correlation.eps" shows the correlation of the azimuth angle and the walking direction.

II. Visibility Similarity

4. Analytical Results

The code for analyzing the visibility similarity can be downloaded here.

a. You will see three files after extracting the ZIP file. Analysis_Visibility_Similarity.m sets the parameters for the orientation model, the position model, and the visibility similarity model, and calculates the analytical results of the visibility similarity. calculate_m_k.m calculates the k-th moment of the position displacement, and calculate_hypergeom.m calculates the hypergeometric function.

b. Run Analysis_Visibility_Similarity.m to obtain the analytical results of the visibility similarity.
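If you prefer to cross-check part of the MATLAB analysis in Python, the hedged sketch below estimates the k-th moment of the position displacement empirically from a loaded pose trace. Using the (x, z) ground-plane coordinates and a fixed time lag are assumptions made for illustration; this is not a reimplementation of calculate_m_k.m.

# Hedged empirical estimate of the k-th moment of the position displacement
# over a time lag, for cross-checking the analytical MATLAB results.
# Treating x and z as the ground-plane coordinates is an assumption.
import numpy as np

def empirical_displacement_moment(df, k, delta_t, frame_rate=60.0):
    """E[|displacement over delta_t|^k] estimated from one pose trace."""
    lag = int(round(delta_t * frame_rate))
    pos = df[["x", "z"]].to_numpy(dtype=float)
    disp = np.linalg.norm(pos[lag:] - pos[:-lag], axis=1)
    return np.mean(disp ** k)

# Example with the loader sketched earlier (names are illustrative):
# traces = load_poses("VR_Pose")
# print(empirical_displacement_moment(next(iter(traces.values())), k=2, delta_t=1.0))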

5. Implementation of ALG-ViS

The code for implementing ALG-ViS can be downloaded here. It has been tested with Unity 2019.2.14f1 and an Oculus Quest 2 running build 30.0.

a. In Unity Hub, create a new 3D Unity project. Download the ZIP file and unzip it into the "Assets" folder of the Unity project.

b. Install Android 9.0 'Pie' (API Level 28) or higher using the SDK Manager in Android Studio.

c. Navigate to File > Build Settings > Player Settings. Set 'Minimum API Level' to Android 9.0 'Pie' (API Level 28) or higher. In 'Other Settings', make sure only 'OpenGLES3' is selected. In 'XR Settings', check 'Virtual Reality Supported' and add 'Oculus' to the 'Virtual Reality SDKs'. Set your 'CompanyName' and 'GameName'; the Bundle Identifier string com.CompanyName.GameName will be the unique package name of your application installed on the Oculus device.

d. Copy the "pose.txt" and "visValue.txt" to the Application.persistentDataPath which points to /storage/emulated/0/Android/data/ /files, where is com.CompanyName.GameName.

e. Navigate to Window>Asset Store. Search for the virtual reality game (e.g., the 'Make Your Fantasy Game - Lite' game [3]) in the Asset Store, and select 'Buy Now' and 'Import'.

f. Make sure only the 'ALG_ViS' scene is selected in 'Scenes in Build'. Select your connected target device (Oculus Quest 2) and click 'Build and Run'.

g. The output APK package will be saved to the file path you specify, while the app will be installed on the Oculus Quest 2 device connected to your computer.

h. Disconnect the Oculus Quest 2 from the computer. After setting up a new Guardian Boundary, the virtual reality game with ALG-ViS will be automatically loaded.

Citation

Please cite the following paper in your publications if the dataset or code helps your research.

 @inproceedings{Chen22VRViewportPose,
  title={{VR} Viewport Pose Model for Quantifying and Exploiting Frame Correlations},
  author={Chen, Ying and Kwon, Hojung and Inaltekin, Hazer and Gorlatova, Maria},
  booktitle={Proc. IEEE INFOCOM},
  year={2022}
}

Acknowledgments

We thank the study's participants for their time in the data collection. The contributors of the dataset and code are Ying Chen and Maria Gorlatova. For questions on this repository or the related paper, please contact Ying Chen at yc383 [AT] duke [DOT] edu.

References

[1] Unity Asset Store. (2020) Office. https://assetstore.unity.com/packages/3d/environments/snaps-prototype-office-137490

[2] Unity Technologies. (2015) Viking Village. https://assetstore.unity.com/packages/essentials/tutorial-projects/viking-village-29140

[3] Xiaolianhua Studio. (2017) Lite. https://assetstore.unity.com/packages/3d/environments/fantasy/make-your-fantasy-game-lite-8312

[4] Oculus. (2021) Oculus Quest 2. https://www.oculus.com/quest-2/

[5] I. Rhee, M. Shin, S. Hong, K. Lee, and S. Chong, “On the Levy-walk nature of human mobility,” in Proc. IEEE INFOCOM, 2008.
