A study project using the AA-RMVSNet to reconstruct buildings from multiple images

Overview

3d-building-reconstruction

This is part of a study project using the AA-RMVSNet to reconstruct buildings from multiple images.

Introduction

It is exciting to connect the 2D world with the 3D world using Multi-view Stereo (MVS) methods. In this project, we aim to reconstruct several buildings on our campus. Since this is an outdoor reconstruction task, we chose AA-RMVSNet for its excellent performance on outdoor datasets, after comparing it with similar models such as CasMVSNet and D2HC-RMVSNet. The code is adapted from here with some modifications.

Reproduction

Here we summarize the main steps we took in this project. You can reproduce our results by following these steps.

Installation

First, you need to create a virtual environment and install the necessary dependencies.

conda create -n test python=3.6
conda activate test
conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=10.0 -c pytorch
conda install -c conda-forge py-opencv plyfile tensorboardx

Other CUDA versions can be found here.
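
After installation, a quick sanity check can confirm that the main dependencies import correctly and that CUDA is visible to PyTorch. This is a minimal sketch; the package list simply follows the conda commands above.

    import torch
    import torchvision
    import cv2
    # Importing plyfile and tensorboardX only confirms they are installed.
    import plyfile
    import tensorboardX

    print("torch", torch.__version__, "| torchvision", torchvision.__version__)
    print("OpenCV", cv2.__version__)
    print("CUDA available:", torch.cuda.is_available())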

Structure from Motion

Camera parameters are required by MVSNet-based methods. Please first download the open-source software COLMAP.

The workflow is as follows:

  1. Open COLMAP, then click Reconstruction - Automatic reconstruction to open the options dialog.
  2. Select your Workspace folder and Image folder.
  3. (Optional) Uncheck Dense model to speed up the reconstruction.
  4. Click Run.
  5. After the reconstruction completes, you should be able to see the sparse reconstruction result as well as the camera positions.
  6. Click File - Export model as text. There should be a cameras.txt in the output folder; each line corresponds to a registered photo. If some photos remain unmatched, delete them and rematch. Repeat this process until all the photos are matched (see the sketch after this list for one way to check).
  7. Move the three txt files to the sparse folder.
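
If you want to check programmatically which photos were registered, the sketch below compares the image names listed in images.txt from the exported text model (COLMAP's text export also produces cameras.txt and points3D.txt) against the contents of your image folder. The paths are assumptions and should be adjusted to your workspace.

    import os

    # Hypothetical paths; adjust them to your COLMAP workspace layout.
    sparse_txt_dir = "workspace/sparse"   # folder with cameras.txt, images.txt, points3D.txt
    image_dir = "workspace/images"        # folder containing the original photos

    # In COLMAP's text export, each registered image has a meta line in images.txt
    # with 10 fields, the last of which is the image file name.
    registered = set()
    with open(os.path.join(sparse_txt_dir, "images.txt")) as f:
        for line in f:
            tokens = line.strip().split()
            if len(tokens) == 10 and tokens[-1].lower().endswith((".jpg", ".jpeg", ".png")):
                registered.add(tokens[-1])

    photos = {p for p in os.listdir(image_dir)
              if p.lower().endswith((".jpg", ".jpeg", ".png"))}
    unmatched = sorted(photos - registered)
    print(f"{len(registered)} photos registered, {len(unmatched)} unmatched")
    for name in unmatched:
        print("unmatched:", name)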


AA-RMVSNet

To use AA-RMVSNet to reconstruct the building, please follow the steps listed below.

  1. Clone this repository to a local folder.

  2. The custom testing folder should be placed in the root directory of the cloned repository. It should have two subfolders named images and sparse: the images folder holds the photos, and the sparse folder should contain the three txt files recording the camera parameters.

  3. Find the file list-dtu-test.txt and write in it the name of the folder you wish to test.

  4. Run colmap2mvsnet.py by

    python ./sfm/colmap2mvsnet.py --dense_folder name --interval_scale 1.06 --max_d 512
    

    The parameter dense_folder is required; the others are optional. You can also change the default values in the shell scripts used below.

  5. When the previous step finishes, run the following commands:

    sh ./scripts/eval_dtu.sh
    sh ./scripts/fusion_dtu.sh
    
  6. You should then see the output .ply files in the outputs_dtu folder.

Here dtu means the data is organized in the format of the DTU dataset.
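
Since plyfile is already installed in the environment, you can also do a quick command-line inspection of a fused point cloud before opening it in a viewer. This is a minimal sketch; the actual file name under outputs_dtu depends on your test folder and the fusion script.

    import numpy as np
    from plyfile import PlyData

    # Hypothetical path; point this at an actual .ply produced in outputs_dtu.
    ply = PlyData.read("outputs_dtu/fused.ply")
    vertex = ply["vertex"]
    xyz = np.stack([vertex["x"], vertex["y"], vertex["z"]], axis=-1)

    print("point count:", vertex.count)
    print("bounding box min:", xyz.min(axis=0))
    print("bounding box max:", xyz.max(axis=0))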

Results

We reconstructed various spots on our campus. The reconstructed point cloud files are available here (Code: nz1e). You can visualize them with MeshLab or CloudCompare.
