A novel framework to automatically learn high-quality scanning of non-planar, complex anisotropic appearance.

Overview

appearance-scanner

About

This repository is an implementation of the neural network proposed in Free-form Scanning of Non-planar Appearance with Neural Trace Photography.

For any questions, please email xiaohema98 at gmail.com

Usage

System Requirement

  • Windows or Linux (the code has been validated on Windows 10, Ubuntu 18.04, and Ubuntu 16.04)
  • Python >= 3.6.0
  • Pytorch >= 1.6.0
  • TensorFlow >= 1.11.0, MeshLab, and MATLAB are needed if you process the test data we provide

Training

  1. Move to appearance_scanner
  2. Run train.bat or train.sh, depending on your platform

Note that the data generation step

python data_utils/origin_parameter_generator_n2d.py %data_root% %Sample_num% %train_ratio%

should be run only once.
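For reference, a typical invocation might look like the following (the values here are hypothetical; substitute your own data root, sample count, and train ratio):

python data_utils/origin_parameter_generator_n2d.py ./data 100000 0.9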

Training Visualization

Once training starts, you can open TensorBoard to monitor the training process. Two log images of a selected training sample will be shown: one is the sampled lumitexels from 64 views, and the other is a composite of six images, in the order of ground-truth lumitexel, ground-truth diffuse lumitexel, ground-truth specular lumitexel, predicted lumitexel, predicted diffuse lumitexel, and predicted specular lumitexel.
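For example, TensorBoard can be pointed at the log directory configured in train.bat/train.sh (replace the placeholder with your own path) and opened in a browser at http://localhost:6006:

tensorboard --logdir <log_dir>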

The trained lighting patterns will also be shown. The trained model can be found in the log_dir set in train.bat/train.sh.

License

Our source code is released under the GPL-3.0 license for academic purposes. The only requirement for using the code in your research is to cite our paper:

@article{Ma:2021:Scanner,
author = {Ma, Xiaohe and Kang, Kaizhang and Zhu, Ruisheng and Wu, Hongzhi and Zhou, Kun},
title = {Free-Form Scanning of Non-Planar Appearance with Neural Trace Photography},
year = {2021},
issue_date = {August 2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {40},
number = {4},
issn = {0730-0301},
url = {https://doi.org/10.1145/3450626.3459679},
doi = {10.1145/3450626.3459679},
journal = {ACM Trans. Graph.},
month = jul,
articleno = {124},
numpages = {13},
keywords = {illumination multiplexing, SVBRDF, optimal lighting pattern}
}

For commercial licensing options, please email hwu at acm.org. See COPYING for the open source license.

Reconstruction process

The reconstruction needs photographs taken with our scanner, a pre-trained network model, and a pre-captured geometry shape as input. First, perform structure-from-motion with COLMAP, which yields a 3D point cloud and camera poses with respect to it. Next, this point cloud is precisely aligned with the pre-captured shape. Then the view information of each vertex can be assembled as the input to the network. Last, we fit the predicted grayscale specular lumitexels with L-BFGS-B to obtain the reflectance parameters.
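As a rough illustration of the last step, here is a minimal per-vertex fitting sketch using SciPy's L-BFGS-B. render_lumitexel below is a hypothetical placeholder for the actual GGX lumitexel renderer, and the parameter layout and bounds are assumptions:

    import numpy as np
    from scipy.optimize import minimize

    def render_lumitexel(params, light_dirs):
        # Hypothetical placeholder renderer: a clamped-cosine (diffuse-only) lobe,
        # standing in for the full anisotropic GGX lumitexel renderer of the paper.
        pd = params[0]
        normal = params[1:4] / np.linalg.norm(params[1:4])
        return pd * np.clip(light_dirs @ normal, 0.0, None)

    def fit_vertex(target_lumitexel, light_dirs, x0, bounds):
        # Least-squares fit of the reflectance parameters to the target lumitexel.
        def loss(p):
            pred = render_lumitexel(p, light_dirs)
            return float(np.mean((pred - target_lumitexel) ** 2))
        return minimize(loss, x0, method="L-BFGS-B", bounds=bounds).x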

Download our Cheongsam test data and unzip it in appearance_scanner/data/.

Three sample photographs of the Cheongsam object. The brightness of the original images has been doubled for better visualization.

Download our model and unzip it in appearance_scanner/.

1. Camera Registration

1.1 Run SFM/run.bat first to brighten the raw images

1.2 Open COLMAP and do the following steps

1.2.1 New project

1.2.2 Feature extraction

Copy the parameters of our camera in device_configuration/cam.txt to Custom parameters.

1.2.3 Feature matching

Tick guided_matching and run.

1.2.4 Reconstruction options

Do not tick multiple_models in the General sheet.

Do not tick refine_focal_length/refine_extra_params/use_pba in the Bundle sheet.

Start reconstruction.

1.2.5 Bundle adjustment

Do not tick refine_focal_length/refine_principal_point/refine_extra_params.

1.2.6 Export the model as text

Make a folder named undistort_feature in Cheongsam/ and export the model as text into the undistort_feature folder. Three files, cameras.txt, images.txt, and points3D.txt, will be saved.

1.2.7 Dense reconstruction

Dense reconstruction -> select undistort_feature folder -> Undistortion -> Stereo

Since we uploaded all the photos we took, this step takes a long time. We strongly recommend running

colmap stereo_fusion --workspace_path path --input_type photometric --output_path path/fused.ply

(replace path with the undistort_feature folder) once the number of files in undistort_feature/stereo/normal_maps reaches around 200-250. It will output a coarse point cloud in undistort_feature/.
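For the Cheongsam example, assuming the undistort_feature folder from step 1.2.6 sits under Cheongsam/, the invocation would look like this (adjust the paths to your setup):

colmap stereo_fusion --workspace_path Cheongsam/undistort_feature --input_type photometric --output_path Cheongsam/undistort_feature/fused.ply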

Delete the noise points and the table plane.

Save fused.ply.

2. Extract measurements

Move your own model to models/ and run appearance_scanner/test_files/prepare_pattern.bat.

Run extract_measurements/run.bat.

3. Align mesh

3.1 Use MeshLab to roughly align the meshes

Open fused.ply and Cheongsam/scan/Cheongsam.ply in the same MeshLab window. Cheongsam.ply was pre-captured with a commercial mobile 3D scanner, the EinScan Pro 2X Plus.

Align the two meshes and save the project file as Cheongsam/scan/Cheongsam.aln, which records the transformation matrix between the two meshes.

Run CoherentPointDrift/run.bat to align Cheongsam.ply to fused.ply.

3.2 Further Alignment

Run CoherentPointDrift/CoherentPointDrift-master/simplify/run.bat to simplify the two meshes. It calls meshlabserver to do the simplification, which saves processing time.

Open the CPD project in MATLAB and run main.m.

After the alignment is done, run CoherentPointDrift/run_pass2.bat. meshed-poisson_obj.ply will be saved in undistort_feature/.

Open fused.ply and meshed-poisson_obj.ply in the same MeshLab window to check the quality of the alignment, which is a key factor in the final result.

4. Generate view information from registered cameras

4.1 Remesh

Run ACVD/aarun.bat.

Save undistort_feature/meshed-poisson_obj_remeshed.ply as undistort_feature/meshed-poisson_obj_remeshed.obj.

It is not necessary to reconstruct all the vertices of the pre-captured shape in our case. The remesh step outputs an optimized 3D triangular mesh with a user-defined vertex budget, controlled by NVERTICES in aarun.bat.

4.2 uvatlas

Copy data_processing/device_configuration/extrinsic.bin to undistort_feature/.

Copy Cheongsam/512.exr and 1024.exr to undistort_feature/.

Run generate_texture/trans.bat to transform the mesh from the COLMAP frame to the world frame of our system and to generate UV maps.

We recommend generating UV maps at a resolution of 512x512, because it saves a lot of time while retaining most details. The results in our paper use a resolution of 1024x1024.

You can set UVMAP_WIDTH and UVMAP_HEIGHT to 1024 in uv/uv_generator.bat if you pursue higher quality.

4.3 Compute view information

Download Embree and copy bin/embree3.dll, glfw3.dll, and tbb12.dll to generate_texture/.

Download OpenCV and copy opencv_world#v.dll (where #v is the OpenCV version) to generate_texture/. We use OpenCV 3.4.3 in our project.

In generate_texture/texgen.bat, set TEXTURE_RESOLUTION to the chosen resolution.

Choose the same line or another reference both on meshed-poisson_obj_remeshed.obj and on the physical object, then measure its length on each. Set the results as COLMAP_L and REAL_L (REAL_L is in mm).

The marker cylinder's diameter is 10 cm, so we set REAL_L to 100.
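For example (the mesh-side measurement below is a hypothetical value; use whatever you actually measure on your mesh):

    # Hypothetical numbers: the cylinder's diameter reads 0.35 scene units on
    # meshed-poisson_obj_remeshed.obj and 100 mm on the physical object.
    COLMAP_L = 0.35   # length measured on the COLMAP-frame mesh (scene units)
    REAL_L = 100.0    # the same length measured on the real object (mm)
    scale = REAL_L / COLMAP_L   # ~285.7 mm per scene unit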

Run generate_texture/texgen.bat to output the view information of all registered cameras.

5. Gather data

Run gather_data/run.bat to gather the inputs to the network for each valid pixel on the texture map. A folder named images_{resolution} will be created in Cheongsam/.

6. Fitting

  1. Change %DATA_ROOT% and %TEXTURE_MAP_SIZE% in fitting/tf_ggx_render/run.bat. Then run fitting/tf_ggx_render/run.bat.
  2. A folder named fitting_folder_for_server will be generated under texture_{resolution}.
  3. Upload the entire folder generated in the previous step to a Linux server.
  4. Change the terminal's working directory to fitting_folder_for_server/fitting_temp/tf_ggx_render, then run split.sh or split1024.sh according to the resolution you chose. (split.sh is for 512; if you use a custom texture map resolution, you may need to modify $TEX_RESOLUTION in split.sh.)
  5. When the fitting procedure finishes, a folder named Cheongsam/images_{resolution}/data_for_server/data/images/data_for_server/fitted_grey will be generated. It contains the final texture maps, including normal_fitted_global.exr, tangent_fitted_global.exr, axay_fitted.exr, pd_fitted.exr and ps_fitted.exr.
    Note: If split.sh does not run properly and complains about a missing which_server argument, this is probably caused by line-ending differences between Windows and Linux. Reading in the .sh file and writing it back unchanged on the server fixes the issue (see the sketch below).
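A minimal sketch of that fix, assuming the failure is caused by Windows (CRLF) line endings in the script:

    # Read split.sh and write it back with Unix (LF) line endings on the server.
    with open("split.sh", "rb") as f:
        content = f.read()
    with open("split.sh", "wb") as f:
        f.write(content.replace(b"\r\n", b"\n"))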
[Figure: fitted texture maps (diffuse, specular, roughness, normal, tangent)]

7. Render results

We use the anisotropic GGX model to represent reflectance. The object can be rendered with path tracing using NVIDIA OptiX or OpenGL.
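For reference, a minimal sketch of the anisotropic GGX normal distribution term in its standard form (the actual shading code, sampling, and renderer integration depend on your OptiX/OpenGL setup):

    import numpy as np

    def ggx_anisotropic_ndf(h, n, t, b, ax, ay):
        """Anisotropic GGX normal distribution D(h).

        h, n, t, b : unit half-vector, normal, tangent, and bitangent (3-vectors)
        ax, ay     : roughness along the tangent and bitangent directions
        """
        ht, hb, hn = np.dot(h, t), np.dot(h, b), np.dot(h, n)
        denom = (ht / ax) ** 2 + (hb / ay) ** 2 + hn ** 2
        return 1.0 / (np.pi * ax * ay * denom ** 2)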

Reference & Third party tools

Shining3D. 2021. EinScan Pro 2X Plus Handheld Industrial Scanner. Retrieved January, 2021 from https://www.einscan.com/handheld-3d-scanner/2x-plus/

COLMAP: https://demuc.de/colmap/

Coherent Point Drift: https://ieeexplore.ieee.org/document/5432191

ACVD: https://github.com/valette/ACVD

Embree: https://www.embree.org/

OpenCV: https://opencv.org/
