TransMVSNet: Global Context-aware Multi-view Stereo Network with Transformers.

Overview
Comments
  • abs_depth_error

    I find that ABS_DEPTH_ERROR stays close to 6 or even 7 during training; is this normal? Here are the validation results after epoch 5. Is it just slow convergence? (See the sketch below for how these scalars are typically computed.)

    avg_test_scalars: {'loss': 4.360309665948113, 'depth_loss': 6.535046514014081, 'entropy_loss': 4.360309665948113, 'abs_depth_error': 6.899323051878795, 'thres2mm_error': 0.16829867261163733, 'thres4mm_error': 0.10954744909229193, 'thres8mm_error': 0.07844322964626443, 'thres14mm_error': 0.06323695212957076, 'thres20mm_error': 0.055751020700780536, 'thres2mm_abserror': 0.597563438798779, 'thres4mm_abserror': 2.7356186663791666, 'thres8mm_abserror': 5.608324628466483, 'thres14mm_abserror': 10.510002394554125, 'thres20mm_abserror': 16.67409769420184, 'thres>20mm_abserror': 78.15814284054947}

    opened by zhang-snowy 7
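    For context, a minimal sketch of how such scalars are commonly computed in MVSNet-style training code (hypothetical tensors `depth_est`, `depth_gt`, and a boolean `mask`; not necessarily this repository's exact implementation):

    ```python
    import torch

    def depth_metrics(depth_est, depth_gt, mask, thresholds=(2, 4, 8, 14, 20)):
        """Mean absolute depth error (mm) plus, for each threshold, the fraction
        of valid pixels whose absolute error exceeds that many millimetres."""
        err = (depth_est[mask] - depth_gt[mask]).abs()
        metrics = {"abs_depth_error": err.mean().item()}
        for t in thresholds:
            metrics[f"thres{t}mm_error"] = (err > t).float().mean().item()
        return metrics
    ```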
  • About the fusion setting in DTU

    Thank you for your great contribution. The script uses gipuma as the fusion method with num_consistent=5, prob_threshold=0.05, disp_threshold=0.25. However, it produces point clouds with only about half as many points as the ones you provide for DTU, leading to much poorer DTU results. Is there a wrong setting in the script, or is it because it does not use the dynamic fusion method described in the paper? Could you provide the dynamic fusion process for DTU? (See the sketch below.)

    opened by DIVE128 5
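    Regarding the thresholds above, a minimal sketch (assuming an MVSNet-style pipeline, not this repository's exact fusion code) of the photometric filtering that prob_threshold controls before gipuma applies disp_threshold and num_consistent in its geometric check:

    ```python
    import numpy as np

    def photometric_filter(depth, confidence, prob_threshold=0.05):
        """Zero out depth estimates whose confidence is below prob_threshold;
        gipuma then enforces disp_threshold / num_consistent across views."""
        filtered = depth.copy()
        filtered[confidence < prob_threshold] = 0.0
        return filtered
    ```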
  • Testing on TnT advanced dataset

    Hi, thank you for sharing this great work!

    I'm trying to test TransMVSNet on the TnT advanced dataset, but I've run into some problems. My test environment is Ubuntu 16.04 with CUDA 11.3 and PyTorch 1.10.

    The first thing is that there is no cams_1 folder in the TnT dataset. Is it a revised version of the original cams folder, or did you just rename it?

    I just renamed the folder and ran scripts/test_tnt.sh, but the speed is rather slow, about 10 seconds per image (1056 x 1920) on a 1080 Ti. Is that normal?

    Finally I get the fused point cloud, but it is meaningless. I checked the depth maps and confidence maps, and the data look very strange, apparently not right.

    Can you help me with these problems?

    opened by CanCanZeng 4
  • Some implement details about the paper

    Firstly, thanks for your paper; I'm looking forward to the open-sourced code.

    And I have some questions about your paper (hopefully you can reply, thanks in advance!):

    (1) In Section 4.2 you write: "The model is trained with Adam for 10 epochs with an initial learning rate of 0.001, which decays by a factor of 0.5 respectively after 6, 8, and 12 epochs." I'm confused about the epochs: the schedule has a milestone at 12 epochs although training runs for only 10. I also noticed that this training strategy differs from CasMVSNet's. Did you try the CasMVSNet training strategy, and what difference did it make?

    (2) In Table 4(b), focal loss (what value of \gamma?) outperforms CE loss by 0.06. However, from Table 4(e) and Table 6 we infer that the best model uses CE loss (FL with \gamma=0). My question is: did you keep the focal-loss \gamma unchanged in the ablation study of Table 4? If not, how does \gamma change? Could you elaborate? (A short sketch of the quoted schedule, and of FL reducing to CE at \gamma=0, follows below.)

    Really appreciate it!

    opened by JeffWang987 4
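    Not the authors' code, but the quoted schedule and the FL-to-CE relationship can be illustrated with a short PyTorch sketch (placeholder model; the milestone at epoch 12 simply never fires if training stops at 10):

    ```python
    import torch
    import torch.nn.functional as F

    model = torch.nn.Linear(8, 8)  # placeholder model
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    # halve the learning rate after epochs 6, 8 and 12, as quoted from Section 4.2
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[6, 8, 12], gamma=0.5)

    def focal_loss(logits, target, gamma=2.0):
        """Per-pixel focal loss over depth-hypothesis classification; with gamma=0
        the modulating factor (1 - p_t)**gamma equals 1, so it reduces to cross-entropy."""
        log_p = F.log_softmax(logits, dim=1)
        log_p_t = log_p.gather(1, target.unsqueeze(1)).squeeze(1)
        p_t = log_p_t.exp()
        return (-((1.0 - p_t) ** gamma) * log_p_t).mean()
    ```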
  • source code

    Hi @Lxiangyue, thank you for the nice paper.

    It's been over a month since the authors announced that the code would be available. May I know when the code will be released (or whether it will not be released)?

    opened by Ys-Jung77 3
  • Testing on my own dataset

    Hi, thanks for your interesting work. I tested your code on one of the DTU scans (Moda). As you can see from the following image, the results are quite good. [image]

    But I got a very bad result when I tried to test on one of my own datasets (see the following picture) using your pretrained model (model_dtu). My question is: do you think the object is too complicated and too different from the DTU dataset, so that this is all we can get from the pretrained model without retraining? Is it possible to improve the result by changing the input parameters? In general, would you please share your opinion about this result? [image]

    opened by AliKaramiFBK 1
  • generate dense 3D point cloud

    Thanks for your great work. I just ran a test on the DTU testing dataset and got the depth map for each view, but I'm a bit confused about how to generate the 3D point cloud using your code. Would you please let me know? Best. (A back-projection sketch follows below.)

    opened by AliKaramiFBK 1
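    The repository's test script includes a fusion step, but as a rough illustration of what turning a depth map into a point cloud involves, here is a minimal back-projection sketch (assumes the MVSNet convention of a 4x4 world-to-camera extrinsic; not the repo's exact code):

    ```python
    import numpy as np

    def depth_to_points(depth, K, extrinsic):
        """Back-project a depth map into world-space 3D points.
        K: 3x3 intrinsics; extrinsic: 4x4 world-to-camera matrix (assumed convention)."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        valid = depth > 0
        pix = np.stack([u[valid], v[valid], np.ones(valid.sum())])   # 3 x N homogeneous pixels
        cam = np.linalg.inv(K) @ pix * depth[valid]                  # 3 x N camera-space points
        c2w = np.linalg.inv(extrinsic)
        return (c2w[:3, :3] @ cam + c2w[:3, 3:4]).T                  # N x 3 world-space points
    ```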
  • GPU memory consumption

    Hi! Thanks for your excellent work! When I tested on the DTU dataset with the pretrained model, the GPU memory consumption was 4439 MB, but the paper reports 3778 MB. (See the measurement sketch below.)

    I do not know where the problem is.

    opened by JianfeiJ 0
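    One possible source of such a gap: nvidia-smi-style numbers include the CUDA context and PyTorch's caching allocator, whereas the framework-level peak allocation is usually smaller. A minimal sketch for checking the latter (assumes a CUDA device and an already loaded model):

    ```python
    import torch

    torch.cuda.reset_peak_memory_stats()
    with torch.no_grad():
        pass  # run one forward pass of the model here
    peak_mb = torch.cuda.max_memory_allocated() / 1024 ** 2
    print(f"peak allocated: {peak_mb:.0f} MB")
    ```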
  • Using my own data

    If I already have the intrinsic and extrinsic matrices of the cameras, which means I don't need to run SfM in COLMAP, how should I structure my data to train the model? (A sketch of the expected camera file follows below.)

    opened by PaperDollssss 2
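    As a rough guide (to be verified against this repository's dataset loader), DTU-format MVS pipelines generally expect per-view images, a pair.txt listing source views for each reference view, and one cam .txt per view in the MVSNet-style layout sketched below; the exact trailing depth line (min/interval vs. min/max) varies between variants:

    ```python
    import numpy as np

    def write_cam(path, K, extrinsic, depth_min, depth_interval):
        """Write one camera in an MVSNet-style cam.txt layout:
        4x4 world-to-camera extrinsic, 3x3 intrinsic, then the depth range line."""
        with open(path, "w") as f:
            f.write("extrinsic\n")
            for row in np.asarray(extrinsic).reshape(4, 4):
                f.write(" ".join(f"{v:.6f}" for v in row) + "\n")
            f.write("\nintrinsic\n")
            for row in np.asarray(K).reshape(3, 3):
                f.write(" ".join(f"{v:.6f}" for v in row) + "\n")
            f.write(f"\n{depth_min} {depth_interval}\n")
    ```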
  • TnT dataset results

    Thanks for the great job. I followed the instructions and uploaded the TnT reconstruction results, but the F-score is 60.29, and the point cloud sizes are larger than the uploaded ones. Should the reconstructed point clouds use the parameter settings in test_tnt.sh, or should they be tuned manually? :smile:

    opened by CC9310 1
  • TankAndTemple Test

    Hi, I tested the Family scene from the TnT (Tanks and Temples) dataset with the default script test_tnt.sh and normal fusion, but recently I only get a 13 MB point cloud file. On inspection, the _geo.png images in the generated mask folder are mostly black, so most of the resulting final.png mask is invalid. The geometric consistency thresholds are the defaults, 0.01 and 1. Do you see the same problem on your side? (A sketch of this consistency check follows below.)

    opened by lt-xiang 13
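    For reference, the 0.01 / 1 defaults mentioned above correspond to the usual MVSNet-style geometric check: a pixel is kept when its round-trip reprojection error is under 1 pixel and its relative depth difference is under 0.01 in enough source views. A minimal NumPy sketch (nearest-neighbour sampling, positive depths assumed; not this repo's exact code):

    ```python
    import numpy as np

    def geometric_mask(depth_ref, depth_src, K_ref, K_src, ref2src,
                       pix_thresh=1.0, depth_thresh=0.01):
        """Project reference pixels into the source view, read the source depth,
        reproject back, and keep pixels with small round-trip pixel and depth errors."""
        h, w = depth_ref.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1)

        # reference pixels -> reference camera -> source camera -> source pixels
        xyz_ref = np.linalg.inv(K_ref) @ (pix * depth_ref.ravel())
        xyz_src = ref2src[:3, :3] @ xyz_ref + ref2src[:3, 3:4]
        uv_src = K_src @ xyz_src
        uv_src = uv_src[:2] / uv_src[2:]

        # sample the source depth and reproject back into the reference view
        us = np.clip(np.round(uv_src[0]).astype(int), 0, w - 1)
        vs = np.clip(np.round(uv_src[1]).astype(int), 0, h - 1)
        d_src = depth_src[vs, us]
        xyz_src2 = np.linalg.inv(K_src) @ (np.vstack([uv_src, np.ones((1, h * w))]) * d_src)
        src2ref = np.linalg.inv(ref2src)
        xyz_ref2 = src2ref[:3, :3] @ xyz_src2 + src2ref[:3, 3:4]
        uv_ref2 = K_ref @ xyz_ref2
        uv_ref2 = (uv_ref2[:2] / uv_ref2[2:]).reshape(2, h, w)

        pix_err = np.hypot(uv_ref2[0] - u, uv_ref2[1] - v)
        depth_err = np.abs(xyz_ref2[2].reshape(h, w) - depth_ref) / depth_ref
        return (pix_err < pix_thresh) & (depth_err < depth_thresh)
    ```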
  • Why is there a big gap between the reproducing results and the paper results?

    I have tried the pre-trained model you offered on the DTU dataset, but the results I got are mean_acc=0.299, mean_comp=0.385, overall=0.342, while the results presented in the paper are mean_acc=0.321, mean_comp=0.289, overall=0.305. (A sketch of how these metrics relate follows below.)

    I do not know where the problem is.

    opened by cainsmile 14
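    For context on how acc / comp / overall relate: the official DTU evaluation is a MATLAB toolkit that also applies observability masks and an outlier cutoff, so the sketch below (hypothetical point arrays, 20 mm cutoff assumed) only approximates it, but the relationship overall = (acc + comp) / 2 holds:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def dtu_style_scores(pred_pts, gt_pts, max_dist=20.0):
        """Accuracy: mean distance from predicted points to the GT cloud.
        Completeness: mean distance from GT points to the predicted cloud.
        Overall: the average of the two. Distances above max_dist are discarded."""
        d_pred = cKDTree(gt_pts).query(pred_pts)[0]
        d_gt = cKDTree(pred_pts).query(gt_pts)[0]
        acc = d_pred[d_pred < max_dist].mean()
        comp = d_gt[d_gt < max_dist].mean()
        return acc, comp, (acc + comp) / 2
    ```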
Releases(T&T_ply)
Owner
Megvii Research 3D Group
Megvii (Face++) Research Institute 3D Group (formerly the SLAM Group)