[learn] HF-Net training
2022-04-23 06:38:00 【mightbxg】
HF-Net extracts a global image descriptor (global_descriptors) as well as keypoints in the image (keypoints) and their descriptors (local_descriptors). The former is used for image retrieval, while the latter, combined with a feature matching algorithm such as SuperGlue or nearest-neighbor (NN) matching, can be used for camera pose estimation. A typical application of HF-Net is therefore recovering the camera pose relative to a map in SLAM.
Although the official repository provides complete training scripts and detailed instructions, there are still some details that deserve attention, which I record here. A reminder up front: HF-Net training produces about 1.4 TB of intermediate data, so make sure the data disk has 1.5 TB to 2 TB of free space!
Material preparation
First, pull the source code from GitHub. I usually put it under the home directory, e.g. /home/abc/Sources/hfnet. When configuring hfnet you need to provide two paths: DATA_PATH and EXPER_PATH. The former stores the training images and pre-trained model weights, and the latter stores hfnet's training output. For training, the following things need to be downloaded:
- GoogleLandmarks dataset: download the picture index first, then download the images with the setup/scripts/download_google_landmarks.py script. The index file contains links to about 1098k images, but some of them are invalid. Randomly downloading 185k images (about 55 GB) is enough; the extra images will be discarded during training (see the configuration file hfnet_train_distill.yaml). A hedged download sketch is given after this list.
- BerkeleyDeepDrive dataset: go to the official website; after a simple registration, open the BDD100K tab on the download page and download 100K Images and Labels, about 5.4 GB in total.
- NetVLAD model parameters: download address.
- SuperPoint model parameters: download address.
- MobileNet V2 model parameters: find float_v2_0.75_224 on the download page and download it.
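For the Google Landmarks subset, the official setup/scripts/download_google_landmarks.py script should be preferred; the following is only a minimal, hypothetical sketch of the idea, assuming the index is a CSV with an image id and a URL per row (the file name, column names, and target count below are assumptions, not taken from the repository):

```python
# Hypothetical sketch: randomly sample and download a subset of the
# Google Landmarks index. The index is assumed to be a CSV with "id" and
# "url" columns; adjust to whatever your index file actually contains.
import csv
import os
import random
import requests

INDEX_CSV = "google_landmarks_index.csv"   # assumed file name
OUT_DIR = "google_landmarks/images"
NUM_IMAGES = 185000                        # ~185k images, as used in this post

os.makedirs(OUT_DIR, exist_ok=True)

with open(INDEX_CSV, newline="") as f:
    rows = list(csv.DictReader(f))

random.shuffle(rows)
downloaded = 0
for row in rows:
    if downloaded >= NUM_IMAGES:
        break
    try:
        r = requests.get(row["url"], timeout=10)
        r.raise_for_status()
    except requests.RequestException:
        continue  # many links in the index are dead; just skip them
    with open(os.path.join(OUT_DIR, row["id"] + ".jpg"), "wb") as img:
        img.write(r.content)
    downloaded += 1

print(f"downloaded {downloaded} images")
```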
The images of the night and dawn categories need to be picked out of the BDD100K dataset; I wrote a script to do this. Unzip images_100k and labels, download bdd_extract_images_for_hfnet.py, and organize the folders as follows:
bdd100k
├── bdd_extract_images_for_hfnet.py
├── images
│ └── 100k
└── labels
├── bdd100k_labels_images_train.json
└── bdd100k_labels_images_val.json
Then run bdd_extract_images_for_hfnet.py to obtain the two folders dawn_images_vga and night_images_vga; a rough sketch of the idea is shown below.
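The actual bdd_extract_images_for_hfnet.py is linked in the original post; as an illustration only, a minimal sketch might look like the following. It assumes each entry in the BDD100K label JSON has a per-image attributes["timeofday"] field (with values such as "night" and "dawn/dusk") and that the _vga suffix means downscaling to 640x360. Both points are assumptions; the real script may differ:

```python
# Minimal sketch of the night/dawn extraction step (not the author's actual
# script). Assumes each label entry has "name" and attributes["timeofday"],
# and that "_vga" means resizing to 640x360.
import json
import os
import cv2

BDD_ROOT = "bdd100k"
SPLITS = ["train", "val"]
TARGETS = {"night": "night_images_vga", "dawn/dusk": "dawn_images_vga"}
VGA_SIZE = (640, 360)  # assumed target resolution

for split in SPLITS:
    label_file = os.path.join(
        BDD_ROOT, "labels", f"bdd100k_labels_images_{split}.json")
    with open(label_file) as f:
        labels = json.load(f)
    for entry in labels:
        timeofday = entry.get("attributes", {}).get("timeofday")
        if timeofday not in TARGETS:
            continue
        src = os.path.join(BDD_ROOT, "images", "100k", split, entry["name"])
        dst_dir = os.path.join(BDD_ROOT, TARGETS[timeofday])
        os.makedirs(dst_dir, exist_ok=True)
        img = cv2.imread(src)
        if img is None:
            continue
        img = cv2.resize(img, VGA_SIZE, interpolation=cv2.INTER_AREA)
        cv2.imwrite(os.path.join(dst_dir, entry["name"]), img)
```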
Everything prepared above is placed in the DATA_PATH directory, so the final DATA_PATH looks like this (a small check script follows the tree):
HfNetDataset
├── bdd
│ ├── dawn_images_vga # Folder
│ └── night_images_vga # Folder
├── google_landmarks
│ └── images # Folder
└── weights
├── mobilenet_v2_0.75_224 # Folder
├── superpoint_v1.pth
└── vd16_pitts30k_conv5_3_vlad_preL2_intra_white # Folder
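As an optional sanity check (not part of hfnet), a few lines of Python can confirm that DATA_PATH matches the layout above; replace the placeholder path with your own:

```python
# Optional sanity check: verify that DATA_PATH contains the layout shown above.
import os

DATA_PATH = "/path/to/HfNetDataset"  # replace with your actual DATA_PATH
expected = [
    "bdd/dawn_images_vga",
    "bdd/night_images_vga",
    "google_landmarks/images",
    "weights/mobilenet_v2_0.75_224",
    "weights/superpoint_v1.pth",
    "weights/vd16_pitts30k_conv5_3_vlad_preL2_intra_white",
]
for rel in expected:
    full = os.path.join(DATA_PATH, rel)
    print(("OK   " if os.path.exists(full) else "MISS ") + full)
```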
Environment preparation
HF-Net uses TensorFlow 1.12; in my tests TF 1.15.5 also works. Create a docker container based on the image tensorflow/tensorflow:1.15.5-gpu:
docker run --gpus all --name "hfnet" -v /home/abc:/home/abc -it tensorflow/tensorflow:1.15.5-gpu /bin/bash
Only the home directory is mounted here, because the code and data I use are all under home. If your DATA_PATH is on another disk, it needs to be mounted as well (add another -v /host_dir:/docker_dir). As mentioned at the beginning of this article, DATA_PATH needs at least 1.5 TB of free space!
After entering the container, you need to install one package, libgl1-mesa-glx; for convenience, tmux and vim can also be installed.
Then configure the hfnet repository. Before doing so, edit the setup/requirements.txt file and delete the ==1.12 after tensorflow-gpu; otherwise the TF 1.15.5 already in the container will be downgraded. Run make install in the hfnet root directory to install and configure; the DATA_PATH and EXPER_PATH entered here must be absolute paths. A quick sanity check of the resulting environment is sketched below.
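Once make install has finished, a quick check inside the container confirms that TensorFlow was not downgraded and that the GPUs are visible (this uses the standard TF 1.x API and is not part of hfnet):

```python
# Quick check inside the container: TF version and GPU visibility (TF 1.x API).
import tensorflow as tf

print("TensorFlow version:", tf.__version__)         # should still be 1.15.5
print("GPU available:", tf.test.is_gpu_available())  # should print True
```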
Model training
Although all GPUs are enabled in the docker container, you may not be allowed to use every graphics card on the server. The CUDA_VISIBLE_DEVICES environment variable can be set in the terminal to control which GPUs are used, for example:
export CUDA_VISIBLE_DEVICES=1    # use only GPU 1
export CUDA_VISIBLE_DEVICES=1,2  # use only GPUs 1 and 2
Before the actual training, the predictions of the NetVLAD and SuperPoint models need to be exported first and used as labels for the dataset (HF-Net is trained by model distillation, i.e. the two large models NetVLAD and SuperPoint supervise the training of the small hfnet model). In the hfnet root directory, run the following commands one after another:
python3 hfnet/export_predictions.py \
hfnet/configs/netvlad_export_distill.yaml \
global_descriptors \
--keys global_descriptor \
--as_dataset
python3 hfnet/export_predictions.py \
hfnet/configs/superpoint_export_distill.yaml \
superpoint_predictions \
--keys local_descriptor_map,dense_scores \
--as_dataset
This step produces two folders under DATA_PATH, global_descriptors (4.6 GB) and superpoint_predictions (1.4 TB), and can take several hours. A quick way to inspect the exported files is sketched below.
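As far as I can tell, the exports are stored as one compressed numpy file per image, with keys matching the --keys arguments above; this is an assumption, so it is worth inspecting a file from your own export, for example:

```python
# Inspect one exported prediction file. The .npz-per-image format and the key
# names are assumptions based on the --keys arguments above; point this at an
# actual file produced by your export.
import numpy as np

sample = "/path/to/DATA_PATH/superpoint_predictions/some_image.npz"  # hypothetical path
with np.load(sample) as data:
    for key in data.files:
        print(key, data[key].shape, data[key].dtype)
```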
The dataset structure required for training is as follows (global_descriptors and superpoint_predictions are at the same level as the image folders):
├── bdd
│ ├── dawn_images_vga
│ ├── global_descriptors
│ ├── night_images_vga
│ └── superpoint_predictions
└── google_landmarks
├── global_descriptors
├── images
└── superpoint_predictions
However, the folder structure obtained above looks like this:
├── bdd
│ ├── dawn_images_vga
│ └── night_images_vga
├── global_descriptors
├── google_landmarks
│ └── images
└── superpoint_predictions
One way is to split the contents of the global_descriptors and superpoint_predictions folders by dataset and place each part in its own dataset folder. A simpler way is to create soft links to the global_descriptors and superpoint_predictions folders directly inside the bdd and google_landmarks folders: during training, the images are found first and the corresponding labels are then looked up by image name, so redundant labels under a dataset do not affect training. A sketch of the soft-link approach follows.
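A minimal sketch of the soft-link approach (equivalent to a couple of ln -s commands; replace the placeholder path with your actual DATA_PATH):

```python
# Create symlinks so that each dataset folder sees global_descriptors and
# superpoint_predictions at the expected level (equivalent to `ln -s`).
import os

DATA_PATH = "/path/to/HfNetDataset"  # replace with your actual DATA_PATH
for dataset in ("bdd", "google_landmarks"):
    for pred in ("global_descriptors", "superpoint_predictions"):
        link = os.path.join(DATA_PATH, dataset, pred)
        target = os.path.join(DATA_PATH, pred)
        if not os.path.exists(link):
            os.symlink(target, link)
            print("linked", link, "->", target)
```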
Then comes the training itself (several hours to more than ten hours):
python3 hfnet/train.py hfnet/configs/hfnet_train_distill.yaml hfnet
The training log and the trained model are saved in EXPER_PATH/hfnet/, and tensorboard can be used to monitor the training process:
tensorboard --logdir=$EXPER_PATH/hfnet --host=0.0.0.0
# open http://server_ip:port/ in a local browser to view
Finally, the model can be exported as a pb model (saved to EXPER_PATH/saved_models/hfnet/):
python3 hfnet/export_model.py hfnet/configs/hfnet_train_distill.yaml hfnet
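To try the exported model, it can be loaded with the standard TF 1.x SavedModel loader. The sketch below is an assumption-heavy illustration: the tensor names ("image", "global_descriptor", "keypoints", "local_descriptors") and the grayscale input layout are guesses and should be checked against hfnet's own demo code before use:

```python
# Hedged sketch: run the exported SavedModel with the TF 1.x loader.
# The tensor names and the input layout below are assumptions; inspect the
# graph (e.g. [op.name for op in graph.get_operations()]) if they don't match.
import cv2
import numpy as np
import tensorflow as tf

MODEL_DIR = "/path/to/EXPER_PATH/saved_models/hfnet"  # exported pb model
IMAGE_PATH = "test.jpg"                               # any test image

graph = tf.Graph()
with graph.as_default():
    sess = tf.Session()
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], MODEL_DIR)

# assumed input: single grayscale image, shape (H, W, 1)
image = cv2.imread(IMAGE_PATH, cv2.IMREAD_GRAYSCALE)
image = image[..., np.newaxis].astype(np.float32)

fetches = {name: graph.get_tensor_by_name(name + ":0")
           for name in ("global_descriptor", "keypoints", "local_descriptors")}
outputs = sess.run(fetches,
                   feed_dict={graph.get_tensor_by_name("image:0"): image})
for name, value in outputs.items():
    print(name, np.asarray(value).shape)
```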
Copyright notice
This article was created by [mightbxg]. When reposting, please include the original link. Thanks.
https://yzsam.com/2022/04/202204230546583333.html