OpenCV + CLion face recognition + face model training
2022-04-23 04:45:00 【Vivid_ Mm】
For compiling OpenCV on Windows and importing the project into CLion, see:
CLion+OpenCV ID card number recognition — xxwbwm's blog, CSDN Blog
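For reference, the CMakeLists.txt of such a CLion project might look like the sketch below. The project name facedetect and the source file name main.cpp are made up for illustration, and the OpenCV_DIR value is an assumption based on the F:\opencvWin\opencv\build location used later in this post; adjust both to your own setup.

cmake_minimum_required(VERSION 3.15)
project(facedetect)

set(CMAKE_CXX_STANDARD 11)

# Assumed location of the OpenCV build tree (contains OpenCVConfig.cmake); adjust to your path
set(OpenCV_DIR "F:/opencvWin/opencv/build")
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})

# main.cpp is assumed to hold the code shown below
add_executable(facedetect main.cpp)
target_link_libraries(facedetect ${OpenCV_LIBS})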
Code:
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

// Adapts a CascadeClassifier to the IDetector interface that DetectionBasedTracker expects
class CascadeDetectorAdapter : public DetectionBasedTracker::IDetector {
public:
    CascadeDetectorAdapter(cv::Ptr<cv::CascadeClassifier> detector) :
            IDetector(),
            Detector(detector) {
        CV_Assert(detector);
    }

    void detect(const cv::Mat &Image, std::vector<cv::Rect> &objects) {
        Detector->detectMultiScale(Image, objects, scaleFactor, minNeighbours, 0, minObjSize, maxObjSize);
    }

    ~CascadeDetectorAdapter() {
    }

private:
    CascadeDetectorAdapter();
    cv::Ptr<cv::CascadeClassifier> Detector;
};

DetectionBasedTracker *getTracker() {
    // Model bundled with OpenCV, located at F:\opencvWin\opencv\build\etc\lbpcascades
    String path = "F:\\opencvWin\\opencv\\build\\etc\\lbpcascades\\lbpcascade_frontalface.xml";
    // String path = "F:\\opencvWin\\facetrain\\samples\\data\\cascade.xml"; // self-trained model (could not recognize)
    Ptr<CascadeClassifier> classifier = makePtr<CascadeClassifier>(path);
    // Adapter for the main detector (scans the whole frame)
    Ptr<CascadeDetectorAdapter> mainDetector = makePtr<CascadeDetectorAdapter>(classifier);
    Ptr<CascadeClassifier> classifier1 = makePtr<CascadeClassifier>(path);
    // Adapter for the tracking detector (re-checks the regions being tracked)
    Ptr<CascadeDetectorAdapter> trackingDetector = makePtr<CascadeDetectorAdapter>(classifier1);
    // Build the tracker
    DetectionBasedTracker::Parameters DetectorParams;
    DetectionBasedTracker *tracker = new DetectionBasedTracker(mainDetector, trackingDetector, DetectorParams);
    return tracker;
}

int main() {
    DetectionBasedTracker *tracker = getTracker();
    // Start the tracker
    tracker->run();
    // Open the default camera
    VideoCapture capture(0);
    Mat Sourceimg;
    Mat gray;
    while (true) {
        capture >> Sourceimg;
        // Convert to grayscale
        cvtColor(Sourceimg, gray, COLOR_BGR2GRAY);
        // Enhance contrast (histogram equalization)
        equalizeHist(gray, gray);
        // Vector that receives the detected faces
        std::vector<Rect> faces;
        // Run detection/tracking on the grayscale image
        tracker->process(gray);
        // Fetch the results
        tracker->getObjects(faces);
        for (Rect face : faces) {
            // Skip rectangles that fall outside the image
            if (face.x < 0 || face.width < 0 || face.x + face.width > Sourceimg.cols ||
                face.y < 0 || face.height < 0 || face.y + face.height > Sourceimg.rows) {
                continue;
            }
            // Draw the bounding box on the original image
            rectangle(Sourceimg, face, Scalar(255, 0, 255));
#if 0
            // Make positive samples: save the current face crop as 24x24 grayscale jpgs
            int i = 0;
            while (true) {
                Mat m;
                // Copy the face region of Sourceimg into m
                Sourceimg(face).copyTo(m);
                // Resize the face to 24x24
                resize(m, m, Size(24, 24));
                // Convert to grayscale
                cvtColor(m, m, COLOR_BGR2GRAY);
                char p[200];
                sprintf(p, "F:/opencvWin/facetrain/samples/vivid/%d.jpg", i++);
                // Write the Mat to a jpg file
                imwrite(p, m);
                m.release();
                if (i == 100) {
                    break;
                }
            }
#endif
        }
        imshow("camera", Sourceimg);
        // waitKey(30): wait 30 ms for a key press; 27 == ESC, exit on ESC
        if (waitKey(30) == 27) {
            break;
        }
    }
    if (!Sourceimg.empty()) Sourceimg.release();
    if (!gray.empty()) gray.release();
    capture.release();
    tracker->stop();
    delete tracker;
    return 0;
}
The lbpcascade_frontalface.xml loaded above is a face model that ships with the OpenCV source. We can also train a face model of our own, which is what the rest of this post walks through. Enable the block guarded by #if 0 in the code above (repeated below): while a face is being recognized, it saves the face region as 24x24-pixel grayscale images, 100 of them in total. I stop at 100 because I only have 300 negative images (images that contain no face).
#if 0
// Make positive samples: save the current face crop as 24x24 grayscale jpgs
int i = 0;
while (true) {
    Mat m;
    // Copy the face region of Sourceimg into m
    Sourceimg(face).copyTo(m);
    // Resize the face to 24x24
    resize(m, m, Size(24, 24));
    // Convert to grayscale
    cvtColor(m, m, COLOR_BGR2GRAY);
    char p[200];
    sprintf(p, "F:/opencvWin/facetrain/samples/vivid/%d.jpg", i++);
    // Write the Mat to a jpg file
    imwrite(p, m);
    m.release();
    if (i == 100) {
        break;
    }
}
#endif
Positive samples:
Collect the positive sample information into a description file xxx.xxx (any file name, any extension); here I saved it as vivid.data.
Contents of vivid.data:
vivid/0.jpg 1 0 0 24 24
vivid/1.jpg 1 0 0 24 24
vivid/2.jpg 1 0 0 24 24
vivid/3.jpg 1 0 0 24 24
vivid/4.jpg 1 0 0 24 24
vivid/5.jpg 1 0 0 24 24
vivid/6.jpg 1 0 0 24 24
vivid/7.jpg 1 0 0 24 24
vivid/8.jpg 1 0 0 24 24
vivid/9.jpg 1 0 0 24 24
Meaning of the fields:
vivid/x.jpg — path of the sample image (the full path is F:\opencvWin\facetrain\samples\vivid\x.jpg)
1 — the image contains exactly one face
0 0 — x and y coordinates of the top-left corner of the face region
24 24 — width and height of the face region
If one image contains more than one face, e.g. two faces:
vivid/0.jpg 2 0 0 50 50 80 80 130 130
Here 0 0 50 50 is the bounding box (x y width height) of the first face, and 80 80 130 130 is that of the second.
The description file can be written by a small program, for example this Java snippet (compile with javac and run with java):
import java.io.*;

public class GeneateFile {
    public static void main(String[] args) throws Exception {
        FileOutputStream fos = new FileOutputStream("F:/opencvWin/facetrain/samples/vivid/vivid.data");
        for (int i = 0; i < 100; i++) {
            String content = String.format("vivid/%d.jpg 1 0 0 24 24\n", i);
            fos.write(content.getBytes());
        }
        fos.close();
    }
}
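Since the rest of the project is C++, the same description file can also be written from C++ if you prefer not to use Java. A minimal sketch, assuming the same output path and the 100 samples produced above:

#include <cstdio>

// Writes the positive-sample description file, one line per 24x24 face image.
// The path and the count of 100 match the steps above; adjust them to your own layout.
int main() {
    FILE *fp = fopen("F:/opencvWin/facetrain/samples/vivid/vivid.data", "w");
    if (!fp) {
        return 1;
    }
    for (int i = 0; i < 100; i++) {
        fprintf(fp, "vivid/%d.jpg 1 0 0 24 24\n", i);
    }
    fclose(fp);
    return 0;
}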
Convert vivid.data into a .vec sample file:
opencv_createsamples -info vivid.data -vec vivid.vec -num 100 -w 24 -h 24
-info: positive sample description file
-vec: output positive sample (vec) file
-num: number of positive samples
-w -h: width and height of the output samples
Result:
F:\opencvWin\facetrain\samples>opencv_createsamples -info vivid.data -vec vivid.vec -num 100 -w 24 -h 24
Info file name: vivid.data
Img file name: (NULL)
Vec file name: vivid.vec
BG file name: (NULL)
Num: 100
BG color: 0
BG threshold: 80
Invert: FALSE
Max intensity deviation: 40
Max x angle: 1.1
Max y angle: 1.1
Max z angle: 0.5
Show samples: FALSE
Width: 24
Height: 24
Max Scale: -1
Create training samples from images collection...
Done. Created 100 samples
F:\opencvWin\facetrain\samples>
The output above indicates that sample creation succeeded.
Modify the Java code that generated vivid.data to produce the negative-sample description file bg.data:
import java.io.*;

public class GeneateFileOpp {
    public static void main(String[] args) throws Exception {
        FileOutputStream fos = new FileOutputStream("F:/opencvWin/facetrain/samples/bg/bg.data");
        for (int i = 0; i < 300; i++) {
            String content = String.format("bg/%d.jpg\n", i);
            fos.write(content.getBytes());
        }
        fos.close();
    }
}
Contents of bg.data (partial): note that negative-sample lines contain no face information, only the image path.
bg/0.jpg
bg/1.jpg
bg/2.jpg
bg/3.jpg
bg/4.jpg
bg/5.jpg
bg/6.jpg
bg/7.jpg
bg/8.jpg
bg/9.jpg
bg/10.jpg
Training:
opencv_traincascade -data data -vec vivid.vec -bg bg.data -numPos 100 -numNeg 300 -numStages 15 -featureType LBP -w 24 -h 24
-data: target directory for the generated model; it must be created manually, and the name can be anything
-vec: positive sample (vec) file
-bg: negative sample description file
-numPos: number of positive samples used to train each stage
-numNeg: number of negative samples used to train each stage; this can be larger than the number of images listed in -bg
-numStages: number of cascade stages to train; more stages give a classifier with a lower error rate but slower detection (typically 15-20)
-featureType: LBP
-w -h: sample width and height (must match the values passed to opencv_createsamples)
Output on success:
Training until now has taken 0 days 0 hours 0 minutes 18 seconds.
===== TRAINING 7-stage =====
<BEGIN
POS count : consumed 100 : 100
NEG count : acceptanceRatio 0 : 0
Required leaf false alarm rate achieved. Branch training terminated.
At this point the data directory contains the generated model files: the intermediate stage XML files, params.xml, and the final cascade.xml.
To check whether the model is valid, change the model loading path in getTracker() above so that it points at the trained cascade.xml.
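A minimal sketch of the change, assuming the training output directory is the data folder under F:\opencvWin\facetrain\samples (the same path as the commented-out line in the code above); only the model path changes, the rest of getTracker() stays the same:

DetectionBasedTracker *getTracker() {
    // Load the self-trained model instead of the one bundled with OpenCV
    String path = "F:\\opencvWin\\facetrain\\samples\\data\\cascade.xml";
    Ptr<CascadeClassifier> classifier = makePtr<CascadeClassifier>(path);
    Ptr<CascadeDetectorAdapter> mainDetector = makePtr<CascadeDetectorAdapter>(classifier);
    Ptr<CascadeClassifier> classifier1 = makePtr<CascadeClassifier>(path);
    Ptr<CascadeDetectorAdapter> trackingDetector = makePtr<CascadeDetectorAdapter>(classifier1);
    DetectionBasedTracker::Parameters DetectorParams;
    return new DetectionBasedTracker(mainDetector, trackingDetector, DetectorParams);
}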
Copyright notice
This article was written by [Vivid_ Mm]. Please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/04/202204220558084086.html