Target tracking with CamShift
2022-08-10 12:32:00 【Raine_Yang】
Note: the object-tracking classes are not included in the opencv-python package; they ship with the extended opencv-contrib-python package.
Installation:
pip install opencv-contrib-python==4.6.0.66 -i https://pypi.tuna.tsinghua.edu.cn/simple/
CamShift converts the image into a color probability distribution map using the color histogram model of the target. It initializes a search window with a given size and location, then adaptively adjusts the window's position and size according to the result from the previous frame, thereby locating the center of the target in the current image.
Implementation steps:
1 Initialize the search window.
2 Back projection: convert the image to the HSV model, which is less affected by lighting. Build a histogram of the H component that represents the probability of each H value occurring, giving a color probability lookup table. Replacing each pixel value of the image with its color occurrence probability yields the color probability distribution map.
3 Run the meanshift algorithm to obtain the new size and position of the search window. Meanshift iteratively seeks the probability extremum to locate the target.
4 Repeat steps 2 and 3 to track the target from frame to frame.
The tracker superclass:
Concrete image trackers inherit from this superclass. It implements dragging the mouse to select the region to track, displays the image, and shows the CPS and RES values.
import cv2
import numpy as np
import time


class TrackerBase(object):
    def __init__(self, window_name):
        self.window_name = window_name
        self.frame = None
        self.frame_width = None
        self.frame_height = None
        self.frame_size = None
        self.drag_start = None
        self.selection = None
        self.track_box = None
        self.detect_box = None
        self.display_box = None
        self.marker_image = None
        self.processed_image = None
        self.display_image = None
        self.target_center_x = None

    def onMouse(self, event, x, y, flags, params):
        if self.frame is None:
            return
        if event == cv2.EVENT_LBUTTONDOWN and not self.drag_start:
            self.track_box = None
            self.detect_box = None
            self.drag_start = (x, y)
        if event == cv2.EVENT_LBUTTONUP:
            self.drag_start = None
            self.detect_box = self.selection
        if self.drag_start:
            # Clamp the drag rectangle to the frame boundaries
            xmin = max(0, min(x, self.drag_start[0]))
            ymin = max(0, min(y, self.drag_start[1]))
            xmax = min(self.frame_width, max(x, self.drag_start[0]))
            ymax = min(self.frame_height, max(y, self.drag_start[1]))
            self.selection = (xmin, ymin, xmax - xmin, ymax - ymin)

    def display_selection(self):
        if self.drag_start and self.is_rect_nonzero(self.selection):
            x, y, w, h = self.selection
            cv2.rectangle(self.marker_image, (x, y), (x + w, y + h), (0, 255, 255), 2)

    def is_rect_nonzero(self, rect):
        try:
            (_, _, w, h) = rect                  # plain (x, y, w, h) rectangle
            return (w > 0) and (h > 0)
        except (TypeError, ValueError):
            try:
                ((_, _), (w, h), a) = rect       # rotated rect from CamShift
                return (w > 0) and (h > 0)
            except (TypeError, ValueError):
                return False

    def rgb_image_callback(self, data):
        frame = data
        if self.frame is None:
            # First frame: record the dimensions and register the mouse callback
            self.frame = frame.copy()
            self.marker_image = np.zeros_like(frame)
            self.frame_size = (frame.shape[1], frame.shape[0])
            self.frame_width, self.frame_height = self.frame_size
            cv2.imshow(self.window_name, self.frame)
            cv2.setMouseCallback(self.window_name, self.onMouse)
            cv2.waitKey(3)
        else:
            self.frame = frame.copy()
            self.marker_image = np.zeros_like(frame)
            processed_image = self.process_image(frame)
            self.processed_image = processed_image.copy()
            self.display_selection()
            self.display_image = cv2.bitwise_or(self.processed_image, self.marker_image)
            if self.track_box is not None and self.is_rect_nonzero(self.track_box):
                tx, ty, tw, th = self.track_box
                cv2.rectangle(self.display_image, (tx, ty), (tx + tw, ty + th), (0, 0, 0), 2)
            elif self.detect_box is not None and self.is_rect_nonzero(self.detect_box):
                dx, dy, dw, dh = self.detect_box
                cv2.rectangle(self.display_image, (dx, dy), (dx + dw, dy + dh), (255, 50, 50), 2)
            cv2.imshow(self.window_name, self.display_image)
            cv2.waitKey(3)

    def process_image(self, frame):
        # Subclasses override this to implement the actual tracking
        return frame


if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    trackerbase = TrackerBase('base')
    while True:
        ret, frame = cap.read()
        h, w = frame.shape[0:2]
        small_frame = cv2.resize(frame, (w // 2, h // 2))
        trackerbase.rgb_image_callback(small_frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
1 def onMouse(self, event, x, y, flags, params):
onMouse is the default callback for mouse events; it is registered later via cv2.setMouseCallback(self.window_name, self.onMouse).
Parameters:
event: the mouse action that occurred
x, y: the x and y coordinates of the mouse when the event fired
flags: a cv2.EVENT_FLAG_* (MouseEventFlags) value
param: a user-defined argument passed through to the callback by setMouseCallback
On a left-button press, x and y are stored in drag_start as the initial mouse position. As the mouse moves, each new x and y is passed into the callback and selection is updated. When the left button is released, the selected rectangle is saved as detect_box.
For more on mouse callback functions, see: http://t.csdn.cn/DuSMv
2 def display_selection(self):
Draws a yellow rectangle outlining the currently selected area.
3 def is_rect_nonzero(self, rect):
Determines whether the rectangle has nonzero size, handling both plain (x, y, w, h) rectangles and the rotated rectangles returned by CamShift.
4 def rgb_image_callback(self, data):
The core function; it does the following:
1 If there is no frame yet, copy data (the incoming image), record parameters matching the incoming image, and display it. The mouse callback is registered here to capture the selection rectangle.
2 Once a frame exists, run the image-processing routine self.process_image (overridden in subclasses) and draw the rectangles: track_box while tracking, otherwise detect_box.
5 def process_image(self, frame):
This currently just returns the original image; subclasses should override it to implement the core tracking function.
CamShift implementation:
import cv2
import numpy as np
from tracker_base import TrackerBase


class Camshift(TrackerBase):
    def __init__(self, window_name):
        super(Camshift, self).__init__(window_name)
        self.detect_box = None
        self.track_box = None

    def process_image(self, frame):
        try:
            if self.detect_box is None:
                return frame
            src = frame.copy()
            if self.track_box is None or not self.is_rect_nonzero(self.track_box):
                # Initialization: seed the tracker with the selected region
                self.track_box = self.detect_box
                x, y, w, h = self.track_box
                self.roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
                roi_hist = cv2.calcHist([self.roi], [0], None, [16], [0, 180])
                self.roi_hist = cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
                self.term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
            else:
                # Tracking: back-project the hue histogram and run CamShift
                hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
                back_project = cv2.calcBackProject([hsv], [0], self.roi_hist, [0, 180], 1)
                ret, self.track_box = cv2.CamShift(back_project, self.track_box, self.term_crit)
                pts = cv2.boxPoints(ret)
                pts = np.int32(pts)
                cv2.polylines(frame, [pts], True, 255, 1)
        except Exception:
            pass
        return frame


if __name__ == '__main__':
    cap = cv2.VideoCapture(0)
    camshift = Camshift('camshift')
    while True:
        ret, frame = cap.read()
        h, w = frame.shape[0:2]
        small_frame = cv2.resize(frame, (w // 2, h // 2))
        camshift.rgb_image_callback(small_frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
1
    try:
        if self.detect_box is None:
            return frame
If no tracking area has been selected yet, return the original image unchanged.
2
    if self.track_box is None or not self.is_rect_nonzero(self.track_box):
        self.track_box = self.detect_box
        x, y, w, h = self.track_box
        self.roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        roi_hist = cv2.calcHist([self.roi], [0], None, [16], [0, 180])
        self.roi_hist = cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
        self.term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
When track_box does not exist, reset it to detect_box (the user-selected tracking area) and compute the color probability histogram of that region. This branch performs the initialization.
3
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_project = cv2.calcBackProject([hsv], [0], self.roi_hist, [0, 180], 1)
    ret, self.track_box = cv2.CamShift(back_project, self.track_box, self.term_crit)
    pts = cv2.boxPoints(ret)
    pts = np.int32(pts)
    cv2.polylines(frame, [pts], True, 255, 1)
Calls the cv2.CamShift library function to locate the target in the back-projected probability image and draws the resulting rotated rectangle on the frame.