2D Code Detection and Localization

  • 2D code recognition consists of two steps:
    1. Detect, via image features, the positions of the code's four corners on the image plane
    2. Given those pixel positions, the camera intrinsics, and the distortion parameters, solve PnP (Perspective-n-Point) to obtain the code's position and orientation in 3D space
  • Recovering the code's position in 3D space requires knowing the code's physical size, or combining the detection with a depth camera to look up the positions of the corresponding points; a minimal PnP sketch follows this list
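  • A minimal sketch of step 2, assuming a 100 mm square code; the intrinsics and the corner pixels below are made-up placeholders (step 1 is represented by hard-coded corners):
import cv2
import numpy as np
 
## Placeholder intrinsics (fx = fy = 600, cx = 320, cy = 240) and zero distortion
k = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
distort = np.array([0.0]*5)
 
## Step 1 would produce the four corner pixels (top-left, top-right, bottom-right, bottom-left)
corners = np.array([[612., 35.], [697., 34.], [699., 121.], [614., 123.]], dtype=np.float32)
 
## Physical corner coordinates of the 100 mm code, origin at its center, unit: mm
m = 100/2
obj_points = np.array([[-m, m, 0], [m, m, 0], [m, -m, 0], [-m, -m, 0]], dtype=np.float32)
 
valid, rvec, tvec = cv2.solvePnP(obj_points, corners, k, distort)
if valid:
    rot = cv2.Rodrigues(rvec)[0]  ## 3x3 rotation matrix
    print(rot, tvec)              ## tvec is in mm, same unit as obj_points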

ArUco Markers

Preparation

  • Older versions of opencv-python do not ship the aruco module: install opencv-contrib-python (e.g. pip install opencv-contrib-python; uninstalling the existing opencv-python does not seem to be necessary)

  • Note that the functions differ between opencv-contrib-python==4.6 and opencv-contrib-python==4.7; each entry below maps the v4.6 form → the v4.7 form (untested)

    • cv2.aruco.GridBoard_create → cv2.aruco.GridBoard
    • arucoDict = cv2.aruco.Dictionary_get(aruco.DICT_4X4_50) → cv2.aruco.getPredefinedDictionary
    • cv2.aruco.GridBoard.draw → cv2.aruco.Board.generateImage
    • arucoParams = cv2.aruco.DetectorParameters_create() → cv2.aruco.DetectorParameters
    • cv2.aruco.detectMarkers(img, arucoDict, parameters=arucoParams) → first instantiate detector = cv2.aruco.ArucoDetector(arucoDict, arucoParams), then call corners, ids, rejected = detector.detectMarkers(image)
  • If the camera data also includes depth, then once the code has been located on the image plane, the corresponding 3D position can be read directly from the depth map; see the sketch below
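  • A minimal sketch of that depth lookup (pixel_to_3d() is an illustrative helper, not an OpenCV function; it assumes the depth image is aligned with the color image and shares its intrinsics):
import numpy as np
 
def pixel_to_3d(u, v, depth_img, k):
    ## Back-project an image-plane point to 3D using the depth map (pinhole model).
    ## k: camera intrinsics, np.array([fx, 0, cx, 0, fy, cy, 0, 0, 1])
    fx, cx, fy, cy = k[0], k[2], k[4], k[5]
    z = float(depth_img[int(v), int(u)])  ## depth at pixel (u, v), e.g. in mm
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])            ## same unit as the depth image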

Detection

  • Code
import cv2
from cv2 import aruco
 
def get_cv_version():
    v_str = [str(i) for i in str(cv2.__version__).split(".")]
    return int(v_str[0])*100 + int(v_str[1])
 
if get_cv_version() <= 406:
    ## for opencv-contrib-python <= 4.6.0.66
    arucoDict = aruco.Dictionary_get(aruco.DICT_4X4_50)
    arucoParams = aruco.DetectorParameters_create()
    corners, ids, rejected = aruco.detectMarkers(img, arucoDict, parameters=arucoParams)
else:
    ## for opencv-python >= 4.8 (and maybe 4.7?)
    ## use the new API.
    arucoDict = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)
    arucoParams = aruco.DetectorParameters()
    detector = aruco.ArucoDetector(arucoDict, arucoParams)
    corners, ids, rejected = detector.detectMarkers(img)
  • Return values of detectMarkers(image):
    • corners: a tuple with one element per detected marker. For example, one returned corners value is (np.array([[[612., 35.],[697., 34.],[699., 121.],[614.,123.]]]).astype("float32"),) (note the trailing ,). Each element is an np.ndarray of shape [1,4,2], so the four corner coordinates of the i-th marker are obtained with corners[i][0], which is a 4x2 matrix; see the sketch below
    • ids: an np.ndarray of shape (N, 1), where N is the number of detected markers; it can be flattened to a list via ids.ravel().tolist()
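  • For example, this structure can be traversed as follows (a sketch continuing from the detection code above, using the corners and ids it returned):
if ids is not None:
    for marker_corners, marker_id in zip(corners, ids.ravel().tolist()):
        pts = marker_corners[0]        ## 4x2 matrix of (x, y) corner coordinates
        center = pts.mean(axis=0)      ## marker centroid in pixels
        print(marker_id, pts, center)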

Determining the 3D Pose

  • Method 1: use cv2.solvePnP()
    • The physical size of the code is given by objPoints, in mm
    • The camera intrinsics k are an np.array, and the distortion distort is an np.array (e.g. np.array([0.0]*5) for zero distortion)
    • The returned rvec can be converted to a rotation matrix via cv2.Rodrigues(rvec)[0]
    • The returned tvec is a matrix; individual components are read with tvec[i,0]
import copy
import cv2
import numpy as np
from cv2 import aruco
 
def get_cv_version():
    v_str = [str(i) for i in str(cv2.__version__).split(".")]
    return int(v_str[0])*100 + int(v_str[1])
 
def find_aruco(img, k, distort, marker_size=100, id=1):
    ## k: camera intrinsic, np.array([fx, 0, cx, 0, fy, cy, 0, 0, 1])
    # print(get_cv_version())
    
    if get_cv_version() <= 406:
        ## for opencv-contrib-python <= 4.6.0.66
        arucoDict = aruco.Dictionary_get(aruco.DICT_4X4_50)
        arucoParams = aruco.DetectorParameters_create()
        corners, ids, rejected = aruco.detectMarkers(img, arucoDict, parameters=arucoParams)
    else:
        ## for opencv-python >= 4.8 (and maybe 4.7?)
        ## use the new API.
        arucoDict = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)
        arucoParams = aruco.DetectorParameters()
        detector = aruco.ArucoDetector(arucoDict, arucoParams)
        corners, ids, rejected = detector.detectMarkers(img)
 
    img_w_frame = copy.deepcopy(img)
    ht = None
    centroid = None
    if ids is not None:
        ids = ids.ravel().tolist()
        if id in ids:
            i = ids.index(id)
            ## Get the center as the coordinate frame center
            ## z-axis pointing out from the marker
            ## x-axis pointing up
            m = marker_size/2
            objPoints = np.array([[-m,m,0], [m,m,0], [m,-m,0], [-m,-m,0]], dtype = np.float32).reshape((4,1,3))
            valid, rvec, tvec = cv2.solvePnP(objPoints, corners[i], np.reshape(k, (3,3)), np.array(distort))
            if valid:
                ht = np.identity(4)
                rot = cv2.Rodrigues(rvec)[0]
                ht[:3, 3] = (tvec/1000.0).reshape(3) ## unit: meter
                ht[:3, :3] = rot
 
                centroid = (int(np.mean(corners[i][0][:,0])), int(np.mean(corners[i][0][:,1])))
                img_w_frame = cv2.circle(img_w_frame, centroid, radius = 10, color = (255, 255, 255), thickness=-1)
 
                aruco.drawDetectedMarkers(img_w_frame, corners)
                cv2.drawFrameAxes(img_w_frame, np.reshape(k, (3,3)), np.array(distort), rvec, tvec, 1)
 
    return img_w_frame, ht, centroid
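  • A usage sketch for find_aruco() (the intrinsics and the image file name are placeholders; substitute your calibrated values and your own image):
import cv2
import numpy as np
 
## Placeholder intrinsics [fx, 0, cx, 0, fy, cy, 0, 0, 1] and zero distortion
k = np.array([600.0, 0.0, 320.0, 0.0, 600.0, 240.0, 0.0, 0.0, 1.0])
distort = np.array([0.0]*5)
 
img = cv2.imread("aruco_sample.png")  ## hypothetical image containing a DICT_4X4_50 marker
img_w_frame, ht, centroid = find_aruco(img, k, distort, marker_size=100, id=1)
if ht is not None:
    print("marker pose in the camera frame (translation in meters):")
    print(ht)
cv2.imshow("aruco", img_w_frame)
cv2.waitKey(0)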
  • Method 2 (obsolete): use the built-in helper cv2.aruco.estimatePoseSingleMarkers(). It has been removed in newer versions of OpenCV (see the OpenCV documentation) and should be replaced with Method 1 (cv2.solvePnP())
    # Estimate pose of each marker and return the values rvec and tvec---(different from those of camera coefficients)
    rvec, tvec, markerPoints = cv2.aruco.estimatePoseSingleMarkers(corners[i], 0.02, matrix_coefficients,
                                                               distortion_coefficients)
    # Draw a square around the markers
    cv2.aruco.drawDetectedMarkers(frame, corners) 
     
    # Draw Axis
    cv2.aruco.drawAxis(frame, matrix_coefficients, distortion_coefficients, rvec, tvec, 0.01)  

QR Codes

  • Reference
  • First obtain the camera parameters: the intrinsics k as an np.array, and the distortion distort as an np.array (e.g. np.array([0.0]*5))
  • Example (tested; for get_qr_coords() see the section From the Image Plane to 3D Space below)
import copy
import cv2
 
def find_qr(img, k, distort):
    qr = cv2.QRCodeDetector()
    ret_qr, points = qr.detect(img)
    img_w_frame = copy.deepcopy(img)
    if ret_qr:
        axis_points, rvec, tvec = get_qr_coords(k, distort, points)
 
        #BGR color format
        colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (0,0,0)]
 
        #check axes points are projected to camera view.
        if len(axis_points) > 0:
            axis_points = axis_points.reshape((4,2))
 
            origin = (int(axis_points[0][0]),int(axis_points[0][1]) )
 
            for p, c in zip(axis_points[1:], colors[:3]):
                p = (int(p[0]), int(p[1]))
 
                #Sometimes qr detector will make a mistake and projected point will overflow integer value. We skip these cases. 
                if origin[0] > 5*img.shape[1] or origin[1] > 5*img.shape[1]:break
                if p[0] > 5*img.shape[1] or p[1] > 5*img.shape[1]:break
 
                cv2.line(img_w_frame, origin, p, c, 5)
 
    return img_w_frame
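  • A usage sketch with a webcam stream (the intrinsics are placeholders; get_qr_coords(), called inside find_qr(), is given in the next section):
import cv2
import numpy as np
 
k = np.array([600.0, 0.0, 320.0, 0.0, 600.0, 240.0, 0.0, 0.0, 1.0])  ## placeholder intrinsics
distort = np.array([0.0]*5)
 
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("qr", find_qr(frame, k, distort))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()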

From the Image Plane to 3D Space

  • rvec: a Rodrigues rotation vector; cv2.Rodrigues(rvec) converts it, with cv2.Rodrigues(rvec)[0] being the rotation matrix and cv2.Rodrigues(rvec)[1] the Jacobian
  • tvec: the translation vector
  • Reference code
import cv2
import numpy as np
 
def get_qr_coords(k, distort, points):
    ## from https://github.com/TemugeB/QR_code_orientation_OpenCV/blob/main/run_qr.py
    #Selected coordinate points for each corner of QR code.
    qr_edges = np.array([[0,0,0],
                         [0,1,0],
                         [1,1,0],
                         [1,0,0]], dtype = 'float32').reshape((4,1,3))
 
    #determine the orientation of the QR code coordinate system with respect to the camera coordinate system.
    ret, rvec, tvec = cv2.solvePnP(qr_edges, points, np.reshape(k, (3,3)), np.array(distort))
 
    #Define unit xyz axes. These are then projected to camera view using the rotation matrix and translation vector.
    unitv_points = np.array([[0,0,0], [1,0,0], [0,1,0], [0,0,1]], dtype = 'float32').reshape((4,1,3))
    if ret:
        points, jac = cv2.projectPoints(unitv_points, rvec, tvec, np.reshape(k, (3,3)), np.array(distort))
        return points, rvec, tvec
 
    #return empty arrays if rotation and translation values not found
    else: return [], [], []
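  • Note that qr_edges above is a unit square, so the returned tvec is expressed in units of the QR code's side length. A sketch of converting the pose to metric units (qr_pose_metric() is an illustrative helper; side_len_mm is whatever the printed code actually measures):
import cv2
import numpy as np
 
def qr_pose_metric(rvec, tvec, side_len_mm):
    ## Scale the unit-square translation to mm and assemble a 4x4 homogeneous transform
    ht = np.identity(4)
    ht[:3, :3] = cv2.Rodrigues(rvec)[0]
    ht[:3, 3] = np.asarray(tvec).reshape(3) * side_len_mm
    return ht  ## QR code pose in the camera frame, translation in mm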