Feature matching is a key task in computer vision for finding correspondences between features of two images. OpenCV provides robust tools for feature detection, description, and matching.
This tutorial covers feature matching with different algorithms and walks through practical examples.
What You’ll Learn
- How feature detection, description, and matching fit together
- Brute-force matching with ORB and SIFT descriptors
- FLANN-based matching and Lowe's ratio test
- Homography estimation for image alignment
- Practical examples: real-time matching and keypoint visualization
1. Introduction to Feature Matching
Feature matching involves:
- Feature Detection: Identifying keypoints in images.
- Feature Description: Computing descriptors for these keypoints.
- Matching: Finding correspondences between descriptors from two images.
OpenCV provides:
- Brute-Force Matcher (cv2.BFMatcher)
- FLANN Matcher (cv2.FlannBasedMatcher)
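To make the difference concrete, here is a minimal sketch of how each matcher is typically constructed; the norm types and index parameters shown are common defaults, not the only valid choices.

import cv2

# Brute-force matcher: Hamming distance suits binary descriptors (e.g. ORB),
# L2 distance suits floating-point descriptors (e.g. SIFT)
bf_orb = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
bf_sift = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

# FLANN matcher: a KD-tree index is the usual choice for float descriptors
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)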
2. Brute-Force Matching with ORB Features
ORB (Oriented FAST and Rotated BRIEF) is a fast and efficient feature detector/descriptor suitable for real-time applications.
Example: Brute-Force Matching with ORB
import cv2

# Load two images
image1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)
image2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)

# Initialize ORB detector
orb = cv2.ORB_create()

# Detect keypoints and compute descriptors
kp1, des1 = orb.detectAndCompute(image1, None)
kp2, des2 = orb.detectAndCompute(image2, None)

# Brute-Force Matcher with Hamming distance (ORB descriptors are binary)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Match descriptors
matches = bf.match(des1, des2)

# Sort matches by distance (lower is better)
matches = sorted(matches, key=lambda x: x.distance)

# Draw the 10 best matches
result = cv2.drawMatches(image1, kp1, image2, kp2, matches[:10], None,
                         flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

# Display result
cv2.imshow("ORB Feature Matching", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
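Each element returned by bf.match is a cv2.DMatch; a quick way to sanity-check match quality is to print its fields. The snippet below is a small sketch that reuses kp1, kp2, and matches from the example above.

# distance is the descriptor distance; queryIdx and trainIdx index into kp1 and kp2
for m in matches[:5]:
    x1, y1 = kp1[m.queryIdx].pt
    x2, y2 = kp2[m.trainIdx].pt
    print(f"distance={m.distance:.1f}  ({x1:.0f}, {y1:.0f}) -> ({x2:.0f}, {y2:.0f})")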
3. FLANN-Based Matching
FLANN (Fast Library for Approximate Nearest Neighbors) provides fast approximate nearest-neighbor search, making it well suited to matching large numbers of descriptors.
Example: FLANN Matching with SIFT
import cv2

# Load two images
image1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)
image2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)

# Initialize SIFT detector
sift = cv2.SIFT_create()

# Detect keypoints and compute descriptors
kp1, des1 = sift.detectAndCompute(image1, None)
kp2, des2 = sift.detectAndCompute(image2, None)

# Define FLANN matcher parameters (KD-tree index for float descriptors)
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

# Match descriptors (two nearest neighbours per descriptor)
matches = flann.knnMatch(des1, des2, k=2)

# Apply Lowe's ratio test
good_matches = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good_matches.append(m)

# Draw matches
result = cv2.drawMatches(image1, kp1, image2, kp2, good_matches, None,
                         flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

# Display result
cv2.imshow("FLANN Feature Matching", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
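Note that the KD-tree index above is intended for floating-point descriptors such as SIFT. For binary descriptors like ORB, FLANN is usually configured with an LSH index instead; the sketch below uses the parameter values commonly suggested in the OpenCV documentation as a starting point.

import cv2

image1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)
image2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(image1, None)
kp2, des2 = orb.detectAndCompute(image2, None)

# LSH index for binary descriptors (FLANN_INDEX_LSH = 6)
index_params = dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

matches = flann.knnMatch(des1, des2, k=2)

# Ratio test; with LSH some entries may contain fewer than two neighbours
good_matches = [m[0] for m in matches if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]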
4. Using SIFT Features
SIFT (Scale-Invariant Feature Transform) is a robust algorithm for detecting and describing features invariant to scale and rotation.
Example: Brute-Force Matching with SIFT
import cv2

# Load two images
image1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)
image2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)

# Initialize SIFT detector
sift = cv2.SIFT_create()

# Detect keypoints and compute descriptors
kp1, des1 = sift.detectAndCompute(image1, None)
kp2, des2 = sift.detectAndCompute(image2, None)

# Brute-Force Matcher with L2 distance (SIFT descriptors are floating point)
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

# Match descriptors
matches = bf.match(des1, des2)

# Sort matches by distance (lower is better)
matches = sorted(matches, key=lambda x: x.distance)

# Draw the 10 best matches
result = cv2.drawMatches(image1, kp1, image2, kp2, matches[:10], None,
                         flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

# Display result
cv2.imshow("SIFT Feature Matching", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
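Cross-checking is one way to filter matches; Lowe's ratio test is another, and it works with the brute-force matcher as well as with FLANN. The sketch below reuses the keypoints and descriptors computed above (cross-checking must be disabled when requesting two nearest neighbours).

# knnMatch returns the two nearest neighbours for each descriptor
bf = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = bf.knnMatch(des1, des2, k=2)

# Keep matches whose best neighbour is clearly closer than the second best
good = [[m] for m, n in knn_matches if m.distance < 0.7 * n.distance]

# drawMatchesKnn expects a list of lists of matches
result = cv2.drawMatchesKnn(image1, kp1, image2, kp2, good, None,
                            flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)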
5. Homography Estimation
After feature matching, you can compute the homography matrix to align two images or perform perspective transformation.
Example: Homography with SIFT Features
import cv2
import numpy as np

# Load two images
image1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)
image2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)

# Initialize SIFT detector
sift = cv2.SIFT_create()

# Detect keypoints and compute descriptors
kp1, des1 = sift.detectAndCompute(image1, None)
kp2, des2 = sift.detectAndCompute(image2, None)

# Brute-Force Matcher (no cross-check, since knnMatch is used)
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.knnMatch(des1, des2, k=2)

# Apply Lowe's ratio test
good_matches = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good_matches.append(m)

# Extract matched point coordinates
src_pts = np.float32([kp1[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
dst_pts = np.float32([kp2[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)

# Find homography with RANSAC (reprojection threshold of 5 pixels)
H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

# Warp image1 into image2's perspective
height, width = image2.shape
aligned_image = cv2.warpPerspective(image1, H, (width, height))

# Display result
cv2.imshow("Aligned Image", aligned_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
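A common way to visualize the estimated homography is to project the corners of the first image into the second image's frame with cv2.perspectiveTransform. The sketch below assumes H was found successfully in the example above.

# Project image1's corners into image2 and draw the resulting outline
h1, w1 = image1.shape
corners = np.float32([[0, 0], [w1, 0], [w1, h1], [0, h1]]).reshape(-1, 1, 2)
projected = cv2.perspectiveTransform(corners, H)

outline = cv2.polylines(cv2.cvtColor(image2, cv2.COLOR_GRAY2BGR),
                        [np.int32(projected)], True, (0, 255, 0), 3)
cv2.imshow("Projected Outline", outline)
cv2.waitKey(0)
cv2.destroyAllWindows()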
6. Practical Examples
6.1 Real-Time Feature Matching with ORB
import cv2

# Initialize ORB detector
orb = cv2.ORB_create()

# Load the template image and compute its features once
template = cv2.imread("template.jpg", cv2.IMREAD_GRAYSCALE)
kp_template, des_template = orb.detectAndCompute(template, None)

# Brute-Force Matcher with Hamming distance (created once, reused every frame)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Open webcam
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Convert to grayscale
    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect and compute features in the current frame
    kp_frame, des_frame = orb.detectAndCompute(gray_frame, None)
    if des_frame is None:
        # No features found in this frame; skip matching
        continue

    # Match features and sort by distance
    matches = bf.match(des_template, des_frame)
    matches = sorted(matches, key=lambda x: x.distance)

    # Draw the 10 best matches
    result = cv2.drawMatches(template, kp_template, frame, kp_frame, matches[:10], None,
                             flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

    # Display result
    cv2.imshow("Real-Time Feature Matching", result)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
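If the goal is to decide whether the template is actually present rather than just to visualize matches, a rough heuristic is to count sufficiently close matches inside the loop. The fragment below is a sketch to drop into the loop after matching; the distance cutoff and minimum count are illustrative values, not tuned thresholds.

# Rough presence check: require enough close matches before declaring a detection
MIN_MATCHES = 15
good = [m for m in matches if m.distance < 50]  # illustrative cutoff for ORB's Hamming distances
if len(good) >= MIN_MATCHES:
    cv2.putText(frame, "Template detected", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)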
6.2 Draw Keypoints on Images
import cv2

# Load an image
image = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)

# Initialize ORB detector
orb = cv2.ORB_create()

# Detect keypoints
keypoints = orb.detect(image, None)

# Draw keypoints in green
result = cv2.drawKeypoints(image, keypoints, None, color=(0, 255, 0), flags=0)

# Display result
cv2.imshow("Keypoints", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
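For a more informative visualization, the rich-keypoints flag also draws each keypoint's size and orientation; this sketch reuses image and keypoints from the example above.

# DRAW_RICH_KEYPOINTS shows keypoint scale and orientation, not just location
rich = cv2.drawKeypoints(image, keypoints, None, color=(0, 255, 0),
                         flags=cv2.DrawMatchesFlags_DRAW_RICH_KEYPOINTS)
cv2.imshow("Rich Keypoints", rich)
cv2.waitKey(0)
cv2.destroyAllWindows()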
7. Summary
Key Functions
- cv2.BFMatcher: Brute-force matcher for descriptor matching.
- cv2.FlannBasedMatcher: Fast matcher for large datasets.
- cv2.findHomography: Compute homography matrix for image alignment.
- cv2.drawMatches: Visualize matches between two images.
Best Practices
- Use Lowe’s ratio test to filter good matches.
- Select feature detectors (e.g., ORB, SIFT) based on task requirements.
- Use homography for perspective transformations after matching.