
Other Algorithms

Abhishek Rathore edited this page Jul 9, 2016 · 1 revision

Matrioska Tracker

  • It tracks the object by matching keypoints between the object model and subsequent frames, and uses a learning and pruning approach to improve tracking.
  • First, it detects keypoints and computes descriptors on the object using two or three keypoint detection methods.
  • It then does the same in the next frame.
  • It learns new keypoints over time.
  • It runs a k-nearest-neighbor search between the two sets of keypoints.
  • It applies outlier filters to discard false matches.
  • Finally, it estimates the scale and draws the bounding box.

A detailed description can be found in this research paper.

The Python code, which is still a work in progress, can be found here.

Tracking by SIFT/SURF feature matching and homography

  • It also tracks the object by matching keypoints between the object and each frame.
  • First, it detects keypoints and computes descriptors on the object using the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) methods.
  • It combines the keypoints and descriptors from both methods.
  • It does the same in the next frame.
  • It then performs a fast approximate nearest-neighbor search to match keypoints between the frame and the object.
  • It applies outlier filtering to the matches.
  • It estimates a homography from the remaining matches.
  • Finally, it forms the bounding rectangle by projecting the object corners through the homography.

Working code in Python and OpenCV can be found here.

Tracking-Learning-Detection (TLD)

  • It tracks the object with the Lucas-Kanade optical flow algorithm.
  • It detects the object in each frame with a sliding-window cascaded classifier, run independently of the tracker.
  • It learns from image patches over time to improve both tracking and detection.

A detailed explanation can be found in this research paper.

The C++/OpenCV source code and a Windows executable can be found here.

Consensus-based Matching and Tracking of Keypoints for Object Tracking

  • It tracks the object using feature detection and matching.
  • First, it detects features and computes descriptors inside the region of interest.
  • It then detects features and computes descriptors in the next frame.
  • It matches the two sets of features.
  • It also tracks the previously matched keypoints with Lucas-Kanade optical flow.
  • It then estimates the scale and rotation of the object.
  • Each good keypoint votes for the object center.
  • Finally, it creates the bounding box from the consensus of the votes.
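The scale/rotation estimate and the center voting can be sketched in plain NumPy. This follows the paper's idea of taking medians over pairwise keypoint geometry; the vote clustering and angle unwrapping of the full method are omitted, and coincident keypoints are assumed away.

```python
import numpy as np

def estimate_scale_rotation(pts0, pts1):
    """Estimate scale and rotation between two matched point sets from the
    medians of pairwise distance ratios and pairwise angle changes."""
    i, j = np.triu_indices(len(pts0), k=1)
    d0, d1 = pts0[i] - pts0[j], pts1[i] - pts1[j]
    scale = np.median(np.linalg.norm(d1, axis=1) / np.linalg.norm(d0, axis=1))
    rotation = np.median(np.arctan2(d1[:, 1], d1[:, 0])
                         - np.arctan2(d0[:, 1], d0[:, 0]))
    return scale, rotation

def vote_center(pts1, offsets, scale, rotation):
    """Each keypoint votes for the object center using its stored offset
    (center minus keypoint in the first frame), rotated and scaled; the
    consensus center is the median of the votes."""
    c, s = np.cos(rotation), np.sin(rotation)
    R = np.array([[c, -s], [s, c]])
    votes = pts1 + scale * offsets @ R.T
    return np.median(votes, axis=0)
```

The bounding box is then redrawn around the consensus center at the estimated scale and rotation.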

A detailed explanation can be found in this research paper.

The source code in Python and OpenCV can be found here.
