I solved it. It was a silly mistake.
I was setting OPENCV_EXTRA_MODULES_PATH as:
cmake -D OPENCV_EXTRA_MODULES_PATH=../opencv_contrib/modules ..
But my working directory was "opencv/build/" (I was in a directory "build" inside "opencv", and "opencv_contrib" was in the same directory as "opencv"), so the variable should have been:
cmake -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules ..
You can also use the OpenCV documentation to get a list of the OpenCV features:
To know whether a feature is available only for keypoint detection, only for descriptor extraction, or for both, I read the corresponding paper linked in the documentation. It also gives a brief description of the feature (for example, whether it is a binary descriptor, its main advantages, etc.)
Another solution is to test each feature (a rough sketch follows this list):
- if the call to detect() succeeds (no exception thrown) ==> feature detection
- if the call to compute() succeeds ==> descriptor extraction
- if the call to detectAndCompute() succeeds ==> both
- or look directly into the source code.
Maybe a more optimal solution exists...
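A rough sketch of that trial-and-error approach (assuming OpenCV 3.x; probeFeature is a hypothetical helper, not part of OpenCV):

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical helper: report which operations a given cv::Feature2D supports
// by catching the cv::Exception thrown for unimplemented ones.
static void probeFeature(const cv::Ptr<cv::Feature2D> &f, const cv::Mat &img, const std::string &name)
{
    std::vector<cv::KeyPoint> kpts;
    cv::Mat desc;
    try { f->detect(img, kpts); std::cout << name << ": detect() ok" << std::endl; }
    catch (const cv::Exception &) { std::cout << name << ": detect() not supported" << std::endl; }
    try { f->compute(img, kpts, desc); std::cout << name << ": compute() ok" << std::endl; }
    catch (const cv::Exception &) { std::cout << name << ": compute() not supported" << std::endl; }
    try { f->detectAndCompute(img, cv::noArray(), kpts, desc); std::cout << name << ": detectAndCompute() ok" << std::endl; }
    catch (const cv::Exception &) { std::cout << name << ": detectAndCompute() not supported" << std::endl; }
}

// For example:
// probeFeature(cv::MSER::create(), matImg, "MSER");  // detector only
// probeFeature(cv::ORB::create(), matImg, "ORB");    // detector + descriptor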
Anyway, to my knowledge (feel free to correct me if I am wrong):
- BRISK: detector + descriptor
- ORB: detector + descriptor
- MSER: detector
- FAST: detector
- AGAST: detector
- GFTT: detector
- SimpleBlobDetector: detector
- KAZE: detector + descriptor
- AKAZE: detector + descriptor
- FREAK: descriptor
- StarDetector: detector
- BriefDescriptorExtractor: descriptor
- LUCID: descriptor
- LATCH: descriptor
- DAISY: descriptor
- MSDDetector: detector
- SIFT: detector + descriptor
- SURF: detector + descriptor
Also, with OpenCV 3.1, the code is:
// matImg is the input image (cv::Mat)
cv::Ptr<cv::Feature2D> kaze = cv::KAZE::create();
std::vector<cv::KeyPoint> kpts;
cv::Mat descriptors;
kaze->detect(matImg, kpts);
kaze->compute(matImg, kpts, descriptors);
kaze->detectAndCompute(matImg, cv::noArray(), kpts, descriptors);
cv::Ptr<cv::Feature2D> daisy = cv::xfeatures2d::DAISY::create(); //Contrib
To know which descriptor type and norm type to use:
cv::Ptr<cv::Feature2D> akaze = cv::AKAZE::create();
std::cout << "AKAZE: " << akaze->descriptorType() << " ; CV_8U=" << CV_8U << std::endl;
std::cout << "AKAZE: " << akaze->defaultNorm() << " ; NORM_HAMMING=" << cv::NORM_HAMMING << std::endl;
Finally, for the reason behind this API change, see:
No more features2d::create?
Best Solution
As Floyd mentioned, to use TrackerTLD you need to download the OpenCV contrib repo. The instructions are in the link, so there is no need to repeat them here.
However, in my opinion, using TrackerTLD from the OpenCV contrib repo is a bad option - I tested it (about a week or two ago) and it was terribly slow. If you are thinking about real-time image processing, consider using another implementation of TLD or some other tracker. Right now I am using this implementation and it is working really well. Note that tracking an object is quite a time-consuming task, so to perform real-time tracking I have to downscale every frame from 640x480 to 320x240 (it would probably work well, and definitely faster, at an even lower resolution). On the web page of the author of this implementation you may find some information about the TLD algorithm (and its implementation), as well as another tracker by the same author - CMT (Consensus-based Matching and Tracking of Keypoints). Unfortunately I have not tested it yet, so I cannot say anything about it.
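For reference, a rough sketch of how the contrib tracker is typically driven, including the downscaling mentioned above (this assumes OpenCV 3.x with the tracking contrib module built; the creation call differs between versions - cv::Tracker::create("TLD") on 3.1 vs cv::TrackerTLD::create() on later 3.x - and the initial bounding box here is just a placeholder):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/tracking.hpp> // contrib module

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    cv::Ptr<cv::Tracker> tracker = cv::TrackerTLD::create(); // or cv::Tracker::create("TLD") on OpenCV 3.1

    cv::Mat frame, small;
    cap >> frame;
    cv::resize(frame, small, cv::Size(320, 240)); // downscale 640x480 -> 320x240 to stay closer to real time
    cv::Rect2d bbox(100, 60, 60, 60);             // placeholder initial object box
    tracker->init(small, bbox);

    while (cap.read(frame)) {
        cv::resize(frame, small, cv::Size(320, 240));
        if (tracker->update(small, bbox))         // update() returns false when the target is lost
            cv::rectangle(small, bbox, cv::Scalar(0, 255, 0), 2);
        cv::imshow("TLD", small);
        if (cv::waitKey(1) == 27) break;          // Esc to quit
    }
    return 0;
}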