ORB

class skimage.feature.ORB(downscale=1.2, n_scales=8, n_keypoints=500, fast_n=9, fast_threshold=0.08, harris_k=0.04)

Bases: skimage.feature.util.FeatureDetector, skimage.feature.util.DescriptorExtractor

Oriented FAST and rotated BRIEF feature detector and binary descriptor extractor.
Parameters:

n_keypoints : int, optional
    Number of keypoints to be returned. The function will return the best n_keypoints according to the Harris corner response if more than n_keypoints are detected. If not, then all the detected keypoints are returned.
fast_n : int, optional
    The n parameter in skimage.feature.corner_fast. Minimum number of consecutive pixels out of the 16 pixels on the circle that should all be either brighter or darker with respect to the test pixel. A point c on the circle is darker with respect to the test pixel p if Ic < Ip - threshold and brighter if Ic > Ip + threshold. This is also the n in the FAST-n corner detector.
fast_threshold : float, optional
    The threshold parameter in skimage.feature.corner_fast. Threshold used to decide whether the pixels on the circle are brighter, darker or similar with respect to the test pixel. Decrease the threshold when more corners are desired, and vice versa.
harris_k : float, optional
    The k parameter in skimage.feature.corner_harris. Sensitivity factor to separate corners from edges, typically in the range [0, 0.2]. Small values of k result in detection of sharp corners.
downscale : float, optional
    Downscale factor for the image pyramid. The default value of 1.2 is chosen so that there are more dense scales, which enables robust scale invariance for subsequent feature description (see the construction sketch after this parameter list).
n_scales : int, optional
    Maximum number of scales from the bottom of the image pyramid from which to extract features.
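As a rough illustration of how these parameters are typically adjusted, the sketch below constructs detectors with non-default settings; the specific values are arbitrary demonstration choices, not recommendations.

>>> from skimage.feature import ORB
>>> # Return more keypoints and use a more permissive FAST threshold,
>>> # so that weaker corners are also kept.
>>> orb_dense = ORB(n_keypoints=1000, fast_threshold=0.05)
>>> # Fewer, more widely spaced pyramid levels: faster, but less robust
>>> # to large scale changes.
>>> orb_coarse = ORB(n_scales=4, downscale=1.6)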
References

[R160] Ethan Rublee, Vincent Rabaud, Kurt Konolige and Gary Bradski, "ORB: An efficient alternative to SIFT and SURF", ICCV 2011. http://www.vision.cs.chubu.ac.jp/CV-R/pdf/Rublee_iccv2011.pdf

Examples
>>> import numpy as np
>>> from skimage.feature import ORB, match_descriptors
>>> img1 = np.zeros((100, 100))
>>> img2 = np.zeros_like(img1)
>>> np.random.seed(1)
>>> square = np.random.rand(20, 20)
>>> img1[40:60, 40:60] = square
>>> img2[53:73, 53:73] = square
>>> detector_extractor1 = ORB(n_keypoints=5)
>>> detector_extractor2 = ORB(n_keypoints=5)
>>> detector_extractor1.detect_and_extract(img1)
>>> detector_extractor2.detect_and_extract(img2)
>>> matches = match_descriptors(detector_extractor1.descriptors,
...                             detector_extractor2.descriptors)
>>> matches
array([[0, 0],
       [1, 1],
       [2, 2],
       [3, 3],
       [4, 4]])
>>> detector_extractor1.keypoints[matches[:, 0]]
array([[ 42.,  40.],
       [ 47.,  58.],
       [ 44.,  40.],
       [ 59.,  42.],
       [ 45.,  44.]])
>>> detector_extractor2.keypoints[matches[:, 1]]
array([[ 55.,  53.],
       [ 60.,  71.],
       [ 57.,  53.],
       [ 72.,  55.],
       [ 58.,  57.]])
Attributes

keypoints : (N, 2) array
    Keypoint coordinates as (row, col).
scales : (N, ) array
    Corresponding scales.
orientations : (N, ) array
    Corresponding orientations in radians.
responses : (N, ) array
    Corresponding Harris corner responses.
descriptors : (Q, descriptor_size) array of dtype bool
    2D array of binary descriptors of size descriptor_size for the Q keypoints remaining after filtering out border keypoints. The value at index (i, j) is either True or False, representing the outcome of the intensity comparison for the i-th keypoint on the j-th decision pixel-pair, with Q == np.sum(mask).
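Since descriptors is a boolean array, the Hamming distance between two descriptors is simply the number of positions at which they differ; this is the metric that match_descriptors applies to binary descriptors. A minimal sketch, reusing detector_extractor1 and detector_extractor2 from the example above:

>>> d1 = detector_extractor1.descriptors[0]
>>> d2 = detector_extractor2.descriptors[0]
>>> hamming = np.count_nonzero(d1 != d2)  # number of differing bits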
__init__(downscale=1.2, n_scales=8, n_keypoints=500, fast_n=9, fast_threshold=0.08, harris_k=0.04)
detect(image)
Detect oriented FAST keypoints along with the corresponding scale.
Parameters:

image : 2D array
    Input image.
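A minimal sketch of calling detect on its own, reusing img1 from the example above; it populates the keypoints, scales, orientations and responses attributes, while descriptors stays unset until extraction:

>>> detector = ORB(n_keypoints=5)
>>> detector.detect(img1)
>>> keypoints, scales = detector.keypoints, detector.scales  # filled in by detect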
detect_and_extract(image)
Detect oriented FAST keypoints and extract rBRIEF descriptors.
Note that this is faster than first calling detect and then extract.

Parameters:

image : 2D array
    Input image.
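The sketch below (assuming skimage.data and skimage.transform are available for the test image) runs detect_and_extract on an image and on a rotated copy, then matches the descriptors with cross-checking; because the rBRIEF descriptors are steered by the keypoint orientations, many matches typically survive the rotation:

>>> from skimage import data, transform
>>> from skimage.feature import ORB, match_descriptors
>>> img = data.camera()
>>> img_rot = transform.rotate(img, 30)
>>> orb1 = ORB(n_keypoints=100)
>>> orb2 = ORB(n_keypoints=100)
>>> orb1.detect_and_extract(img)
>>> orb2.detect_and_extract(img_rot)
>>> matches = match_descriptors(orb1.descriptors, orb2.descriptors,
...                             cross_check=True)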
extract(image, keypoints, scales, orientations)
Extract rBRIEF binary descriptors for given keypoints in image.
Note that the keypoints must have been detected using the same downscale and n_scales parameters. Additionally, if you want both keypoints and descriptors, you should use the faster detect_and_extract.

Parameters:

image : 2D array
    Input image.
keypoints : (N, 2) array
    Keypoint coordinates as (row, col).
scales : (N, ) array
    Corresponding scales.
orientations : (N, ) array
    Corresponding orientations in radians.
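A minimal sketch of the two-step detect/extract workflow, using a single ORB instance so that downscale and n_scales are guaranteed to match between the two calls (img1 as in the example above):

>>> orb = ORB(n_keypoints=5)
>>> orb.detect(img1)
>>> orb.extract(img1, orb.keypoints, orb.scales, orb.orientations)
>>> descriptors = orb.descriptors  # boolean rBRIEF descriptors, one row per kept keypoint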