# Alignment and Object Instance Recognition

1. Alignment and Object Instance Recognition
Computer Vision, Jia-Bin Huang, Virginia Tech
Many slides from S. Lazebnik and D. Hoiem

2. Administrative Stuff
- HW 2 due 11:59 PM Oct 3rd; please start early
- Anonymous feedback:
  - Lectures: going too fast; show more examples/code to demonstrate how the algorithms work
  - HW assignments: list functions that are not allowed to be used
  - Piazza: encourage more students to participate (e.g., answer questions); group the questions into threads

3. Today’s class
- Review fitting
- Alignment
- Object instance recognition
- Example of alignment-based category recognition

4. Previous class
- Global optimization / search for parameters
  - Least squares fit
  - Robust least squares
  - Iterative closest point (ICP)
- Hypothesize and test
  - Generalized Hough transform
  - RANSAC

5. Least squares line fitting
Data: (x_1, y_1), ..., (x_n, y_n). Line equation: y_i = m*x_i + b.
Find (m, b) to minimize E = sum_i (y_i - m*x_i - b)^2.
In matrix form: stack A = [x_i 1] and p = [m; b], then minimize ||A*p - y||^2. MATLAB: p = A \ y;
Modified from S. Lazebnik

6. Least squares line fitting

    function [m, b] = lsqfit(x, y)
    % y = mx + b
    % find line that best predicts y given x
    % minimize sum_i (m*x_i + b - y_i).^2
    A = [x(:) ones(numel(x), 1)];
    b = y(:);
    p = A \ b;
    m = p(1); b = p(2);
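For readers who prefer Python, the same fit can be sketched with NumPy, where `np.linalg.lstsq` plays the role of MATLAB's backslash (the function name and sample data here are illustrative, not part of the lecture code):

```python
import numpy as np

def lsqfit(x, y):
    """Least-squares fit of y = m*x + b (NumPy analogue of the MATLAB slide)."""
    A = np.column_stack([x, np.ones_like(x)])  # design matrix [x 1]
    p, *_ = np.linalg.lstsq(A, y, rcond=None)  # solves A p ~ y, like A \ y
    return p[0], p[1]

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0          # noise-free line, so the fit should be exact
m, b = lsqfit(x, y)
```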

7. Total least squares
Find (a, b, c) to minimize the sum of squared perpendicular distances E = sum_i (a*x_i + b*y_i + c)^2 for the line ax + by + c = 0 with unit normal N = (a, b), subject to a^2 + b^2 = 1.
Solution: (a, b) is the eigenvector corresponding to the smallest eigenvalue of A'A, where A stacks the mean-centered points.
See details on the Rayleigh quotient: http://en.wikipedia.org/wiki/Rayleigh_quotient
Slide modified from S. Lazebnik
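A minimal NumPy sketch of this eigenvector solution (the function name and test data are illustrative assumptions):

```python
import numpy as np

def total_lsqfit(x, y):
    """Total least squares line a*x + b*y + c = 0 with a^2 + b^2 = 1.
    The normal (a, b) is the eigenvector of A'A for the smallest
    eigenvalue, where A holds the mean-centered points."""
    xm, ym = x.mean(), y.mean()
    A = np.column_stack([x - xm, y - ym])
    # eigh returns eigenvalues in ascending order for symmetric matrices
    w, V = np.linalg.eigh(A.T @ A)
    a, b = V[:, 0]                 # eigenvector of the smallest eigenvalue
    c = -(a * xm + b * ym)         # line passes through the centroid
    return a, b, c

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                  # collinear points: residuals should vanish
a, b, c = total_lsqfit(x, y)
```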

9. Robust Estimator
1. Initialize: e.g., choose the parameters by a least squares fit, and set the scale sigma = 1.5 * median(error)
2. Choose the parameters to minimize the robust cost, e.g., by numerical optimization
3. Compute a new sigma = 1.5 * median(error)
4. Repeat (2) and (3) until convergence

10. Robust least squares fitting

    function [m, b] = robust_lsqfit(x, y)
    % iterative robust fit for y = mx + b
    % find line that best predicts y given x,
    % minimizing a robust cost instead of the plain squared error
    [m, b] = lsqfit(x, y);
    p = [m; b];
    err = abs(y - p(1)*x - p(2));
    sigma = median(err)*1.5;
    for k = 1:7
        % geterr evaluates the robust cost at p given the scale sigma
        p = fminunc(@(p) geterr(p, x, y, sigma), p);
        err = abs(y - p(1)*x - p(2));
        sigma = median(err)*1.5;
    end
    m = p(1); b = p(2);
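The `geterr` helper above is not shown on the slide. As a self-contained alternative, here is an iteratively reweighted least squares (IRLS) sketch in NumPy with Cauchy-style weights, one common way to realize a robust estimator (the weight function and the sample data are my assumptions, not the lecture's exact cost):

```python
import numpy as np

def robust_lsqfit(x, y, iters=7):
    """IRLS sketch for y = m*x + b with Cauchy-style weights.
    sigma is re-estimated as 1.5 * median(|residual|) each pass,
    mirroring the slide; the weight function is an assumption."""
    A = np.column_stack([x, np.ones_like(x)])
    p, *_ = np.linalg.lstsq(A, y, rcond=None)      # least-squares init
    for _ in range(iters):
        r = y - A @ p                              # residuals
        sigma = 1.5 * np.median(np.abs(r)) + 1e-12
        w = 1.0 / (1.0 + (r / sigma) ** 2)         # downweight large residuals
        sw = np.sqrt(w)
        p, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return p[0], p[1]

x = np.arange(6.0)
y = 2.0 * x + 1.0
y[5] = 100.0                                       # one gross outlier
m, b = robust_lsqfit(x, y)                         # should hug the inlier line
```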

11. Hough transform
P.V.C. Hough, Machine Analysis of Bubble Chamber Pictures, Proc. Int. Conf. High Energy Accelerators and Instrumentation, 1959
Use a polar representation for the parameter (Hough) space.
Slide from S. Savarese

12. Hough transform line fitting

    function [m, b] = houghfit(x, y)
    % fit x*cos(theta) + y*sin(theta) = r by voting in (theta, r) space,
    % then convert back to y = mx + b
    thetas = (-pi+pi/50):(pi/100):pi;
    costhetas = cos(thetas);
    sinthetas = sin(thetas);
    minr = 0; stepr = 0.005; maxr = 1;
    % count hough votes
    counts = zeros(numel(thetas), (maxr-minr)/stepr + 1);
    for k = 1:numel(x)
        r = x(k)*costhetas + y(k)*sinthetas;
        % only count parameters within the range of r
        inrange = find(r >= minr & r <= maxr);
        rnum = round((r(inrange) - minr)/stepr) + 1;
        ind = sub2ind(size(counts), inrange, rnum);
        counts(ind) = counts(ind) + 1;
    end
    % smooth the bin counts
    counts = imfilter(counts, fspecial('gaussian', 5, 0.75));
    % get best theta, r
    [maxval, maxind] = max(counts(:));
    [thetaind, rind] = ind2sub(size(counts), maxind);
    theta = thetas(thetaind);
    r = minr + stepr*(rind - 1);
    % convert to slope-intercept
    b = r/sin(theta);
    m = -cos(theta)/sin(theta);
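A NumPy sketch of the same voting scheme (the accumulator smoothing step is omitted here for brevity; bin sizes mirror the MATLAB code, and the test data at the bottom are illustrative):

```python
import numpy as np

def houghfit(x, y):
    """Vote in (theta, r) space for x*cos(theta) + y*sin(theta) = r,
    pick the bin with the most votes, and convert to y = m*x + b."""
    thetas = np.arange(-np.pi + np.pi / 50, np.pi, np.pi / 100)
    minr, stepr, maxr = 0.0, 0.005, 1.0
    counts = np.zeros((len(thetas), int(round((maxr - minr) / stepr)) + 1))
    for xi, yi in zip(x, y):
        r = xi * np.cos(thetas) + yi * np.sin(thetas)
        inrange = (r >= minr) & (r <= maxr)   # keep votes inside the r range
        rbin = np.round((r[inrange] - minr) / stepr).astype(int)
        counts[np.where(inrange)[0], rbin] += 1
    ti, ri = np.unravel_index(np.argmax(counts), counts.shape)
    theta, r = thetas[ti], minr + stepr * ri
    # slope-intercept form (assumes sin(theta) is not close to zero)
    return -np.cos(theta) / np.sin(theta), r / np.sin(theta)

x = np.linspace(0.1, 0.5, 20)
y = 0.5 * x + 0.2                             # noise-free line
m, b = houghfit(x, y)                         # recovered up to bin quantization
```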

13. RANSAC
Algorithm:
1. Sample (randomly) the number of points required to fit the model (here, 2)
2. Solve for model parameters using the samples
3. Score by the fraction of inliers within a preset threshold of the model
4. Repeat 1-3 until the best model is found with high confidence

14. RANSAC line fitting

    function [m, b] = ransacfit(x, y)
    % y = mx + b
    N = 200; thresh = 0.03;
    bestcount = 0;
    for k = 1:N
        rp = randperm(numel(x));
        tx = x(rp(1:2)); ty = y(rp(1:2));
        m = (ty(2) - ty(1)) ./ (tx(2) - tx(1));
        b = ty(2) - m*tx(2);
        nin = sum(abs(y - m*x - b) < thresh);
        if nin > bestcount
            bestcount = nin;
            inliers = (abs(y - m*x - b) < thresh);
        end
    end
    % total least squares fitting on inliers
    [m, b] = total_lsqfit(x(inliers), y(inliers));
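The same loop in NumPy (a sketch: for brevity it refits the best inlier set with ordinary rather than total least squares, and the data and seed are illustrative):

```python
import numpy as np

def ransacfit(x, y, n_iters=200, thresh=0.03, seed=0):
    """RANSAC for y = m*x + b: sample 2 points, fit, score by inlier
    count, then refit on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_count, best_inliers = 0, None
    for _ in range(n_iters):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue                            # skip degenerate samples
        m = (y[j] - y[i]) / (x[j] - x[i])
        b = y[j] - m * x[j]
        inliers = np.abs(y - m * x - b) < thresh
        if inliers.sum() > best_count:
            best_count, best_inliers = inliers.sum(), inliers
    A = np.column_stack([x[best_inliers], np.ones(best_count)])
    p, *_ = np.linalg.lstsq(A, y[best_inliers], rcond=None)
    return p[0], p[1]

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0
y[::10] += 5.0                                  # plant five gross outliers
m, b = ransacfit(x, y)                          # outliers should be ignored
```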

15. Line fitting demo
demo_linefit(npts, outliers, noise, method)
- npts: number of points
- outliers: number of outliers
- noise: noise level
- method: lsq (least squares), tlsq (total least squares), rlsq (robust least squares), hough (Hough transform), ransac (RANSAC)

16. Which algorithm should I use?
- If we know which points belong to the line, how do we find the “optimal” line parameters? Least squares.
- What if there are outliers? Robust fitting, RANSAC.
- What if there are many lines? Voting methods: RANSAC, Hough transform.
Slide credit: S. Lazebnik

18. What if you want to align but have no prior matched pairs?
The Hough transform and RANSAC are not applicable.
Important applications:
- Medical imaging: match brain scans or contours
- Robotics: match point clouds

19. Iterative Closest Points (ICP) Algorithm
Goal: estimate the transform between two dense sets of points.
1. Initialize the transformation (e.g., compute difference in means and scale)
2. Assign each point in {Set 1} to its nearest neighbor in {Set 2}
3. Estimate transformation parameters, e.g., with least squares or robust least squares
4. Transform the points in {Set 1} using the estimated parameters
5. Repeat steps 2-4 until the change is very small
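For the translation-only case, the loop above can be sketched in NumPy as follows (a toy illustration with brute-force nearest neighbors and invented data, not the lecture's code):

```python
import numpy as np

def icp_translation(src, dst, n_iters=10):
    """Toy ICP that estimates a 2-D translation only: initialize from the
    difference in means, then alternate nearest-neighbor matching with a
    least-squares translation update."""
    t = dst.mean(axis=0) - src.mean(axis=0)        # step 1: initialize
    for _ in range(n_iters):
        moved = src + t
        # step 2: nearest neighbor in dst for each transformed src point
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        nn = d2.argmin(axis=1)
        # steps 3-4: least-squares translation given matches = mean difference
        t += (dst[nn] - moved).mean(axis=0)
    return t

rng = np.random.default_rng(0)
src = rng.random((30, 2))
true_t = np.array([0.3, -0.2])
dst = (src + true_t)[rng.permutation(30)]          # shuffled, translated copy
t = icp_translation(src, dst)
```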

20. Example: solving for translation
Given matched points in {A} and {B}, estimate the translation of the object.

21. Example: solving for translation, least squares solution
1. Write down the objective function: E = sum_i ((x_i^A + t_x - x_i^B)^2 + (y_i^A + t_y - y_i^B)^2)
2. Derived solution: compute the derivative, set it to zero, and solve for (t_x, t_y)
3. Computational solution: write in the form Ax = b and solve using the pseudo-inverse or eigenvalue decomposition
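Setting the derivative of that objective to zero shows the optimal translation is just the mean of the matched differences; a tiny NumPy check with made-up pairs:

```python
import numpy as np

# Minimizing sum_i ||A_i + t - B_i||^2 over the translation t gives the
# closed form t = mean(B_i - A_i); the matched pairs here are invented.
A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
B = A + np.array([2.0, 3.0])      # each point shifted by the same translation
t = (B - A).mean(axis=0)          # recovers that translation
```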

22. Example: solving for translation, RANSAC solution
Problem: outliers.
1. Sample a set of matching points (one pair)
2. Solve for the transformation parameters
3. Score the parameters by the number of inliers
4. Repeat steps 1-3 N times

23. Example: solving for translation, Hough transform solution
Problem: outliers, multiple objects, and/or many-to-one matches.
1. Initialize a grid of parameter values
2. Each matched pair casts a vote for consistent values
3. Find the parameters with the most votes
4. Solve using least squares with the inliers

24. Example: solving for translation, ICP solution
Problem: no initial guesses for correspondence.
1. Find nearest neighbors for each point
2. Compute the transform using the matches
3. Move the points using the transform
4. Repeat steps 1-3 until convergence

25. Example: aligning boundaries
1. Extract edge pixels p_1, ..., p_n and q_1, ..., q_m
2. Compute an initial transformation (e.g., compute translation and scaling by center of mass and variance within each image)
3. Get nearest neighbors: for each point p_i, find the corresponding closest q_j
4. Compute transformation T based on the matches
5. Warp the points p according to T
6. Repeat 3-5 until convergence

26. Algorithm Summary
- Least squares fit: closed-form solution; robust to noise; not robust to outliers
- Robust least squares: improves robustness to outliers; requires iterative optimization
- Hough transform: robust to noise and outliers; can fit multiple models; only works for a few parameters (1-4 typically)
- RANSAC: robust to noise and outliers; works with a moderate number of parameters (e.g., 1-8)
- Iterative closest point (ICP): for local alignment only; does not require initial correspondences

27. Alignment
Alignment: find parameters of the model that map one set of points to another.
Typically we want to solve for a global transformation that accounts for most true correspondences.
Difficulties:
- Noise (typically 1-3 pixels)
- Outliers (often 30-50%)
- Many-to-one matches or multiple objects

28. Parametric (global) warping
A transformation T is a coordinate-changing machine: p' = T(p), taking p = (x, y) to p' = (x', y').
What does it mean that T is global?
- T is the same for any point p
- T can be described by just a few numbers (parameters)
For linear transformations, we can represent T as a matrix: p' = T p.
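A concrete instance of p' = T p, with an illustrative choice of T (a 90-degree counter-clockwise rotation):

```python
import numpy as np

# A linear transformation represented as a 2x2 matrix: p' = T p.
theta = np.pi / 2                     # rotate by 90 degrees
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
p = np.array([1.0, 0.0])
p_prime = T @ p                       # apply the same T to any point p
```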

29. Common transformations
Original vs. transformed: translation, rotation, aspect, affine, perspective.
Slide credit (next few slides): A. Efros and/or S. Seitz