Tutorial: Hypothesis Verification for 3D Object Recognition

This tutorial explains how to perform 3D object recognition in cluttered and heavily occluded 3D scenes by verifying model hypotheses. After descriptor matching, one of the Correspondence Grouping algorithms available in PCL clusters the set of point-to-point correspondences, determining instances of object hypotheses in the scene. The Global Hypothesis Verification algorithm is then applied to these hypotheses in order to reduce the number of false positives.

Suggested readings and prerequisites

This tutorial is the follow-up of a previous tutorial on object recognition: 3D Object Recognition based on Correspondence Grouping. We suggest reading and understanding that tutorial first.

More details on the Global Hypothesis Verification method can be found here: A. Aldoma, F. Tombari, L. Di Stefano, M. Vincze, A global hypothesis verification method for 3D object recognition, ECCV 2012

For more information on 3D Object Recognition in Clutter and on the standard feature-based recognition pipeline, we suggest this tutorial paper: A. Aldoma, Z.C. Marton, F. Tombari, W. Wohlkinger, C. Potthast, B. Zeisl, R.B. Rusu, S. Gedikli, M. Vincze, “Point Cloud Library: Three-Dimensional Object Recognition and 6 DOF Pose Estimation”, IEEE Robotics and Automation Magazine, 2012

The Code

Before starting, download the example PCD clouds used in this tutorial (milk.pcd and milk_cartoon_all_small_clorox.pcd) from the GitHub folder Correspondence Grouping, and place the files in the source folder.

Then copy and paste the following code into your editor and save it as global_hypothesis_verification.cpp.

/*
 * Software License Agreement (BSD License)
 *
 *  Point Cloud Library (PCL) - www.pointclouds.org
 *  Copyright (c) 2014-, Open Perception, Inc.
 *
 *  All rights reserved.
 *
 *  Redistribution and use in source and binary forms, with or without
 *  modification, are permitted provided that the following conditions
 *  are met:
 *
 *   * Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above
 *     copyright notice, this list of conditions and the following
 *     disclaimer in the documentation and/or other materials provided
 *     with the distribution.
 *   * Neither the name of the copyright holder(s) nor the names of its
 *     contributors may be used to endorse or promote products derived
 *     from this software without specific prior written permission.
 *
 *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
 *  FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
 *  COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
 *  INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
 *  BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 *  LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 *  CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 *  LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
 *  ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 *  POSSIBILITY OF SUCH DAMAGE.
 *
 */

#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/correspondence.h>
#include <pcl/features/normal_3d_omp.h>
#include <pcl/features/shot_omp.h>
#include <pcl/features/board.h>
#include <pcl/filters/uniform_sampling.h>
#include <pcl/recognition/cg/hough_3d.h>
#include <pcl/recognition/cg/geometric_consistency.h>
#include <pcl/recognition/hv/hv_go.h>
#include <pcl/registration/icp.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/kdtree/impl/kdtree_flann.hpp>
#include <pcl/common/transforms.h>
#include <pcl/console/parse.h>

typedef pcl::PointXYZRGBA PointType;
typedef pcl::Normal NormalType;
typedef pcl::ReferenceFrame RFType;
typedef pcl::SHOT352 DescriptorType;

struct CloudStyle
{
    double r;
    double g;
    double b;
    double size;

    CloudStyle (double r,
                double g,
                double b,
                double size) :
        r (r),
        g (g),
        b (b),
        size (size)
    {
    }
};

CloudStyle style_white (255.0, 255.0, 255.0, 4.0);
CloudStyle style_red (255.0, 0.0, 0.0, 3.0);
CloudStyle style_green (0.0, 255.0, 0.0, 5.0);
CloudStyle style_cyan (93.0, 200.0, 217.0, 4.0);
CloudStyle style_violet (255.0, 0.0, 255.0, 8.0);

std::string model_filename_;
std::string scene_filename_;

//Algorithm params
bool show_keypoints_ (false);
bool use_hough_ (true);
float model_ss_ (0.02f);
float scene_ss_ (0.02f);
float rf_rad_ (0.015f);
float descr_rad_ (0.02f);
float cg_size_ (0.01f);
float cg_thresh_ (5.0f);
int icp_max_iter_ (5);
float icp_corr_distance_ (0.005f);
float hv_resolution_ (0.005f);
float hv_occupancy_grid_resolution_ (0.01f);
float hv_clutter_reg_ (5.0f);
float hv_inlier_th_ (0.005f);
float hv_occlusion_th_ (0.01f);
float hv_rad_clutter_ (0.03f);
float hv_regularizer_ (3.0f);
float hv_rad_normals_ (0.05f);
bool hv_detect_clutter_ (true);

/**
 * Prints out Help message
 * @param filename Runnable App Name
 */
void
showHelp (char *filename)
{
  std::cout << std::endl;
  std::cout << "***************************************************************************" << std::endl;
  std::cout << "*                                                                         *" << std::endl;
  std::cout << "*          Global Hypothesis Verification Tutorial - Usage Guide          *" << std::endl;
  std::cout << "*                                                                         *" << std::endl;
  std::cout << "***************************************************************************" << std::endl << std::endl;
  std::cout << "Usage: " << filename << " model_filename.pcd scene_filename.pcd [Options]" << std::endl << std::endl;
  std::cout << "Options:" << std::endl;
  std::cout << "     -h:                          Show this help." << std::endl;
  std::cout << "     -k:                          Show keypoints." << std::endl;
  std::cout << "     --algorithm (Hough|GC):      Clustering algorithm used (default Hough)." << std::endl;
  std::cout << "     --model_ss val:              Model uniform sampling radius (default " << model_ss_ << ")" << std::endl;
  std::cout << "     --scene_ss val:              Scene uniform sampling radius (default " << scene_ss_ << ")" << std::endl;
  std::cout << "     --rf_rad val:                Reference frame radius (default " << rf_rad_ << ")" << std::endl;
  std::cout << "     --descr_rad val:             Descriptor radius (default " << descr_rad_ << ")" << std::endl;
  std::cout << "     --cg_size val:               Cluster size (default " << cg_size_ << ")" << std::endl;
  std::cout << "     --cg_thresh val:             Clustering threshold (default " << cg_thresh_ << ")" << std::endl << std::endl;
  std::cout << "     --icp_max_iter val:          ICP max iterations number (default " << icp_max_iter_ << ")" << std::endl;
  std::cout << "     --icp_corr_distance val:     ICP correspondence distance (default " << icp_corr_distance_ << ")" << std::endl << std::endl;
  std::cout << "     --hv_clutter_reg val:        Clutter Regularizer (default " << hv_clutter_reg_ << ")" << std::endl;
  std::cout << "     --hv_inlier_th val:          Inlier threshold (default " << hv_inlier_th_ << ")" << std::endl;
  std::cout << "     --hv_occlusion_th val:       Occlusion threshold (default " << hv_occlusion_th_ << ")" << std::endl;
  std::cout << "     --hv_rad_clutter val:        Clutter radius (default " << hv_rad_clutter_ << ")" << std::endl;
  std::cout << "     --hv_regularizer val:        Regularizer value (default " << hv_regularizer_ << ")" << std::endl;
  std::cout << "     --hv_rad_normals val:        Normals radius (default " << hv_rad_normals_ << ")" << std::endl;
  std::cout << "     --hv_detect_clutter val:     TRUE if clutter detect enabled (default " << hv_detect_clutter_ << ")" << std::endl << std::endl;
}

/**
 * Parses Command Line Arguments (Argc,Argv)
 * @param argc
 * @param argv
 */
void
parseCommandLine (int argc,
                  char *argv[])
{
  //Show help
  if (pcl::console::find_switch (argc, argv, "-h"))
  {
    showHelp (argv[0]);
    exit (0);
  }

  //Model & scene filenames
  std::vector<int> filenames;
  filenames = pcl::console::parse_file_extension_argument (argc, argv, ".pcd");
  if (filenames.size () != 2)
  {
    std::cout << "Filenames missing.\n";
    showHelp (argv[0]);
    exit (-1);
  }

  model_filename_ = argv[filenames[0]];
  scene_filename_ = argv[filenames[1]];

  //Program behavior
  if (pcl::console::find_switch (argc, argv, "-k"))
  {
    show_keypoints_ = true;
  }

  std::string used_algorithm;
  if (pcl::console::parse_argument (argc, argv, "--algorithm", used_algorithm) != -1)
  {
    if (used_algorithm.compare ("Hough") == 0)
    {
      use_hough_ = true;
    }
    else if (used_algorithm.compare ("GC") == 0)
    {
      use_hough_ = false;
    }
    else
    {
      std::cout << "Wrong algorithm name.\n";
      showHelp (argv[0]);
      exit (-1);
    }
  }

  //General parameters
  pcl::console::parse_argument (argc, argv, "--model_ss", model_ss_);
  pcl::console::parse_argument (argc, argv, "--scene_ss", scene_ss_);
  pcl::console::parse_argument (argc, argv, "--rf_rad", rf_rad_);
  pcl::console::parse_argument (argc, argv, "--descr_rad", descr_rad_);
  pcl::console::parse_argument (argc, argv, "--cg_size", cg_size_);
  pcl::console::parse_argument (argc, argv, "--cg_thresh", cg_thresh_);
  pcl::console::parse_argument (argc, argv, "--icp_max_iter", icp_max_iter_);
  pcl::console::parse_argument (argc, argv, "--icp_corr_distance", icp_corr_distance_);
  pcl::console::parse_argument (argc, argv, "--hv_clutter_reg", hv_clutter_reg_);
  pcl::console::parse_argument (argc, argv, "--hv_inlier_th", hv_inlier_th_);
  pcl::console::parse_argument (argc, argv, "--hv_occlusion_th", hv_occlusion_th_);
  pcl::console::parse_argument (argc, argv, "--hv_rad_clutter", hv_rad_clutter_);
  pcl::console::parse_argument (argc, argv, "--hv_regularizer", hv_regularizer_);
  pcl::console::parse_argument (argc, argv, "--hv_rad_normals", hv_rad_normals_);
  pcl::console::parse_argument (argc, argv, "--hv_detect_clutter", hv_detect_clutter_);
}

int
main (int argc,
      char *argv[])
{
  parseCommandLine (argc, argv);

  pcl::PointCloud<PointType>::Ptr model (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr model_keypoints (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr scene (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr scene_keypoints (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<NormalType>::Ptr model_normals (new pcl::PointCloud<NormalType> ());
  pcl::PointCloud<NormalType>::Ptr scene_normals (new pcl::PointCloud<NormalType> ());
  pcl::PointCloud<DescriptorType>::Ptr model_descriptors (new pcl::PointCloud<DescriptorType> ());
  pcl::PointCloud<DescriptorType>::Ptr scene_descriptors (new pcl::PointCloud<DescriptorType> ());

  /**
   * Load Clouds
   */
  if (pcl::io::loadPCDFile (model_filename_, *model) < 0)
  {
    std::cout << "Error loading model cloud." << std::endl;
    showHelp (argv[0]);
    return (-1);
  }
  if (pcl::io::loadPCDFile (scene_filename_, *scene) < 0)
  {
    std::cout << "Error loading scene cloud." << std::endl;
    showHelp (argv[0]);
    return (-1);
  }

  /**
   * Compute Normals
   */
  pcl::NormalEstimationOMP<PointType, NormalType> norm_est;
  norm_est.setKSearch (10);
  norm_est.setInputCloud (model);
  norm_est.compute (*model_normals);

  norm_est.setInputCloud (scene);
  norm_est.compute (*scene_normals);

  /**
   *  Downsample Clouds to Extract keypoints
   */
  pcl::UniformSampling<PointType> uniform_sampling;
  uniform_sampling.setInputCloud (model);
  uniform_sampling.setRadiusSearch (model_ss_);
  uniform_sampling.filter (*model_keypoints);
  std::cout << "Model total points: " << model->size () << "; Selected Keypoints: " << model_keypoints->size () << std::endl;

  uniform_sampling.setInputCloud (scene);
  uniform_sampling.setRadiusSearch (scene_ss_);
  uniform_sampling.filter (*scene_keypoints);
  std::cout << "Scene total points: " << scene->size () << "; Selected Keypoints: " << scene_keypoints->size () << std::endl;

  /**
   *  Compute Descriptor for keypoints
   */
  pcl::SHOTEstimationOMP<PointType, NormalType, DescriptorType> descr_est;
  descr_est.setRadiusSearch (descr_rad_);

  descr_est.setInputCloud (model_keypoints);
  descr_est.setInputNormals (model_normals);
  descr_est.setSearchSurface (model);
  descr_est.compute (*model_descriptors);

  descr_est.setInputCloud (scene_keypoints);
  descr_est.setInputNormals (scene_normals);
  descr_est.setSearchSurface (scene);
  descr_est.compute (*scene_descriptors);

  /**
   *  Find Model-Scene Correspondences with KdTree
   */
  pcl::CorrespondencesPtr model_scene_corrs (new pcl::Correspondences ());
  pcl::KdTreeFLANN<DescriptorType> match_search;
  match_search.setInputCloud (model_descriptors);
  std::vector<int> model_good_keypoints_indices;
  std::vector<int> scene_good_keypoints_indices;

  for (std::size_t i = 0; i < scene_descriptors->size (); ++i)
  {
    std::vector<int> neigh_indices (1);
    std::vector<float> neigh_sqr_dists (1);
    if (!std::isfinite (scene_descriptors->at (i).descriptor[0]))  //skipping NaNs
    {
      continue;
    }
    int found_neighs = match_search.nearestKSearch (scene_descriptors->at (i), 1, neigh_indices, neigh_sqr_dists);
    if (found_neighs == 1 && neigh_sqr_dists[0] < 0.25f)
    {
      pcl::Correspondence corr (neigh_indices[0], static_cast<int> (i), neigh_sqr_dists[0]);
      model_scene_corrs->push_back (corr);
      model_good_keypoints_indices.push_back (corr.index_query);
      scene_good_keypoints_indices.push_back (corr.index_match);
    }
  }
  pcl::PointCloud<PointType>::Ptr model_good_kp (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr scene_good_kp (new pcl::PointCloud<PointType> ());
  pcl::copyPointCloud (*model_keypoints, model_good_keypoints_indices, *model_good_kp);
  pcl::copyPointCloud (*scene_keypoints, scene_good_keypoints_indices, *scene_good_kp);

  std::cout << "Correspondences found: " << model_scene_corrs->size () << std::endl;

  /**
   *  Clustering
   */
  std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f> > rototranslations;
  std::vector < pcl::Correspondences > clustered_corrs;

  if (use_hough_)
  {
    pcl::PointCloud<RFType>::Ptr model_rf (new pcl::PointCloud<RFType> ());
    pcl::PointCloud<RFType>::Ptr scene_rf (new pcl::PointCloud<RFType> ());

    pcl::BOARDLocalReferenceFrameEstimation<PointType, NormalType, RFType> rf_est;
    rf_est.setFindHoles (true);
    rf_est.setRadiusSearch (rf_rad_);

    rf_est.setInputCloud (model_keypoints);
    rf_est.setInputNormals (model_normals);
    rf_est.setSearchSurface (model);
    rf_est.compute (*model_rf);

    rf_est.setInputCloud (scene_keypoints);
    rf_est.setInputNormals (scene_normals);
    rf_est.setSearchSurface (scene);
    rf_est.compute (*scene_rf);

    //  Clustering
    pcl::Hough3DGrouping<PointType, PointType, RFType, RFType> clusterer;
    clusterer.setHoughBinSize (cg_size_);
    clusterer.setHoughThreshold (cg_thresh_);
    clusterer.setUseInterpolation (true);
    clusterer.setUseDistanceWeight (false);

    clusterer.setInputCloud (model_keypoints);
    clusterer.setInputRf (model_rf);
    clusterer.setSceneCloud (scene_keypoints);
    clusterer.setSceneRf (scene_rf);
    clusterer.setModelSceneCorrespondences (model_scene_corrs);

    clusterer.recognize (rototranslations, clustered_corrs);
  }
  else
  {
    pcl::GeometricConsistencyGrouping<PointType, PointType> gc_clusterer;
    gc_clusterer.setGCSize (cg_size_);
    gc_clusterer.setGCThreshold (cg_thresh_);

    gc_clusterer.setInputCloud (model_keypoints);
    gc_clusterer.setSceneCloud (scene_keypoints);
    gc_clusterer.setModelSceneCorrespondences (model_scene_corrs);

    gc_clusterer.recognize (rototranslations, clustered_corrs);
  }

  /**
   * Stop if no instances
   */
  if (rototranslations.size () <= 0)
  {
    std::cout << "*** No instances found! ***" << std::endl;
    return (0);
  }
  else
  {
    std::cout << "Recognized Instances: " << rototranslations.size () << std::endl << std::endl;
  }

  /**
   * Generate a cloud for each instance found
   */
  std::vector<pcl::PointCloud<PointType>::ConstPtr> instances;

  for (std::size_t i = 0; i < rototranslations.size (); ++i)
  {
    pcl::PointCloud<PointType>::Ptr rotated_model (new pcl::PointCloud<PointType> ());
    pcl::transformPointCloud (*model, *rotated_model, rototranslations[i]);
    instances.push_back (rotated_model);
  }

  /**
   * ICP
   */
  std::vector<pcl::PointCloud<PointType>::ConstPtr> registered_instances;
  if (true)
  {
    std::cout << "--- ICP ---------" << std::endl;

    for (std::size_t i = 0; i < rototranslations.size (); ++i)
    {
      pcl::IterativeClosestPoint<PointType, PointType> icp;
      icp.setMaximumIterations (icp_max_iter_);
      icp.setMaxCorrespondenceDistance (icp_corr_distance_);
      icp.setInputTarget (scene);
      icp.setInputSource (instances[i]);
      pcl::PointCloud<PointType>::Ptr registered (new pcl::PointCloud<PointType>);
      icp.align (*registered);
      registered_instances.push_back (registered);
      std::cout << "Instance " << i << " ";
      if (icp.hasConverged ())
      {
        std::cout << "Aligned!" << std::endl;
      }
      else
      {
        std::cout << "Not Aligned!" << std::endl;
      }
    }

    std::cout << "-----------------" << std::endl << std::endl;
  }

  /**
   * Hypothesis Verification
   */
  std::cout << "--- Hypotheses Verification ---" << std::endl;
  std::vector<bool> hypotheses_mask;  // Mask Vector to identify positive hypotheses

  pcl::GlobalHypothesesVerification<PointType, PointType> GoHv;

  GoHv.setSceneCloud (scene);  // Scene Cloud
  GoHv.addModels (registered_instances, true);  //Models to verify
  GoHv.setResolution (hv_resolution_);
  GoHv.setResolutionOccupancyGrid (hv_occupancy_grid_resolution_);
  GoHv.setInlierThreshold (hv_inlier_th_);
  GoHv.setOcclusionThreshold (hv_occlusion_th_);
  GoHv.setRegularizer (hv_regularizer_);
  GoHv.setRadiusClutter (hv_rad_clutter_);
  GoHv.setClutterRegularizer (hv_clutter_reg_);
  GoHv.setDetectClutter (hv_detect_clutter_);
  GoHv.setRadiusNormals (hv_rad_normals_);

  GoHv.verify ();
  GoHv.getMask (hypotheses_mask);  // i-th element is TRUE if registered_instances[i] is a verified hypothesis

  for (std::size_t i = 0; i < hypotheses_mask.size (); i++)
  {
    if (hypotheses_mask[i])
    {
      std::cout << "Instance " << i << " is GOOD! <---" << std::endl;
    }
    else
    {
      std::cout << "Instance " << i << " is bad!" << std::endl;
    }
  }
  std::cout << "-------------------------------" << std::endl;

  /**
   *  Visualization
   */
  pcl::visualization::PCLVisualizer viewer ("Hypotheses Verification");
  viewer.addPointCloud (scene, "scene_cloud");

  pcl::PointCloud<PointType>::Ptr off_scene_model (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr off_scene_model_keypoints (new pcl::PointCloud<PointType> ());

  pcl::PointCloud<PointType>::Ptr off_model_good_kp (new pcl::PointCloud<PointType> ());
  pcl::transformPointCloud (*model, *off_scene_model, Eigen::Vector3f (-1, 0, 0), Eigen::Quaternionf (1, 0, 0, 0));
  pcl::transformPointCloud (*model_keypoints, *off_scene_model_keypoints, Eigen::Vector3f (-1, 0, 0), Eigen::Quaternionf (1, 0, 0, 0));
  pcl::transformPointCloud (*model_good_kp, *off_model_good_kp, Eigen::Vector3f (-1, 0, 0), Eigen::Quaternionf (1, 0, 0, 0));

  if (show_keypoints_)
  {
    CloudStyle modelStyle = style_white;
    pcl::visualization::PointCloudColorHandlerCustom<PointType> off_scene_model_color_handler (off_scene_model, modelStyle.r, modelStyle.g, modelStyle.b);
    viewer.addPointCloud (off_scene_model, off_scene_model_color_handler, "off_scene_model");
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, modelStyle.size, "off_scene_model");
  }

  if (show_keypoints_)
  {
    CloudStyle goodKeypointStyle = style_violet;
    pcl::visualization::PointCloudColorHandlerCustom<PointType> model_good_keypoints_color_handler (off_model_good_kp, goodKeypointStyle.r, goodKeypointStyle.g,
                                                                                                    goodKeypointStyle.b);
    viewer.addPointCloud (off_model_good_kp, model_good_keypoints_color_handler, "model_good_keypoints");
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, goodKeypointStyle.size, "model_good_keypoints");

    pcl::visualization::PointCloudColorHandlerCustom<PointType> scene_good_keypoints_color_handler (scene_good_kp, goodKeypointStyle.r, goodKeypointStyle.g,
                                                                                                    goodKeypointStyle.b);
    viewer.addPointCloud (scene_good_kp, scene_good_keypoints_color_handler, "scene_good_keypoints");
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, goodKeypointStyle.size, "scene_good_keypoints");
  }

  for (std::size_t i = 0; i < instances.size (); ++i)
  {
    std::stringstream ss_instance;
    ss_instance << "instance_" << i;

    CloudStyle clusterStyle = style_red;
    pcl::visualization::PointCloudColorHandlerCustom<PointType> instance_color_handler (instances[i], clusterStyle.r, clusterStyle.g, clusterStyle.b);
    viewer.addPointCloud (instances[i], instance_color_handler, ss_instance.str ());
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, clusterStyle.size, ss_instance.str ());

    CloudStyle registeredStyles = hypotheses_mask[i] ? style_green : style_cyan;
    ss_instance << "_registered";
    pcl::visualization::PointCloudColorHandlerCustom<PointType> registered_instance_color_handler (registered_instances[i], registeredStyles.r,
                                                                                                   registeredStyles.g, registeredStyles.b);
    viewer.addPointCloud (registered_instances[i], registered_instance_color_handler, ss_instance.str ());
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, registeredStyles.size, ss_instance.str ());
  }

  while (!viewer.wasStopped ())
  {
    viewer.spinOnce ();
  }

  return (0);
}

Walkthrough

Take a look at the various parts of the code to see how it works.

Input Parameters

std::string model_filename_;
std::string scene_filename_;

//Algorithm params
bool show_keypoints_ (false);
bool use_hough_ (true);
float model_ss_ (0.02f);
float scene_ss_ (0.02f);
float rf_rad_ (0.015f);
float descr_rad_ (0.02f);
float cg_size_ (0.01f);
float cg_thresh_ (5.0f);
int icp_max_iter_ (5);
float icp_corr_distance_ (0.005f);
float hv_resolution_ (0.005f);
float hv_occupancy_grid_resolution_ (0.01f);
float hv_clutter_reg_ (5.0f);
float hv_inlier_th_ (0.005f);
float hv_occlusion_th_ (0.01f);
float hv_rad_clutter_ (0.03f);
float hv_regularizer_ (3.0f);
float hv_rad_normals_ (0.05f);
bool hv_detect_clutter_ (true);

The showHelp function prints out the input parameters accepted by the program, while parseCommandLine binds the user input to the program parameters.

The only two mandatory parameters are model_filename and scene_filename (all other parameters are initialized with default values). Other useful commands are:

  • -k:                          Show keypoints.

  • --algorithm (Hough|GC):      Clustering algorithm used (default Hough).

  • --model_ss val:              Model uniform sampling radius (default 0.02)

  • --scene_ss val:              Scene uniform sampling radius (default 0.02)

  • --rf_rad val:                Reference frame radius (default 0.015)

  • --descr_rad val:             Descriptor radius (default 0.02)

  • --cg_size val:               Cluster size (default 0.01)

  • --cg_thresh val:             Clustering threshold (default 5)

  • --icp_max_iter val:          ICP max iterations number (default 5)

  • --icp_corr_distance val:     ICP correspondence distance (default 0.005)

The Hypothesis Verification parameters are:

  • --hv_clutter_reg val:        Clutter Regularizer (default 5.0)

  • --hv_inlier_th val:          Inlier threshold (default 0.005)

  • --hv_occlusion_th val:       Occlusion threshold (default 0.01)

  • --hv_rad_clutter val:        Clutter radius (default 0.03)

  • --hv_regularizer val:        Regularizer value (default 3.0)

  • --hv_rad_normals val:        Normals radius (default 0.05)

  • --hv_detect_clutter val:     TRUE if clutter detect enabled (default true)

More details on the Global Hypothesis Verification parameters can be found here: A. Aldoma, F. Tombari, L. Di Stefano, M. Vincze, A global hypothesis verification method for 3D object recognition, ECCV 2012.

Helpers

struct CloudStyle
{
    double r;
    double g;
    double b;
    double size;

    CloudStyle (double r,
                double g,
                double b,
                double size) :
        r (r),
        g (g),
        b (b),
        size (size)
    {
    }
};

CloudStyle style_white (255.0, 255.0, 255.0, 4.0);
CloudStyle style_red (255.0, 0.0, 0.0, 3.0);
CloudStyle style_green (0.0, 255.0, 0.0, 5.0);
CloudStyle style_cyan (93.0, 200.0, 217.0, 4.0);
CloudStyle style_violet (255.0, 0.0, 255.0, 8.0);

This simple struct defines color and point-size presets for the clouds being visualized.

Clustering

The code below implements a full Clustering Pipeline: the input of the pipeline is a pair of point clouds (the model and the scene), and the output is

std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f> > rototranslations;

rototranslations represents the list of coarsely transformed models (“object hypotheses”) in the scene.

Take a look at the full pipeline:

  /**
   * Compute Normals
   */
  pcl::NormalEstimationOMP<PointType, NormalType> norm_est;
  norm_est.setKSearch (10);
  norm_est.setInputCloud (model);
  norm_est.compute (*model_normals);

  norm_est.setInputCloud (scene);
  norm_est.compute (*scene_normals);

  /**
   *  Downsample Clouds to Extract keypoints
   */
  pcl::UniformSampling<PointType> uniform_sampling;
  uniform_sampling.setInputCloud (model);
  uniform_sampling.setRadiusSearch (model_ss_);
  uniform_sampling.filter (*model_keypoints);
  std::cout << "Model total points: " << model->size () << "; Selected Keypoints: " << model_keypoints->size () << std::endl;

  uniform_sampling.setInputCloud (scene);
  uniform_sampling.setRadiusSearch (scene_ss_);
  uniform_sampling.filter (*scene_keypoints);
  std::cout << "Scene total points: " << scene->size () << "; Selected Keypoints: " << scene_keypoints->size () << std::endl;

  /**
   *  Compute Descriptor for keypoints
   */
  pcl::SHOTEstimationOMP<PointType, NormalType, DescriptorType> descr_est;
  descr_est.setRadiusSearch (descr_rad_);

  descr_est.setInputCloud (model_keypoints);
  descr_est.setInputNormals (model_normals);
  descr_est.setSearchSurface (model);
  descr_est.compute (*model_descriptors);

  descr_est.setInputCloud (scene_keypoints);
  descr_est.setInputNormals (scene_normals);
  descr_est.setSearchSurface (scene);
  descr_est.compute (*scene_descriptors);

  /**
   *  Find Model-Scene Correspondences with KdTree
   */
  pcl::CorrespondencesPtr model_scene_corrs (new pcl::Correspondences ());
  pcl::KdTreeFLANN<DescriptorType> match_search;
  match_search.setInputCloud (model_descriptors);
  std::vector<int> model_good_keypoints_indices;
  std::vector<int> scene_good_keypoints_indices;

  for (std::size_t i = 0; i < scene_descriptors->size (); ++i)
  {
    std::vector<int> neigh_indices (1);
    std::vector<float> neigh_sqr_dists (1);
    if (!std::isfinite (scene_descriptors->at (i).descriptor[0]))  //skipping NaNs
    {
      continue;
    }
    int found_neighs = match_search.nearestKSearch (scene_descriptors->at (i), 1, neigh_indices, neigh_sqr_dists);
    if (found_neighs == 1 && neigh_sqr_dists[0] < 0.25f)
    {
      pcl::Correspondence corr (neigh_indices[0], static_cast<int> (i), neigh_sqr_dists[0]);
      model_scene_corrs->push_back (corr);
      model_good_keypoints_indices.push_back (corr.index_query);
      scene_good_keypoints_indices.push_back (corr.index_match);
    }
  }
  pcl::PointCloud<PointType>::Ptr model_good_kp (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr scene_good_kp (new pcl::PointCloud<PointType> ());
  pcl::copyPointCloud (*model_keypoints, model_good_keypoints_indices, *model_good_kp);
  pcl::copyPointCloud (*scene_keypoints, scene_good_keypoints_indices, *scene_good_kp);

  std::cout << "Correspondences found: " << model_scene_corrs->size () << std::endl;

  /**
   *  Clustering
   */
  std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f> > rototranslations;
  std::vector < pcl::Correspondences > clustered_corrs;

  if (use_hough_)
  {
    pcl::PointCloud<RFType>::Ptr model_rf (new pcl::PointCloud<RFType> ());
    pcl::PointCloud<RFType>::Ptr scene_rf (new pcl::PointCloud<RFType> ());

    pcl::BOARDLocalReferenceFrameEstimation<PointType, NormalType, RFType> rf_est;
    rf_est.setFindHoles (true);
    rf_est.setRadiusSearch (rf_rad_);

    rf_est.setInputCloud (model_keypoints);
    rf_est.setInputNormals (model_normals);
    rf_est.setSearchSurface (model);
    rf_est.compute (*model_rf);

    rf_est.setInputCloud (scene_keypoints);
    rf_est.setInputNormals (scene_normals);
    rf_est.setSearchSurface (scene);
    rf_est.compute (*scene_rf);

    //  Clustering
    pcl::Hough3DGrouping<PointType, PointType, RFType, RFType> clusterer;
    clusterer.setHoughBinSize (cg_size_);
    clusterer.setHoughThreshold (cg_thresh_);
    clusterer.setUseInterpolation (true);
    clusterer.setUseDistanceWeight (false);

    clusterer.setInputCloud (model_keypoints);
    clusterer.setInputRf (model_rf);
    clusterer.setSceneCloud (scene_keypoints);
    clusterer.setSceneRf (scene_rf);
    clusterer.setModelSceneCorrespondences (model_scene_corrs);

    clusterer.recognize (rototranslations, clustered_corrs);
  }
  else
  {
    pcl::GeometricConsistencyGrouping<PointType, PointType> gc_clusterer;
    gc_clusterer.setGCSize (cg_size_);
    gc_clusterer.setGCThreshold (cg_thresh_);

    gc_clusterer.setInputCloud (model_keypoints);
    gc_clusterer.setSceneCloud (scene_keypoints);
    gc_clusterer.setModelSceneCorrespondences (model_scene_corrs);

    gc_clusterer.recognize (rototranslations, clustered_corrs);
  }

For a full explanation of the above code see 3D Object Recognition based on Correspondence Grouping.

Model-in-Scene Projection

To improve the coarse transformation associated with each object hypothesis, we apply a few ICP iterations. First we create an instances list storing the coarsely transformed models:

  /**
   * Generate a cloud for each instance found
   */
  std::vector<pcl::PointCloud<PointType>::ConstPtr> instances;

  for (std::size_t i = 0; i < rototranslations.size (); ++i)
  {
    pcl::PointCloud<PointType>::Ptr rotated_model (new pcl::PointCloud<PointType> ());
    pcl::transformPointCloud (*model, *rotated_model, rototranslations[i]);
    instances.push_back (rotated_model);
  }
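Under the hood, pcl::transformPointCloud applies the 4x4 rototranslation to every point of the model. As a minimal, self-contained sketch of that per-point operation (names are ours, not PCL's; PCL uses Eigen matrices instead of a flat array):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Apply a 4x4 homogeneous rototranslation T (row-major) to a single 3D
// point p -- what pcl::transformPointCloud does for every point of the
// model. The top-left 3x3 block of T is the rotation, the fourth column
// the translation.
std::array<double, 3> transformPoint (const std::array<double, 16>& T,
                                      const std::array<double, 3>& p)
{
  std::array<double, 3> q{};
  for (int r = 0; r < 3; ++r)
    q[r] = T[r * 4 + 0] * p[0] + T[r * 4 + 1] * p[1]
         + T[r * 4 + 2] * p[2] + T[r * 4 + 3];
  return q;
}
```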

Then, we run ICP to align each instance to the scene, obtaining the registered_instances:

  /**
   * ICP
   */
  std::vector<pcl::PointCloud<PointType>::ConstPtr> registered_instances;
  if (true)
  {
    std::cout << "--- ICP ---------" << std::endl;

    for (std::size_t i = 0; i < rototranslations.size (); ++i)
    {
      pcl::IterativeClosestPoint<PointType, PointType> icp;
      icp.setMaximumIterations (icp_max_iter_);
      icp.setMaxCorrespondenceDistance (icp_corr_distance_);
      icp.setInputTarget (scene);
      icp.setInputSource (instances[i]);
      pcl::PointCloud<PointType>::Ptr registered (new pcl::PointCloud<PointType>);
      icp.align (*registered);
      registered_instances.push_back (registered);
      std::cout << "Instance " << i << " ";
      if (icp.hasConverged ())
      {
        std::cout << "Aligned!" << std::endl;
      }
      else
      {
        std::cout << "Not Aligned!" << std::endl;
      }
    }

    std::cout << "-----------------" << std::endl << std::endl;
  }
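To make the roles of the iteration limit and the convergence check concrete, here is a deliberately tiny, translation-only "ICP" in one dimension (our own toy sketch, not the PCL implementation, which also estimates rotation and accelerates the nearest-neighbor search with a k-d tree): each iteration matches every source point to its nearest target point, then shifts the whole source by the mean residual.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Toy translation-only ICP in one dimension. Each iteration:
//   1. match every source point to its nearest target point;
//   2. shift the whole source by the mean residual.
// Returns the accumulated shift; stops early once the update is negligible,
// mimicking a convergence criterion.
double alignTranslation1D (std::vector<double> src,
                           const std::vector<double>& tgt,
                           int max_iter)
{
  double total_shift = 0.0;
  for (int it = 0; it < max_iter; ++it)
  {
    double mean_residual = 0.0;
    for (double s : src)
    {
      double best = tgt.front ();
      for (double t : tgt)             // brute-force nearest neighbor
        if (std::abs (t - s) < std::abs (best - s))
          best = t;
      mean_residual += best - s;
    }
    mean_residual /= static_cast<double> (src.size ());
    for (double& s : src)
      s += mean_residual;
    total_shift += mean_residual;
    if (std::abs (mean_residual) < 1e-9)
      break;                           // converged
  }
  return total_shift;
}
```

The same match-then-update loop, generalized to 3D rigid transformations, is what icp.align() runs until icp_max_iter_ is reached or convergence is declared.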

Hypotheses Verification

  /**
   * Hypothesis Verification
   */
  std::cout << "--- Hypotheses Verification ---" << std::endl;
  std::vector<bool> hypotheses_mask;  // Mask Vector to identify positive hypotheses

  pcl::GlobalHypothesesVerification<PointType, PointType> GoHv;

  GoHv.setSceneCloud (scene);  // Scene Cloud
  GoHv.addModels (registered_instances, true);  //Models to verify
  GoHv.setResolution (hv_resolution_);
  GoHv.setResolutionOccupancyGrid (hv_occupancy_grid_resolution_);
  GoHv.setInlierThreshold (hv_inlier_th_);
  GoHv.setOcclusionThreshold (hv_occlusion_th_);
  GoHv.setRegularizer (hv_regularizer_);
  GoHv.setRadiusClutter (hv_rad_clutter_);
  GoHv.setClutterRegularizer (hv_clutter_reg_);
  GoHv.setDetectClutter (hv_detect_clutter_);
  GoHv.setRadiusNormals (hv_rad_normals_);

  GoHv.verify ();
  GoHv.getMask (hypotheses_mask);  // i-th element is TRUE if registered_instances[i] is a verified hypothesis

  for (std::size_t i = 0; i < hypotheses_mask.size (); i++)
  {
    if (hypotheses_mask[i])
    {
      std::cout << "Instance " << i << " is GOOD! <---" << std::endl;
    }
    else
    {
      std::cout << "Instance " << i << " is bad!" << std::endl;
    }
  }
  std::cout << "-------------------------------" << std::endl;

GlobalHypothesesVerification takes as input the list of registered_instances and the scene; after calling verify(), getMask() fills hypotheses_mask, a bool vector in which hypotheses_mask[i] is TRUE if registered_instances[i] is a verified hypothesis and FALSE if it has been classified as a false positive (and hence must be rejected).
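Filtering the instances down to the verified ones with such a mask amounts to the following (a sketch with our own toy types: real code would store pcl::PointCloud<PointType>::ConstPtr instead of std::string labels):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Keep only the instances whose mask entry is TRUE, i.e. the hypotheses
// that survived verification. Toy stand-in: labels instead of clouds.
std::vector<std::string> keepVerified (const std::vector<std::string>& instances,
                                       const std::vector<bool>& mask)
{
  std::vector<std::string> verified;
  for (std::size_t i = 0; i < instances.size (); ++i)
    if (mask[i])
      verified.push_back (instances[i]);
  return verified;
}
```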

Visualization

The first part of the visualization code is straightforward: with the -k option, the program displays the good keypoints of the model and of the scene in the style_violet color.

We then iterate over the instances: each instances[i] is displayed in the viewer in the style_red color, while each registered_instances[i] is displayed in one of two colors: style_green if the current instance has been verified (hypotheses_mask[i] is TRUE), style_cyan otherwise.

  /**
   *  Visualization
   */
  pcl::visualization::PCLVisualizer viewer ("Hypotheses Verification");
  viewer.addPointCloud (scene, "scene_cloud");

  pcl::PointCloud<PointType>::Ptr off_scene_model (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr off_scene_model_keypoints (new pcl::PointCloud<PointType> ());

  pcl::PointCloud<PointType>::Ptr off_model_good_kp (new pcl::PointCloud<PointType> ());
  pcl::transformPointCloud (*model, *off_scene_model, Eigen::Vector3f (-1, 0, 0), Eigen::Quaternionf (1, 0, 0, 0));
  pcl::transformPointCloud (*model_keypoints, *off_scene_model_keypoints, Eigen::Vector3f (-1, 0, 0), Eigen::Quaternionf (1, 0, 0, 0));
  pcl::transformPointCloud (*model_good_kp, *off_model_good_kp, Eigen::Vector3f (-1, 0, 0), Eigen::Quaternionf (1, 0, 0, 0));

  if (show_keypoints_)
  {
    CloudStyle modelStyle = style_white;
    pcl::visualization::PointCloudColorHandlerCustom<PointType> off_scene_model_color_handler (off_scene_model, modelStyle.r, modelStyle.g, modelStyle.b);
    viewer.addPointCloud (off_scene_model, off_scene_model_color_handler, "off_scene_model");
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, modelStyle.size, "off_scene_model");
  }

  if (show_keypoints_)
  {
    CloudStyle goodKeypointStyle = style_violet;
    pcl::visualization::PointCloudColorHandlerCustom<PointType> model_good_keypoints_color_handler (off_model_good_kp, goodKeypointStyle.r, goodKeypointStyle.g,
                                                                                                    goodKeypointStyle.b);
    viewer.addPointCloud (off_model_good_kp, model_good_keypoints_color_handler, "model_good_keypoints");
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, goodKeypointStyle.size, "model_good_keypoints");

    pcl::visualization::PointCloudColorHandlerCustom<PointType> scene_good_keypoints_color_handler (scene_good_kp, goodKeypointStyle.r, goodKeypointStyle.g,
                                                                                                    goodKeypointStyle.b);
    viewer.addPointCloud (scene_good_kp, scene_good_keypoints_color_handler, "scene_good_keypoints");
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, goodKeypointStyle.size, "scene_good_keypoints");
  }

  for (std::size_t i = 0; i < instances.size (); ++i)
  {
    std::stringstream ss_instance;
    ss_instance << "instance_" << i;

    CloudStyle clusterStyle = style_red;
    pcl::visualization::PointCloudColorHandlerCustom<PointType> instance_color_handler (instances[i], clusterStyle.r, clusterStyle.g, clusterStyle.b);
    viewer.addPointCloud (instances[i], instance_color_handler, ss_instance.str ());
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, clusterStyle.size, ss_instance.str ());

    CloudStyle registeredStyles = hypotheses_mask[i] ? style_green : style_cyan;
    ss_instance << "_registered";
    pcl::visualization::PointCloudColorHandlerCustom<PointType> registered_instance_color_handler (registered_instances[i], registeredStyles.r,
                                                                                                   registeredStyles.g, registeredStyles.b);
    viewer.addPointCloud (registered_instances[i], registered_instance_color_handler, ss_instance.str ());
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, registeredStyles.size, ss_instance.str ());
  }

  while (!viewer.wasStopped ())
  {
    viewer.spinOnce ();
  }

Compiling and running the program

Create a CMakeLists.txt file and add the following lines into it:

cmake_minimum_required(VERSION 3.5 FATAL_ERROR)

project(global_hypothesis_verification)

#Pcl
find_package(PCL 1.7 REQUIRED)

include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})

add_executable (global_hypothesis_verification global_hypothesis_verification.cpp)
target_link_libraries (global_hypothesis_verification ${PCL_LIBRARIES})
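Assuming an out-of-source build (the directory name is just a convention), a typical build sequence is:

```shell
# From the source folder containing CMakeLists.txt and the .cpp file
mkdir -p build
cd build
cmake ..
make
```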

After you have created the executable, you can then launch it following this example:

>>> ./global_hypothesis_verification milk.pcd milk_cartoon_all_small_clorox.pcd
[Figure: Original Scene Image]

[Figure: Valid Hypothesis (Green) with simple parameters]

You can simulate more false positives by using a larger bin size parameter for the Hough Voting Correspondence Grouping algorithm:

>>> ./global_hypothesis_verification milk.pcd milk_cartoon_all_small_clorox.pcd --cg_size 0.035
[Figure: Valid Hypothesis (Green) among 9 false positives]