Weird normal estimation

The problem

I bumped into a weird case when estimating normals using pcl::IntegralImageNormalEstimation: some of the normals come out flipped. The normals on the flat plane mostly face upwards correctly, but most of the normals on the object face downwards.

Object point cloud with flipped normals from a high view angle.

The wrong normals can be seen more clearly when looking at the object from a low angle:

Object point cloud with flipped normals from a low view angle.

This is the snippet I used to estimate the normals, which is just an ordinary pcl::IntegralImageNormalEstimation setup:

pcl::IntegralImageNormalEstimation<pcl::PointXYZI, pcl::Normal> ne;
ne.setNormalEstimationMethod(ne.COVARIANCE_MATRIX);
ne.setMaxDepthChangeFactor(0.02f);
ne.setNormalSmoothingSize(10.0f);
ne.setInputCloud(cloud);
ne.compute(*normals);

What caused the problem?

After some investigation, it turned out that the problem was due to transforming the point cloud to a different coordinate system in a previous step.

When a point cloud is captured, you normally get it with the origin at the camera's optical frame, in front of the object, and all points in the cloud are expressed relative to that origin. Normal estimation then orients each normal toward the side of the surface that the camera can see; orienting it toward the hidden side would not make sense.

Ordinary point cloud captured by a camera.

In my case, to make it easier to try out different camera setups or to use a cloud stitched from multiple point clouds, I had transformed the point cloud so that its origin lies at a point on the flat plane, decoupling the subsequent processing from the camera pose. That is why, in the earlier screenshots, the origin marker is lying on the flat plane (beneath the object), whereas in the screenshot above it is floating in the air (in front of the object).

The solution(s)

I found two fixes to the problem:

Solution 1: a quick fix

There is a setViewPoint(float vpx, float vpy, float vpz) method in pcl::IntegralImageNormalEstimation. You can use the x, y, z translation values from the transformation matrix to set the viewpoint back to the camera's original position, or just use a safe value like (0, 0, 1) in this case.

pcl::IntegralImageNormalEstimation<pcl::PointXYZI, pcl::Normal> ne;
ne.setNormalEstimationMethod(ne.COVARIANCE_MATRIX);
ne.setMaxDepthChangeFactor(0.02f);
ne.setNormalSmoothingSize(10.0f);
ne.setInputCloud(cloud);
+ ne.setViewPoint(0.0f, 0.0f, 1.0f);
ne.compute(*normals);

However, that means you have to either set a magic number like (0, 0, 1) or keep track of the original camera position somewhere.

Solution 2: keep track of the camera position in pcl::PointCloud

A better solution is to record the camera position in the pcl::PointCloud itself when we first transform the point cloud. pcl::PointCloud has sensor_origin_ and sensor_orientation_ members that store the camera position and orientation.

pcl::PointCloud<pcl::PointXYZI>::Ptr output_cloud(new pcl::PointCloud<pcl::PointXYZI>);

// Build the transform from the camera frame to the new plane-based frame.
Eigen::Affine3f transform = Eigen::Affine3f::Identity();
transform.translation() << x, y, z;
Eigen::Quaternionf q(qw, qx, qy, qz);  // Eigen quaternion order: w, x, y, z
transform.rotate(q);

pcl::transformPointCloud(*input_cloud, *output_cloud, transform);

// Record where the camera sits in the new coordinate system.
output_cloud->sensor_origin_ = Eigen::Vector4f(x, y, z, 1.0f);
output_cloud->sensor_orientation_ = Eigen::Quaternionf(qw, qx, qy, qz);

Once you have sensor_origin_ and sensor_orientation_ set properly, you can get rid of the setViewPoint call in the normal estimation step.

pcl::IntegralImageNormalEstimation<pcl::PointXYZI, pcl::Normal> ne;
ne.setNormalEstimationMethod(ne.COVARIANCE_MATRIX);
ne.setMaxDepthChangeFactor(0.02f);
ne.setNormalSmoothingSize(10.0f);
ne.setInputCloud(cloud);
- ne.setViewPoint(0.0f, 0.0f, 1.0f);
ne.compute(*normals);

Both solutions result in correctly estimated normals:

Object point cloud with correctly oriented normals from a high view angle.
Object point cloud with correctly oriented normals from a low view angle.

An extra bonus of setting sensor_origin_ and sensor_orientation_ is that they will be preserved as the VIEWPOINT field when saving the point cloud to a PCD file, which can be useful for future reference and can be leveraged by other tools.

$ head output_cloud.pcd
# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z normal_x normal_y normal_z intensity curvature
SIZE 4 4 4 4 4 4 4 4
TYPE F F F F F F F F
COUNT 1 1 1 1 1 1 1 1
WIDTH 1280
HEIGHT 720
VIEWPOINT 0.00732 0.018473 0.230759 0.00546692 0.99976 -0.0201687 -0.0065679
POINTS 921600
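The VIEWPOINT field stores the sensor pose as "tx ty tz qw qx qy qz" (translation first, then quaternion). As an illustrative sketch (no PCL dependency; parseViewpointLine and the Viewpoint struct are my own names, not PCL API), reading that field back from a header line could look like:

```cpp
#include <sstream>
#include <string>

// The PCD VIEWPOINT header field encodes the sensor pose as
// "VIEWPOINT tx ty tz qw qx qy qz".
struct Viewpoint {
    float tx, ty, tz;      // becomes sensor_origin_
    float qw, qx, qy, qz;  // becomes sensor_orientation_
};

// Parse a single "VIEWPOINT ..." header line; returns false if the line
// has the wrong tag or too few numbers.
bool parseViewpointLine(const std::string& line, Viewpoint& vp) {
    std::istringstream iss(line);
    std::string tag;
    iss >> tag;
    if (tag != "VIEWPOINT") return false;
    return static_cast<bool>(iss >> vp.tx >> vp.ty >> vp.tz
                                 >> vp.qw >> vp.qx >> vp.qy >> vp.qz);
}
```

In practice you would not parse this by hand: pcl::io::loadPCDFile fills in sensor_origin_ and sensor_orientation_ from this field automatically.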

Remark

The color of the points in this post represents the z value, which is the distance from the geometry origin, not the sensor origin. If you look carefully, you might notice that although the transformed point cloud (with the sensor origin set) and the original point cloud have their origin markers at the same relative position, the colors of the points differ noticeably. That is because in the original point cloud the sensor origin coincides with the geometry origin, while in the transformed point cloud the geometry origin has been moved to a point on the flat plane and the sensor origin remains at the camera position.

Point cloud before transformation.
Point cloud after transformation and sensor origin fixed.
