Point cloud nearest neighbour search using a B-Tree

Nearest neighbour search in a point cloud consists of, given a query point $$q$$ and a point cloud $$P$$, finding the point $$p$$ in $$P$$ which minimizes the distance between $$q$$ and $$p$$ for a given metric. The brute force method iterates through all points of $$P$$ and remembers the one nearest to $$q$$. For very large point clouds containing millions of points, like today's scans of real-world objects or machine learning datasets, more efficient methods become necessary. In this article I'll show how to use the B-Tree data structure to search for the nearest neighbour in a point cloud more efficiently than brute force.

The improvement over the brute force method consists of partitioning the point cloud into clusters such that, while looking for the nearest neighbour in one of them, we gain information about the others, with a view to reducing the number of remaining clusters to be checked. The data structures used to represent these space partitions are called metric trees. For example, a KD-Tree recursively splits the space in two with a hyperplane, such that the search in one half may lead to the conclusion that the nearest neighbour can't be in the other half, avoiding checking any of the points in that other half.

As the efficiency of space partitioning methods directly depends on the way the space is partitioned, many options have been studied. Besides KD-Trees, Ball-Trees are another example. They split the space with nested hyperballs instead of hyperplanes. There are several variants of Ball-Trees, for example: VP-Trees use balls centered on points of the point cloud, with radii chosen such that they split the remaining points into two equal-size sets; M-Trees also use balls centered on points of the point cloud, but enhance them with a covering radius and a distance-to-parent-node property.

After studying the B-Tree data structure, I was looking for an example use case, and had the idea it might be useful for nearest neighbour search. Indeed, M-Trees build on that data structure, and the idea I came up with is a variant of Ball-Trees. It is based on the following observation: if $$p$$ is a point in $$P$$, then the nearest point in $$P$$ to $$q$$ necessarily lies within the ring of radii $$[M(O,q)-M(p,q),M(O,q)+M(p,q)]$$ around a chosen origin $$O$$, where $$M(\cdot,\cdot)$$ is the metric. This follows from the triangle inequality: the nearest point $$p^*$$ satisfies $$M(q,p^*)\le M(q,p)$$, hence $$M(O,q)-M(p,q)\le M(O,p^*)\le M(O,q)+M(p,q)$$. I'm using the term "ring" loosely here; "spherical shell" is probably closer to what I mean.

Then, one can construct a representation of the point cloud with a B-Tree where the separators are the points of the point cloud, the branches between separators define the rings, and elements are compared by their distance to the origin under the given metric. Searching for the nearest point to a query point starts with the calculation of the distance to an initial point $$p_0$$ in the point cloud. This gives an initial range for the search ring. The B-Tree is then traversed while pruning branches that are outside the current range (i.e. points outside the current search ring) and updating the current range (i.e. shrinking the search ring whenever a nearer point is found).

Making the reasonable assumption that the user will query several points, we can factor out the computations that are independent of $$q$$. To prune branches we need to check whether $$M(O,s)\in[M(O,q)-M(p,q),M(O,q)+M(p,q)]$$, where $$s$$ is a separator of the B-Tree. $$M(O,s)$$ is independent of $$q$$ and can therefore be stored in the corresponding node of the B-Tree to avoid recomputing it at every query.

The origin $$O$$ relative to which $$M(O,s)$$ and $$M(O,q)$$ are calculated can be chosen in whatever way is convenient for the context. Take the example of a point cloud coming from a 3D scan. One may imagine use cases where that model is translated, rotated and/or scaled. Translation doesn't invalidate the B-Tree, nor do rotation and uniform scaling, as long as they are always applied relative to the same origin position (the center of the point cloud, for example). Thus a careful choice of that origin allows the same B-Tree to be reused in some cases even when its point cloud is transformed.

The choice of a good initial point $$p_0$$ can also speed up the search. The nearer it is to the result point, the thinner the initial search ring, giving a boost at the start of the search. Allowing the user, who may have a good guess of an approximate nearest point, to choose that initial point is an important design choice in the implementation.

The pseudocode for the creation of the B-Tree is as follows:

Super simple, not much to say, except that you may prefer a reference to the point instead of its index, and may want to add a scale factor to handle transformations of the point cloud as explained above. The pseudocode for the nearest neighbour search is as follows:

To evaluate the performance of this method, I've used the following models from the Stanford 3D Scanning Repository: the Stanford Bunny (35,947 points), the Happy Buddha (543,652 points), and the Dragon (437,645 points). I've verified my implementation by checking that it returns a point at the same distance as the one found by a brute force search (not necessarily the same point, as there can be several nearest neighbours at the same distance). As the performance measure, I've used the number of distance comparisons relative to the brute force method (so the result is in $$[0,1]$$: $$1$$ means as many as brute force, i.e. no gain, and lower is better). For each model, I've performed 1000 queries equally distributed within the bounding box of the model scaled by 1 and 3 successively. Finally, I've run the tests for a number of separators per node in the B-Tree ranging from 3 to 3003. The results are as follows.

The good news is that it actually improves over brute force. So my idea isn't completely ridiculous, and my implementation probably isn't completely messed up. Great!

As the number of separators grows, the gain increases quickly until an optimal value, from which it decreases slowly. During the search, when checking the separators of a node, the more there are, the higher the probability of finding a nearer point, and thus the higher the probability of pruning more branches with a thinner ring. But at the same time, more separators mean more comparisons per node, so beyond an optimal number of separators the increase in the number of comparisons outweighs the increased probability of thinning the search ring further.

The bigger the point cloud, the higher the gain. Brute force grows linearly with the point cloud size, while the cost of searching a tree structure with effective pruning grows much more slowly (roughly logarithmically). Therefore the relative gain increases with size.

The larger the range of query points, the lower the gain. The thinner the search ring, the stronger the pruning (and thus the gain). However, the ring thickness can't be less than $$2M(q,p^*)$$, where $$p^*$$ is the nearest point to $$q$$ in $$P$$. Moreover, if $$M(O,q)\lt M(O,p^-)$$ or $$M(O,q)\gt M(O,p^+)$$, where $$p^-$$ and $$p^+$$ are respectively the nearest and farthest points in $$P$$ from $$O$$, only the outer or inner side, respectively, of the search ring participates in the pruning. Therefore the gain is reduced for query points outside the ring $$[M(O,p^-),M(O,p^+)]$$.

In conclusion, the method introduced in this article to find the nearest neighbour in a point cloud does perform better than a simple brute force method. However, it shows poor performance when the query point is far away from the point cloud. I haven't made a comparison with a similar method like the M-Tree, but I don't expect my method would compare well, even when the query is near the point cloud.

The main flaw comes from the use of a ring centered at a given point. While it does prune points out of the point cloud, it also leaves way too many compared to a method like the M-Tree. In the worst case, the search spends its time checking points on the "wrong" side of the ring (the side opposite $$q$$ relative to $$O$$), making little or no progress in reducing the search ring at each step.

Anyway, that was a fun study to do. I'm glad I somehow rediscovered the concept of the Ball-Tree, and it was a solid validation of the B-Tree implementation I've just added to LibCapy!

Edit on 2024/08/01: adding more results below.

Quick update. I was wondering about the influence of the origin $$O$$, so I ran my tests again using the center of the bounding box instead of the origin of the coordinate system. The results are as follows:

We clearly see that the position of $$O$$ has a strong influence on the gain, sometimes for the better, sometimes for the worse. I won't speculate about why for now, as I don't have the time and no clear idea. But it clearly shows that, in the context of repeated queries on the same point cloud, one would be interested in searching for the optimal position of $$O$$ as a preprocessing step.

Edit on 2024/08/27: adding more results below.

As I had already implemented differential evolution in LibCapy, it was also a good occasion to use it to search for the optimal $$O$$. I've obtained the following results (for a number of separators set to 903):

And running the tests again using these results gives:

Good news: differential evolution was able to find a better solution in all cases! Also, the optimal $$O$$ doesn't seem to follow an obvious pattern, so for now I'll conclude that anyone wishing to use the method introduced here for nearest neighbour search would have to calculate it for their own use case.

2024-07-29