Memo for myself about the calculation of the bilinear inverse in 2D.
In this article I'm interested in the following problem: given a quadrilateral lying in a 2D plane, defined by the coordinates of its four corners \(P_0\), \(P_1\), \(P_2\), \(P_3\) in clockwise order, and given a point \(P\) in that plane, how do we calculate the bilinear coordinates \((u,v)\) of that point relative to that quadrilateral?
Inigo Quilez gives a solution here, which unfortunately is buggy, as others also noticed here. No offense to him, he's doing really great stuff and I invite you to have a look at his website. Ivan Popelyshev suggested a fix here, which I haven't tried but which looks fishy. Anyway, Nick Alger gives another idea here: use the Newton-Raphson method. His comment is very detailed and explains why this approach is interesting. I'll go through the math (to make sure I understand it myself) and give a C implementation below (Nick gives a Matlab one).
The bilinear coordinates are defined as follows: $$ Q(u,v)=(1-v)((1-u)P_0+uP_1)+v((1-u)P_3+uP_2) $$
which expands into $$ Q(u,v)=P_0+(P_1-P_0)u+(P_3-P_0)v+(P_0-P_1-P_3+P_2)uv $$
We are looking for the solution \((u^*,v^*)\) such that \(Q(u^*,v^*)=P\). This is a nonlinear system of 2 equations in 2 unknowns. It can be solved with the Newton-Raphson method. To apply this method we need an initial estimate of the solution \((u_0,v_0)\) and the inverse of the Jacobian matrix \(J\) of \(Q\) (the matrix of the partial derivatives of \(Q\) with respect to \(u\) and \(v\)). Applying the method consists of iteratively computing \((u_{i+1},v_{i+1})=(u_i,v_i)-J^{-1}(u_i,v_i)(Q(u_i,v_i)-P)\); \((u_i,v_i)\) converges toward \((u^*,v^*)\).
For the initial estimate, any point in the plane is fine. Assuming that in practice \(P\) is most often inside the quadrilateral or near it, choosing \((u_0,v_0)=(0.5, 0.5)\) seems legit.
The partial derivatives of \(Q\) are: $$ \begin{array}{l} \frac{\partial Q}{\partial u}(u,v)=(P_1-P_0)+(P_0-P_1-P_3+P_2)v\\ \frac{\partial Q}{\partial v}(u,v)=(P_3-P_0)+(P_0-P_1-P_3+P_2)u\\ \end{array} $$ hence the Jacobian is $$ J(u,v)=\left[\begin{array}{cc} (P_1-P_0)_x+(P_0-P_1-P_3+P_2)_xv & (P_3-P_0)_x+(P_0-P_1-P_3+P_2)_xu\\ (P_1-P_0)_y+(P_0-P_1-P_3+P_2)_yv & (P_3-P_0)_y+(P_0-P_1-P_3+P_2)_yu\\ \end{array}\right] $$ The Jacobian being a 2×2 matrix, its inverse is easily calculated as $$ J^{-1}(u,v)=\left[\begin{array}{cc} dJ_{1,1} & -dJ_{0,1}\\ -dJ_{1,0} & dJ_{0,0}\\ \end{array}\right] $$ where \(d=1/(J_{0,0}J_{1,1}-J_{0,1}J_{1,0})\) and \(J_{i,j}\) is the value at the i-th row and j-th column of \(J\). Note that if the quadrilateral is convex and not degenerate, the determinant never vanishes on the unit square, so the inverse is guaranteed to be calculable. If the determinant does reach zero (degenerate quadrilateral, or certain \((u,v)\) for a non-convex one), \(d\) is not finite. This case can be handled by checking \(d\) with isnan()/isinf(), or with a branchless programming trick (see below).
The implementation in C becomes:
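Here is a minimal sketch of what it can look like. The Vec2 struct, the fixed iteration count and the simple determinant guard (used here in place of the branchless trick mentioned above) are illustrative choices rather than a definitive version:

```c
#include <math.h>

typedef struct { double x, y; } Vec2;

/* Forward bilinear map:
   Q(u,v) = P0 + (P1-P0)u + (P3-P0)v + (P0-P1-P3+P2)uv */
static Vec2 Bilinear2D(const Vec2 p[4], double u, double v) {
  Vec2 q;
  q.x = p[0].x + (p[1].x - p[0].x) * u + (p[3].x - p[0].x) * v
      + (p[0].x - p[1].x - p[3].x + p[2].x) * u * v;
  q.y = p[0].y + (p[1].y - p[0].y) * u + (p[3].y - p[0].y) * v
      + (p[0].y - p[1].y - p[3].y + p[2].y) * u * v;
  return q;
}

/* Newton-Raphson bilinear inverse: find (u,v) such that
   Bilinear2D(p, u, v) == pt. */
void BilinearInverse2D(const Vec2 p[4], Vec2 pt, double* u, double* v) {
  /* Coefficients of Q(u,v) = P0 + b*u + c*v + e*u*v */
  const Vec2 b = { p[1].x - p[0].x, p[1].y - p[0].y };
  const Vec2 c = { p[3].x - p[0].x, p[3].y - p[0].y };
  const Vec2 e = { p[0].x - p[1].x - p[3].x + p[2].x,
                   p[0].y - p[1].y - p[3].y + p[2].y };
  *u = 0.5;  /* initial estimate (u0,v0) = (0.5, 0.5) */
  *v = 0.5;
  for (int i = 0; i < 16; ++i) {
    /* Residual r = Q(u,v) - P */
    Vec2 r = Bilinear2D(p, *u, *v);
    r.x -= pt.x;
    r.y -= pt.y;
    /* Jacobian (row-column indexing): first column dQ/du, second column dQ/dv */
    const double j00 = b.x + e.x * (*v), j01 = c.x + e.x * (*u);
    const double j10 = b.y + e.y * (*v), j11 = c.y + e.y * (*u);
    const double det = j00 * j11 - j01 * j10;
    if (fabs(det) < 1e-12) break;  /* singular Jacobian: give up */
    /* Newton step: (u,v) -= J^{-1} * r */
    *u -= ( j11 * r.x - j01 * r.y) / det;
    *v -= (-j10 * r.x + j00 * r.y) / det;
  }
}
```

Newton-Raphson converges quadratically near the solution, so a small fixed iteration count is usually plenty and keeps the loop simple.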
Unit tests:
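For example, tests along these lines (the quadrilateral and the tolerance are arbitrary, and the code assumes it is compiled together with the sketch above): map a known \((u,v)\) forward with the bilinear formula and check that BilinearInverse2D() recovers it.

```c
#include <assert.h>
#include <stdio.h>

int main(void) {
  /* An arbitrary convex quadrilateral, corners in clockwise order. */
  const Vec2 quad[4] = { {0.0, 0.0}, {0.5, 2.0}, {2.0, 2.5}, {1.5, -0.5} };
  const double cases[][2] = { {0.5, 0.5}, {0.0, 0.0}, {1.0, 1.0}, {0.25, 0.75} };
  for (int i = 0; i < 4; ++i) {
    const double u0 = cases[i][0], v0 = cases[i][1];
    Vec2 pt = Bilinear2D(quad, u0, v0);    /* map a known (u,v) forward... */
    double u, v;
    BilinearInverse2D(quad, pt, &u, &v);   /* ...and check we get it back */
    assert(fabs(u - u0) < 1e-6);
    assert(fabs(v - v0) < 1e-6);
  }
  printf("all tests passed\n");
  return 0;
}
```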
Note that if the quadrilateral is not degenerate, the convergence is guaranteed. If it is degenerate, meaningless coordinates are returned. If one needs to check that the convergence succeeded, the norm of the final residual \(Q(u_i,v_i)-P\) in BilinearInverse2D() gives an evaluation of that success. If successful, it should be almost equal to 0.0; if not, the larger it is, the worse the result. The function could easily be modified to return this norm so the user can check it.
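For instance, a possible variant (hypothetical name and signature, building on the sketch above):

```c
/* Hypothetical variant returning the norm of the final residual,
   so the caller can check whether the iterations actually converged. */
double BilinearInverse2DChecked(const Vec2 p[4], Vec2 pt, double* u, double* v) {
  BilinearInverse2D(p, pt, u, v);
  Vec2 r = Bilinear2D(p, *u, *v);  /* residual Q(u,v) - P */
  r.x -= pt.x;
  r.y -= pt.y;
  return sqrt(r.x * r.x + r.y * r.y);  /* ~0.0 on success, larger means worse */
}
```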