Color correction using the QP203 color chart

A generic optimiser is definitely a must-have in one's toolbox. It will save the day when there is no other solution, when you're too lazy to find a proper one, or when you're too dumb to know how to implement one. In my old framework I used a genetic algorithm for that purpose, and it proved handy more than once, either when I was doing competitive programming on CodinGame, or even on some occasions at my workplace. In LibCapy I've chosen to try differential evolution. I had never heard of it before, and I found its simplicity particularly attractive given how lazy I am.
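To give an idea of how little there is to it, here is a minimal sketch of the classic DE/rand/1/bin variant of differential evolution. This is illustrative only, not the CapyDiffEvo implementation; all names and default parameters are mine.

```python
# Minimal differential evolution (DE/rand/1/bin), for illustration only.
import random

def diff_evo(fitness, dim, bounds, pop_size=20, f=0.8, cr=0.9, n_gen=200):
    random.seed(0)  # deterministic for the example
    lo, hi = bounds
    # Initialise the population uniformly inside the bounds.
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    scores = [fitness(ind) for ind in pop]
    for _ in range(n_gen):
        for i in range(pop_size):
            # Pick three distinct individuals other than the current one.
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            # Mutation: add the weighted difference of two individuals to a third.
            mutant = [pop[a][k] + f * (pop[b][k] - pop[c][k]) for k in range(dim)]
            # Binomial crossover with the current individual.
            j_rand = random.randrange(dim)
            trial = [mutant[k] if (random.random() < cr or k == j_rand)
                     else pop[i][k] for k in range(dim)]
            # Greedy selection: keep the trial if it improves the fitness.
            s = fitness(trial)
            if s < scores[i]:
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]

# Example: minimise the sphere function, whose optimum is at the origin.
sol, val = diff_evo(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
```

That's the whole algorithm: mutate, cross over, keep the better of the two. No gradient, no problem-specific knowledge beyond the fitness function.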

After implementing it as the CapyDiffEvo class, I was looking for a practical example of use. I then remembered I've had a QP203 color chart for years, which I've never used for lack of software able to handle it (I was never able to make the provided freeware work). That's a nice and useful optimisation problem, so I decided to give it a try.

The QP203 color chart looks like the image below. It's a simple piece of cardboard with 7x5 color patches.

These color patches can be used to correct the colors of pictures toward the reference colors of the chart, or to match one picture against another. It is useful when pictures are taken in an uncontrolled environment, where the lighting conditions bias the colors. Nowadays cameras are often able to correct for these biases; that's what the 'sunny', 'cloudy', 'neon', ... modes (or whatever they are called on your camera) are for. However these modes use generic models, while having a few points of reference with known colors allows for better results, in theory at least.

Some cameras have a functionality to adjust their color balance based on a white or grey surface, which is also available on the QP203. They don't, however, integrate such a functionality for a complete color chart. There exist many different charts, made by independent manufacturers, and as far as I know there is no camera whose embedded software includes the needed database of color charts and the detection/correction algorithm. On a modern camera that doesn't seem out of reach, and it makes me wonder why. There is certainly some dirty business reason on the database side but, at least, leaving users the possibility to input the reference colors themselves...? Anyway, my camera can't, so I need to do the color correction as a post-processing step.

Edit on 2022/03/19:
I came across the FLIR cameras from Teledyne, which don't integrate automatic correction from a color chart, but at least have an integrated color correction matrix which can be set programmatically, as explained here. Nice!

Alexander Behringer explains in detail how color correction can be implemented in "Camera Array Calibration with Color Rendition Charts". In short, the chart's colors in the picture are linearised to account for the non-linear response of camera sensors, then a transformation matrix is applied to match the picture's colors to the reference colors. The color correction then amounts to optimising the linearisation coefficients and the matrix values. The thesis "Color Correction and Contrast Enhancement for Natural Images and Videos" by Qi-Chong Tian is also a precious reference for color correction without a reference color chart.

For my own implementation, I've followed Behringer's work. His correction model uses the raw pixel values of the picture. That's no doubt the right way to do it; unfortunately my modest library doesn't include a RAW file decoder. So I'll do what I can with the RGB values of a standard image file format, and ignore the camera response model. Still, I wanted to keep the overall brightness of the picture. The reference chart values are obtained in a photo-studio-like environment with perfectly white, bright illumination. Matching a generally under-illuminated picture directly to these values results in completely burnt colors. To compensate, I scale the brightness range of the reference color chart to that of the picture's color chart.

Behringer also describes a nice way to automatically detect the chart in the picture. I've ignored that part (I hope to come back to it one day) and focused on the color correction itself. Instead, I take the coordinates of the four corners of the area containing the color patches in the picture, and use them as the control points of a 2D->2D Bezier surface of order 1. The coordinates of the center of the color patch at column \(i\) and row \(j\) can then be calculated with Bezier((0.5+i)/5, (0.5+j)/7). To limit the influence of noise in the picture, I smooth the patch color with a 5x5 pixel kernel at its center.
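Concretely, an order-1 Bezier surface is nothing more than bilinear interpolation of the four corner control points. A minimal sketch, using the (0.5+i)/5, (0.5+j)/7 parametrisation above (the corner coordinates and helper name are made up for the example, not taken from LibCapy):

```python
# Centre of the patch at column i, row j, via bilinear interpolation
# (an order-1 Bezier surface) of the four corners of the chart area.

def patch_center(corners, i, j, nb_col, nb_row):
    """corners: ((x,y) top-left, top-right, bottom-left, bottom-right)."""
    u = (0.5 + i) / nb_col
    v = (0.5 + j) / nb_row
    tl, tr, bl, br = corners
    # Lerp along the top and bottom edges, then between them.
    top = (tl[0] + u * (tr[0] - tl[0]), tl[1] + u * (tr[1] - tl[1]))
    bot = (bl[0] + u * (br[0] - bl[0]), bl[1] + u * (br[1] - bl[1]))
    return (top[0] + v * (bot[0] - top[0]), top[1] + v * (bot[1] - top[1]))

# Example with an axis-aligned 500x700 pixel chart area, 5 columns x 7 rows.
corners = ((0, 0), (500, 0), (0, 700), (500, 700))
center = patch_center(corners, 0, 0, 5, 7)  # centre of the top-left patch
```

Because the surface is defined by the four corners only, this also handles a chart photographed at a slight perspective, as long as the distortion stays close to affine.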

My implementation of the color correction is then as follows. Let \((r_i,g_i,b_i),i\in[0,34]\) be the RGB values of the color patches in the image, and \((r'_i,g'_i,b'_i),i\in[0,34]\) the reference RGB values of the color patches of the QP203 color chart. Let \((r^*_i,g^*_i,b^*_i),i\in[0,34]\) be the scaled reference RGB values, such that \(c^*_i=lerp(c'_i,[m',M'],[m,M]),c\in\{r,g,b\}\), where \(m=\min_{c\in\{r,g,b\},i\in[0,34]}(c_i)\), \(M=\max_{c\in\{r,g,b\},i\in[0,34]}(c_i)\), \(m'=\min_{c\in\{r',g',b'\},i\in[0,34]}(c_i)\), \(M'=\max_{c\in\{r',g',b'\},i\in[0,34]}(c_i)\), and \(lerp(x,A,B)\) is the linear interpolation of \(x\) from the interval \(A\) to the interval \(B\). Then, find the 3x3 matrix \(T\) which minimises \(\sum_{i=0}^{34}||(r^*_i,g^*_i,b^*_i)-T.(r_i,g_i,b_i)||^2\). Note that, as the purpose of this work was to test differential evolution, that's what I use to find \(T\); however, given that this is a linear least-squares problem, it could also be solved directly by regression. The corrected RGB values \((\hat{r},\hat{g},\hat{b})\) of a given pixel of the picture are finally obtained by multiplying this matrix with that pixel's RGB values \((r,g,b)\): \((\hat{r},\hat{g},\hat{b})=T.(r,g,b)\).
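The pipeline above can be sketched end to end on synthetic data: scale the reference colors to the picture's brightness range, fit the 3x3 matrix, then apply it to a pixel. For brevity this sketch uses the direct least-squares solution mentioned above rather than differential evolution; the data and variable names are illustrative only.

```python
# Color-correction pipeline sketch on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1.0, (35, 3))       # reference patch colors (r',g',b')
true_mat = np.array([[0.9, 0.05, 0.0],
                     [0.0, 1.1, 0.05],
                     [0.05, 0.0, 0.8]])
img = ref @ true_mat.T                     # measured patch colors (r,g,b)

# Scale the references to the picture's brightness range (the lerp step).
m, M = img.min(), img.max()
mp, Mp = ref.min(), ref.max()
scaled = (ref - mp) / (Mp - mp) * (M - m) + m

# Fit T minimising sum_i ||scaled_i - T.img_i||^2. Stacking the patches
# row-wise, this means img @ T^t ~ scaled, a least-squares problem per channel.
Tt, *_ = np.linalg.lstsq(img, scaled, rcond=None)
fitness = np.sum((scaled - img @ Tt) ** 2)

# Correct an arbitrary pixel with the fitted matrix.
pixel = np.array([0.2, 0.4, 0.6])
corrected = Tt.T @ pixel
```

The `fitness` value here is exactly the quantity differential evolution would minimise; with DE, `Tt` would simply be the best 9-dimensional individual reshaped into a matrix.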

To test my implementation I've set up a small scene in my room with objects of various colors, laid on the whitest piece of paper I had. I shot the scene with a Nikon D5100 camera, varying the camera and light settings. To make the correction obvious and check how far it could go, I've intentionally chosen very poor settings. The two examples below show a 'good' picture (camera in auto mode, full white lighting) and a 'wrong' one (camera in an inappropriate light mode). The color cast is quite obvious in the 'wrong' one, but even the 'good' one is a bit warm.

After correction, the pictures look like below. (Use the select box to navigate; the top picture is before correction, the bottom one after correction.)

The color cast is clearly corrected in each picture. The overall brightness is also preserved, thanks to the step \(c^*_i=lerp(c'_i,\dots)\). As an example, the corrected image #04 with/without this step looks like this:

The fitness values before/after correction are as follows.

image | initial fitness | final fitness

Differential evolution has successfully minimised the color bias. Whether it did so efficiently would require comparing it to other solutions (different optimisers, or direct regression). In that respect, this work also makes for a nice practical test case for that kind of comparison. For now, the results are visually satisfying to me and I'm looking forward to an occasion to actually use it, maybe for another insect shoot.

See also here how I've made the CLI application using this color correction implementation.

This article is followed by another one where I compare the results if using the L*a*b* color space instead of sRGB.

Copyright 2021-2022 Baillehache Pascal