A formal article is available here.
In my article about color correction I referred to Alexander Behringer's thesis. It not only describes how to perform color correction based on a color chart, but also how to automatically detect that color chart in the image. I then decided to keep working on this subject and started implementing the solution described by Behringer.
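To make the color-correction half concrete, here is a minimal sketch of the general technique (my own illustration, not code from the thesis): fit an affine color transform mapping the measured patch colors to the chart's reference colors by least squares, then apply it to the image. The function names and all patch values below are made up for the example.

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Fit an affine transform (3x3 matrix plus offset) mapping measured
    patch colors to reference colors, via least squares."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Augment with a constant column so the fit includes an offset term.
    A = np.hstack([measured, np.ones((len(measured), 1))])
    coeffs, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return coeffs  # shape (4, 3)

def apply_correction(coeffs, pixels):
    """Apply the fitted transform to an (N, 3) array of RGB values."""
    pixels = np.asarray(pixels, dtype=float)
    A = np.hstack([pixels, np.ones((len(pixels), 1))])
    return A @ coeffs

# Toy usage with 4 hypothetical patches; a real chart has many more,
# making the fit an over-determined least-squares problem.
measured = [[90, 130, 70], [200, 220, 190], [40, 60, 35], [150, 170, 140]]
reference = [[100, 100, 100], [210, 190, 200], [50, 30, 40], [160, 140, 150]]
coeffs = fit_color_correction(measured, reference)
corrected = apply_correction(coeffs, measured)
```

With only 4 patches the system is exactly determined, so the patches are corrected perfectly; robustness comes from having more patches than unknowns.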
While implementing his solution, I felt it could be improved. For example, Behringer himself acknowledges that the case where no patch can be identified in a given row or column is left unimplemented. Also, the transformation accounting for rotation is based on comparing the patches' colors with the corrected colors, which seems to me very sensitive to the very color deviation we are trying to correct in the first place. Finally, what to do when some areas do not correspond to the color chart's patches (for example, if the image contains a tiled wall or a checkerboard) is not clearly explained.
These points led me to devise my own method to locate the color chart. It works as follows (details in the PDF linked above):
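The full pipeline is in the paper, but to give a flavor of one generic building block that chart detectors commonly rely on (this is my own illustrative sketch, not the method from the paper): candidate patches can be located as uniform square regions, for instance windows whose per-channel color variance stays low. The function name and threshold are arbitrary choices for the example.

```python
import numpy as np

def low_variance_windows(image, size, threshold):
    """Return top-left corners of size x size windows whose per-channel
    color variance stays below threshold (candidate color patches)."""
    h, w, _ = image.shape
    corners = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            window = image[y:y + size, x:x + size]
            if window.var(axis=(0, 1)).max() < threshold:
                corners.append((y, x))
    return corners

# Synthetic 2x2 "chart" made of four flat 8x8 patches; all four windows
# have zero variance, so all four corners are reported.
img = np.zeros((16, 16, 3))
img[:8, :8] = [200, 30, 30]
img[:8, 8:] = [30, 200, 30]
img[8:, :8] = [30, 30, 200]
img[8:, 8:] = [200, 200, 30]
patches = low_variance_windows(img, size=8, threshold=1.0)
# → [(0, 0), (0, 8), (8, 0), (8, 8)]
```

On real photographs the windows would slide with a small stride and the threshold would have to tolerate sensor noise; this toy version only shows the idea.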
Finding datasets of images including a color chart to test my method was a bit of a hassle. I couldn't find any at all with the QP203 color chart I own. The only ones I could find used a different chart: several datasets provided by Middlebury College (available here, reference paper here), which use the X-Rite ColorChecker (specifications available here). If you know of any others (no matter which color chart), please let me know!
My method works with any color chart, but I nonetheless took the time to build my own dataset with the QP203 color chart to complement the Middlebury datasets. It was quite time-consuming, but I managed to gather 41 pictures, as varied as I could make them. This dataset is described in another article and is available for everyone to use.
X-Rite, the manufacturer of the ColorChecker, is kind enough to provide free software. This gave me a reference against which to evaluate the performance of my method, at least on the Middlebury datasets. The location failure ratios were as follows:
Dataset | X-Rite | My method |
---|---|---|
biwall | 6.6% | 0.0% |
chalk | 6.2% | 0.0% |
color-m | 17.5% | 1.3% |
color-m2 | 0.0% | 0.0% |
color-m3 | 0.0% | 0.0% |
TOTAL | 5.9% | 0.4% |
Good news: my method gives better results! On the QP203 dataset I obtained a 4.8% failure ratio. The Middlebury datasets are interesting for testing robustness to strongly over- or under-exposed or tinted pictures, but they lack variety. My dataset is much more varied, but 41 pictures is not a very large sample. I think a fair comparison with the X-Rite software, and a fair evaluation of my method, would require more data. Nonetheless, I also think these 309 pictures are already enough to say with confidence that my method works quite well.
Location results on pictures from the QP203 dataset can be seen below:
If you're interested in this method, read the paper and feel free to contact me!