After a first try at simulating Japanese handwritten calligraphy six years ago, I decided to come back to the topic and see how I could improve on my previous work.
That work was based on machine learning: I used a genetic algorithm (GA) to learn the 3D brush path that reproduces the image of a character when fed into a simple brush model. A rough path was edited manually and used as a seed, and the GA optimised the path's control points so that the rendered image matched a target image generated from a computer font. Fitness was evaluated as the number of matching pixels. The results were encouraging, as can be seen below.
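As a rough illustration (not the original code), a pixel-match fitness for such a GA could look like the Python sketch below; `brush_model` and its `render` method are hypothetical stand-ins for the simple brush model:

```python
import numpy as np

def fitness(path, target, brush_model):
    """Pixel-match fitness: count the pixels where the image rendered
    from the candidate path agrees with the target glyph image.
    Both images are assumed to be boolean NumPy arrays of equal shape;
    `brush_model.render` is a hypothetical stand-in, not the article's
    actual brush model."""
    rendered = brush_model.render(path)
    return np.sum(rendered == target)
```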
However, while that method works well for hiragana, it doesn't for kanji, because of the amount of work needed to prepare the training data and the large number of strokes involved. Also, the brush model I developed then wasn't very advanced, and I wanted to reproduce more faithfully the beauty of brush and ink interacting with paper.
So this year I started again from scratch, focusing more on the four components of the simulation (hand, brush, ink, paper) and less on machine learning. I will only briefly introduce my new model and its results here, and refer you to the formal article if you want to know more.
The sheet of paper is divided into small 3D blocks called 'papels', each representing a small volume of paper. Each papel can hold a certain amount of ink, which flows from papel to papel according to Darcy's law. The fibrous texture of the paper acts as a filter between papels for the particles suspended in the ink (percolation). The model also accounts for the ratio of water to ink particles, and for the distribution of particle sizes.
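To make the flow concrete, here is a minimal sketch assuming a 2D grid of papels and a simplified Darcy-like rule where the flux between neighbouring papels is proportional to their ink-level difference scaled by a permeability; the percolation filtering of suspended particles is omitted:

```python
import numpy as np

def darcy_step(ink, k, dt=0.1):
    """One explicit step of simplified Darcy-like ink flow on a 2D grid
    of papels. The ink level acts as a pressure proxy: flux between two
    neighbouring cells is dt * k times their ink difference. The product
    dt * k must stay small for the explicit update to remain stable."""
    new = ink.copy()
    # flow between horizontal neighbours
    q = dt * k * (ink[:, 1:] - ink[:, :-1])
    new[:, 1:] -= q
    new[:, :-1] += q
    # flow between vertical neighbours
    q = dt * k * (ink[1:, :] - ink[:-1, :])
    new[1:, :] -= q
    new[:-1, :] += q
    return new
```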
The brush, or more precisely its tuft, is made of hundreds of individual bristles. Each bristle is an elastic polyline whose internal constraints are solved with a modified version of the FABRIK algorithm, taking into account the stiffness of the bristle and its interaction with the paper. The splitting of the tuft into clumps during writing is also simulated. The nodes of the polylines carry ink, which is transferred to the paper on contact; ink then flows within the tuft from wet nodes to dry nodes.
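For reference, the unmodified core of FABRIK looks like the sketch below; the version described in the article adds bristle stiffness and paper contact on top of this basic solver:

```python
import numpy as np

def fabrik(points, target, lengths, tol=1e-4, max_iter=10):
    """Basic FABRIK pass for one bristle polyline. `points` is an
    (n, 3) array of node positions, `lengths` the n-1 fixed segment
    lengths. The root stays attached to the brush handle while the
    tip reaches toward `target`."""
    root = points[0].copy()
    for _ in range(max_iter):
        # backward pass: pin the tip to the target, pull nodes toward it
        points[-1] = target
        for i in range(len(points) - 2, -1, -1):
            d = points[i] - points[i + 1]
            points[i] = points[i + 1] + d / np.linalg.norm(d) * lengths[i]
        # forward pass: re-anchor the root, push nodes back out
        points[0] = root
        for i in range(1, len(points)):
            d = points[i] - points[i - 1]
            points[i] = points[i - 1] + d / np.linalg.norm(d) * lengths[i - 1]
        if np.linalg.norm(points[-1] - target) < tol:
            break
    return points
```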
The hand component implements the movement of the brush. It is extrapolated from the representation of each character as 2D splines, available in the KanjiVG database. The missing height/pressure coordinate is computed automatically using heuristics, and the whole movement is further processed to recreate various writing styles.
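As a purely hypothetical illustration of what such a heuristic might look like (the article's actual heuristics are not reproduced here), one could ramp the pressure up at the start of each stroke and taper it off at the end, mimicking the brush landing on and lifting off the paper:

```python
import numpy as np

def pressure_profile(n, attack=0.15, release=0.25, peak=1.0):
    """Hypothetical heuristic for the missing pressure channel of a
    stroke sampled at n points: rise over the first `attack` fraction
    of the stroke, hold at `peak`, then taper over the last `release`
    fraction as the brush lifts off."""
    t = np.linspace(0.0, 1.0, n)
    p = np.full(n, peak)
    p = np.where(t < attack, peak * t / attack, p)
    p = np.where(t > 1.0 - release, peak * (1.0 - t) / release, p)
    return p
```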
The main limitation of this new model is the extrapolation of a 3D hand movement from the available 2D data, along with the other missing dimensions (the rotation of the brush around its axis, and its tilt relative to the paper). The kind of data I would need doesn't exist yet, and it's unlikely to appear soon. Procedural generation of the missing data from a partial database is currently the only option. This work and my previous one show that interesting results can be obtained despite this limitation, as the results below demonstrate, and there is certainly room for improvement, as I suggest in the formal article.
Hand, brush, ink, paper. The four components of the model.
Procedural generation of the kuzushiji style from the KanjiVG 2D data.
Rendering of random hiragana with random parameters.
Rendering of random kanji with random parameters.
Variations on the proverb 弘法筆を選ばず (1).
Variations on the proverb 弘法筆を選ばず (2).
The model also supports video and 3D rendering. The full video is available here.
And I have shown in two other articles (1, 2) how I used this model to produce the calligraphy for a Japanese Go players' "Banzuke" and for a map of the Gion festival parade.