MiniFrame, a framework for solving combinatorial optimization problems

Around 10 years ago I used to participate in programming contests on the CodinGame platform. These were contests where the goal was to code a bot playing some kind of game against other participants, in other words solving combinatorial optimization problems in real time. I had developed a framework based on the Minimax algorithm which I reused in each contest and which allowed me to reach a high ranking. Revisiting this framework and making it publicly available has been on my todo list ever since. I finally took the time to do it and will introduce it in this article.

Combinatorial optimization

In combinatorial optimization problems we are looking for a subset of elements, inside a discrete and finite set, which maximises an objective function. This can be interpreted as choosing one element among all possible ones, then another one, and so on according to some constraints until an end condition is met, and finally evaluating the resulting subset with the objective function.

The problem is then how to choose an element at each step to obtain the subset with maximum value. If the problem is small, the solution can be obtained by exploring all possible combinations and choosing the best one (the brute force method). For real-life problems it is however impractical most of the time, as the number of combinations is too large and exceeds the computation time/memory constraints.

For example, in the context of a two-player game, the subset is made of the moves constituting one game instance, chosen among all the possible sequences of moves allowed by the game rules. The objective function is winning the game for a given player. The rules are generally well defined, but the opponent's moves are not known in advance, and even when the winning condition is well defined, the evaluation of a position during the game can be difficult.

As another example, in the context of resource attribution, the problem becomes the selection of resource/user pairs, chosen among all possible pairs. The objective function could then be the maximisation of user satisfaction. Here, the set of possible pairs is known with certainty from the beginning, but the definition of the objective function (what satisfies the user) can be difficult.

Greedy algorithm

The simplest algorithm for this kind of problem is the greedy algorithm. At each step, the best choice is determined with a heuristic based only on local information available at that step. This method is extremely easy to implement and fast to execute, but rarely produces a good solution when applied to complex real-life problems.
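As a toy illustration of the idea (my own example, not MiniFrame code), greedy coin change picks at each step the largest coin that still fits:

  #include <stdio.h>

  int main(void) {
    int coins[] = {25, 10, 5, 1};    /* sorted by decreasing value */
    int amount = 67, nbCoins = 0;
    for (int i = 0; i < 4; ++i)
      while (amount >= coins[i]) {   /* local best: largest coin that fits */
        amount -= coins[i];
        ++nbCoins;
      }
    printf("%d coins\n", nbCoins);   /* 6 coins: 25+25+10+5+1+1 */
    return 0;
  }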

MiniMax

The MiniMax algorithm (MM) was designed to solve two-player sequential games. The tree of all possible sequences of moves is progressively constructed in a breadth-first order, and each time a node is computed, it is evaluated with the objective function and its score is backpropagated toward the root of the tree. The backpropagation stops when, for a given node, there exists a sibling having a more advantageous value from the point of view of the active player at that node's parent node (see the Wikipedia page for more details). Solving the game using MM then consists of choosing at each node the next move with the best value according to the current player.

The name MiniMax comes from the fact that if the value of a node represents the advantage of a position for player one, then the optimal path in the tree is found by choosing the next node maximising the value when it's player one's turn, and the next node minimising the value when it's player two's turn. I always find that way of thinking confusing, and rather than maximising/minimising relative to one player, I personally prefer to see it as maximising from the point of view of the current player. Both ways of thinking are strictly equivalent, but when adapting MiniMax to games with more than two players, or even one player, I find the "always maximising" view more natural.
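The "always maximising" view is a minimal negamax sketch; the abstract game interface below is hypothetical (none of these names are MiniFrame's API), and the sign flip assumes a zero-sum two-player game:

  typedef struct { int data[16]; } Game;  /* placeholder game state */
  int  GameIsLeaf(const Game* g);         /* is the game over? */
  int  GameEval(const Game* g);           /* value for the current player */
  int  GameNbMoves(const Game* g);        /* number of legal moves */
  Game GamePlay(const Game* g, int move); /* state after playing 'move' */

  int Negamax(const Game* g, int depth) {
    if (depth == 0 || GameIsLeaf(g))
      return GameEval(g);
    int best = -1000000;
    for (int m = 0; m < GameNbMoves(g); ++m) {
      Game next = GamePlay(g, m);
      /* the child's value is from the opponent's point of view, hence
         the sign flip: we always maximise for the current player */
      int v = -Negamax(&next, depth - 1);
      if (v > best) best = v;
    }
    return best;
  }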

Monte Carlo Tree Search

While MiniMax explores the tree in a breadth-first manner, another approach is to explore it in a depth-first one. This is known as the Monte Carlo Tree Search (MCTS) algorithm. In this algorithm, instead of exploring all subnodes of a node, we choose only one at random. The exploration then goes from the root of the tree down to an end node along one single random path, and starts again from the root along a different random path. The evaluation of a node is then equal to the average of the evaluations of the end nodes in its currently explored subtree.
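A sketch of one such random descent, reusing the hypothetical game interface from the previous sketch; a node's value is then the running average of these rollout results:

  #include <stdlib.h>

  double MctsRollout(const Game* g) {
    Game cur = *g;
    while (!GameIsLeaf(&cur)) {
      int move = rand() % GameNbMoves(&cur);  /* one random child */
      cur = GamePlay(&cur, move);
    }
    return (double)GameEval(&cur);            /* end-node evaluation */
  }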

Compared to MM, it is advantageous when there is no well-defined way to evaluate incomplete subsets (mid-game positions in the context of a game) and only complete subsets (end-game positions) have a well-defined evaluation (win/loss for a game). In that case, MM can't evaluate a node's value until it has reached end nodes, which generally doesn't happen until nearing the end of the game due to the breadth-first exploration and limited time/memory.

On the other hand, as MCTS explores only a random subset of all possible paths, its evaluation of a node is only an approximation of its real value, and by choosing randomly MCTS takes the risk of missing an optimal path. Choosing the path to explore completely at random then appears suboptimal, and the UCB1 formula (\(\max_i\left(\frac{w_i}{n_i}+c\sqrt{\frac{\ln N}{n_i}}\right)\), where \(w_i\) and \(n_i\) are the wins and visits of child \(i\), \(N\) the visits of the parent, and \(c\) an exploration constant) was developed to improve on that point by recommending which node to explore next (MCTS+UCB1 is referred to as UCT). Unfortunately, it's only a recommendation and still doesn't guarantee it won't miss the optimal path.
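The same formula as a small C function (standard UCB1, independent of MiniFrame):

  #include <math.h>

  /* wi: wins of child i, ni: visits of child i,
     n: visits of the parent, c: exploration constant (sqrt(2) is typical) */
  double UCB1(double wi, double ni, double n, double c) {
    if (ni == 0.0) return INFINITY;          /* visit unexplored children first */
    return wi / ni + c * sqrt(log(n) / ni);  /* exploitation + exploration */
  }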

Also, UCB1 assumes evaluations bounded in [0,1] (typically a binary win/loss outcome), making UCT less general than MCTS or MM. MCTS doesn't suffer from that limitation, but may not lead to the optimal answer. Even if it has explored the whole tree, due to the averaging of leaf values the optimal solution may be hidden within a subtree full of low-value leaves. In that case MCTS would prefer in the early nodes a branch leading to a subtree with lower maximal value but higher average one.

The two problems above make me think that, except when there is no evaluation function on intermediate nodes and thus no choice but to avoid MM, MCTS and UCT are much less appealing than MM.

Alpha-Beta Pruning

The above methods all give instructions on how to explore the tree and how to choose a branch from a node to maximise the outcome. However they are all eventually equivalent to a brute force method: given enough resources they will explore the whole tree. UCT tries to improve on that point by spending more time on the subtrees that look the most promising, but it would be better if it were possible to completely discard subtrees which are certain to contain only suboptimal solutions. This is the idea behind the Alpha-Beta pruning algorithm (ABP).

In the MiniMax algorithm, one can observe that, in the context of a multi-player sequential game, as each player chooses the branch leading to the best node from its point of view, the exploration of branches to other nodes can be discarded as soon as we know they won't lead to a better solution. To ensure the value of a node, MM is modified from breadth-first to depth-first, and the pruning of not-yet-computed branches from \(N_i\) occurs as soon as \(V^*_{P_{i-1}}(N_{i-1})>V^*_{P_{i-1}}(N_i)\), where \(N_{i-1}\) is the parent node of \(N_i\), \(P_i\) is the active player at node \(N_i\), and \(V^*_P(N)\) is the best branch value at node \(N\) from the point of view of player \(P\).
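In code, grafting the pruning onto the negamax sketch from earlier gives the classic formulation below; alpha is the best value the current player can already guarantee, beta the best the opponent can:

  int AlphaBeta(const Game* g, int depth, int alpha, int beta) {
    if (depth == 0 || GameIsLeaf(g))
      return GameEval(g);
    for (int m = 0; m < GameNbMoves(g); ++m) {
      Game next = GamePlay(g, m);
      int v = -AlphaBeta(&next, depth - 1, -beta, -alpha);
      if (v > alpha) alpha = v;    /* improve the guaranteed value */
      if (alpha >= beta) break;    /* prune the remaining siblings */
    }
    return alpha;
  }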

In theory, ABP indeed prunes a large part of the tree, guaranteeing to return the same optimal path as MM in a much shorter time. However, the depth-first exploration is generally incompatible with resource constraints: only a part of the tree will ever be explored. In the worst case, only one branch of the nodes near the root of the tree is explored, as the exploration ends deep in its subtree before coming back to them. In the context of a game that would be equivalent to always considering only one single next move. Surely not the right strategy to win. For that reason it is suggested to shuffle the branches during exploration.

So in practice we must limit the exploration to a carefully chosen depth ensuring it completes within the resource limit. This requires being able to evaluate all nodes (not only end ones), and then comes the risk of accidentally pruning optimal branches due to imperfect mid-game evaluation.

Choosing a maximum depth also has a big disadvantage: some parts of the tree may have a higher branching factor than others. Without a maximum depth, MM always goes as deep as it can within the resource limit. Setting a static maximum depth would stop the exploration early even where it could have explored further.

Here again, I personally find it difficult to believe that ABP could outperform MM, except in situations where resource constraints are not a problem.

MiniMax siblings pruning

Of course, pruning to improve MM is extremely appealing. Unsatisfied with ABP, I've decided to make my own, which I'll call 'siblings' pruning. When a node is visited during exploration, its children are added to the list of nodes to explore only under the condition that their value is within a threshold from the value of the best one among them. The threshold is controlled by the user.

When a node is first visited, the pruning occurs based on local information. As the exploration goes deeper into the tree and the values backpropagate, when a node is visited again the pruning is based more and more on deep, reliable values. For a very high threshold value, it behaves as standard MM. For a very low threshold value, it explores only the paths which are the best according to the information available so far. In between, the lower the threshold, the more pruning, hence the deeper the exploration can go in the same amount of time, but the higher the chance to ignore an optimal path hidden behind a seemingly "bad" mid-game position. The higher the threshold, the less pruning, hence a shallower exploration in the same amount of time, but a lower chance of missing an optimal path. An automatic search for the optimal threshold value can easily be implemented.
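A sketch of the rule itself (the real MiniFrame internals differ; Node and Queue are placeholders for illustration):

  #include <math.h>

  typedef struct Node { double value; } Node;  /* hypothetical node */
  typedef struct Queue Queue;                  /* hypothetical queue */
  void QueuePush(Queue* q, Node* n);

  void QueueChildrenWithinThreshold(Node** children, int nb,
                                    double threshold, Queue* queue) {
    double best = -INFINITY;
    for (int i = 0; i < nb; ++i)       /* best value among the siblings */
      if (children[i]->value > best) best = children[i]->value;
    for (int i = 0; i < nb; ++i)
      if (children[i]->value >= best - threshold)
        QueuePush(queue, children[i]); /* the others are pruned */
  }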

Results below will show it is indeed an efficient way of pruning.

MiniFrame

All the methods introduced above can be used to solve combinatorial optimisation problems as long as they can be represented as trees. More precisely, in terms of implementation, they can be used if the user can provide:

  1. a definition of data representing one node in the tree
  2. a definition of data representing one transition from one node to one of its children
  3. a function creating the root node
  4. a function computing the list of possible transitions from a node
  5. a function computing the child node resulting from a transition from its parent node
  6. a function checking if a node is a leaf
  7. a function evaluating a node

It is then possible to create a reusable framework implementing the algorithms introduced above, ready to use by a user who only needs to implement the two data structures and five functions above. MiniFrame is a framework written in C doing just that. In addition, it also provides tools useful to implement the user functions, debug them and search for the best parameters. You can download it here (see the end of the article for old versions).

I describe below how to use it and how it performs on concrete examples.

Tic-tac-toe

Tic-tac-toe is the simplest example possible, so let's start with that. I guess everyone knows this game so I won't explain the rules here (if you need a refresher the Wikipedia article is here).

Some vocabulary: in MiniFrame a node is called a "world", a transition between two nodes is called an "action", and the entities at the origin of the actions (in a game, the players) are called the "actors". We first need to tell MiniFrame how many actors there are, and how many actions there can be from any given world.

For performance optimisation, MiniFrame uses its own memory management functions and a memory pool to avoid dynamic allocation during execution. Hence the necessity to indicate beforehand the maximum number of actions per world.
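This could plausibly take the form of preprocessor constants (the identifiers below are assumptions for illustration, not MiniFrame's actual names):

  #define NB_ACTORS 2         /* two players */
  #define NB_MAX_ACTIONS 9    /* at most nine empty cells to play */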

The data structures are super simple: a grid of 3 by 3 cells. The cell value encodes an empty cell (-1), a cross (0) or a circle (1). An action is represented with the row and column, and a value (same encoding as a cell).

MiniFrameActionFields and MiniFrameWorldFields are macros to automatically add internal fields used by MiniFrame.
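A sketch of the two structures, assuming the macros are used as field injectors inside the struct definitions:

  typedef struct {
    MiniFrameWorldFields;   /* internal fields added by MiniFrame */
    char grid[3][3];        /* -1: empty, 0: cross, 1: circle */
  } World;

  typedef struct {
    MiniFrameActionFields;  /* internal fields added by MiniFrame */
    char row, col;          /* target cell */
    char val;               /* 0: cross, 1: circle */
  } Action;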

The five functions (plus one to factor out common code) implementing the problem to solve (here, the game) are as follows.

In MiniFrameWorldSetToInitialWorld, the second argument userData is used to pass optional data for initialisation (cf. MiniFrameWorldCreateInitData below). If null, default values are used. The aliveActors array indicates which actors are 'alive'. Here both players play until the end of the game, but in games with 3+ players, one player may be eliminated mid-game while the remaining players keep playing. That array allows handling such cases. Also, expert readers will surely see many things that could be optimised. The point here is just to give a simple example.
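As an illustration, a sketch of what this first function might look like; the signature and the location of the aliveActors field are assumptions:

  void MiniFrameWorldSetToInitialWorld(World* world, void* userData) {
    (void)userData;                  /* no init data needed for tic-tac-toe */
    for (int r = 0; r < 3; ++r)
      for (int c = 0; c < 3; ++c)
        world->grid[r][c] = -1;      /* all cells empty */
    world->aliveActors[0] = 1;       /* both players play to the end */
    world->aliveActors[1] = 1;
  }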

In addition to these structures and functions, MiniFrame actually needs five more functions:

  1. a function comparing two worlds to decide if they are to be considered the same (used when looking up an instance of a world in the tree)
  2. a function copying the user-defined properties of a world into another
  3. a function printing a world on a stream in a human readable format
  4. a function printing an action on a stream in a human readable format
  5. a function to create the initialisation data used by MiniFrameWorldSetToInitialWorld. In this simple example, it's always an empty grid, but this function becomes useful in more complex games/problems to automatically create the random initialisation states used by the other tools (cf. below).

MiniFrame usage

Then, one can use MiniFrame in several different ways. They almost all have in common the creation of a MiniFrame instance, setting its parameters, using it, and finally freeing it.

As explained above, MiniFrame uses a memory pool for better performance, so one must provide the number of world instances MiniFrame will have to work with. Of course the more the better, it just depends on the resource constraints of the environment MiniFrame is running in. The helper function MiniFrameGetNbWorldForMemSize calculates the number of world instances fitting in a given amount of memory, so that the user can specify either the available number of worlds or the available amount of memory in bytes.

Next, the initial world and the algorithm used (MM, MCTS, ABP) are set. Here, as we use MM, the depth of exploration is also set. For this simple example, we know there can't be more than 9 moves, so setting the max depth to 10 ensures MiniFrame will explore down to the end (within the limits of available time and memory). Note that the maximum exploration depth is of course relative to the current world (at current step 0 it would explore up to step 9, at current step 1 it would explore up to step 10, and so on...).
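A sketch of the whole setup sequence; MiniFrameGetNbWorldForMemSize is named above, but the other names and all signatures are assumptions:

  int nbWorld = MiniFrameGetNbWorldForMemSize(10 * 1024 * 1024);
  MiniFrame* mf = MiniFrameCreate(nbWorld);       /* hypothetical */
  MiniFrameSetSearchType(mf, MiniFrameSearchMM);  /* hypothetical */
  MiniFrameSetMaxDepth(mf, 10);                   /* hypothetical, cf. text */
  /* ...use MiniFrame (see below)... */
  MiniFrameFree(mf);                              /* hypothetical */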

An executable binary can be compiled as follows. The source file of MiniFrame uses a macro (MINIFRAME_WORLD_IMPL), defined in the compilation options, which points to the source file implementing the game/problem. That source file is included in MiniFrame and the whole code is compiled as one unit. A Makefile rule would look as follows.
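For instance, a hedged sketch of such a rule ('miniframe.c' stands for MiniFrame's source file, the flags are illustrative):

  tictactoe: tictactoe.c
  	gcc -O3 -DMINIFRAME_WORLD_IMPL=\"tictactoe.c\" miniframe.c -o tictactoe -lm -lpthread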

It's also possible to solve several problems using MiniFrame at once in the same project. Instead of compiling an executable, compile an object file (-c tictactoe.o instead of -o tictactoe) and define in tictactoe.c some exposed functions instead of the main() for the project to interact with the tic-tac-toe solver. Then use the exposed functions and link the object file in the project as needed.

One game simulation

Coming back to the part left undefined in the main(), let's see how to run one game simulation using MiniFrame. The framework has two modes of exploration: in the main process, or in the background. MiniFrame is indeed multithreaded: the main thread interacts with the user, another thread performs the exploration, and another thread manages the memory. The goal is of course to improve performance in terms of execution speed.

If you're wondering why not more threads, well, I'd like to, but if you're used to working with multithreading you surely know how much of a headache it quickly becomes. So, moving slowly but surely, I improved from my old CodinGame framework's single thread to just three for now. Another motivation (or de-motivation?) is that using more threads doesn't help on the CodinGame platform.

Back to the code, one can run one game simulation as follows (in the background):
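A sketch of what that loop might look like, reconstructed from the description below; MiniFrameRunBackground is named in this article, but the other helpers and all signatures are assumptions:

  World world;
  MiniFrameWorldSetToInitialWorld(&world, NULL);
  MiniFrameRunBackground(mf);                    /* start the exploration thread */
  while (!MiniFrameWorldIsEnd(&world)) {         /* hypothetical end test */
    SleepMs(100);                                /* let the explorer work */
    Action action;
    MiniFrameGetBestAction(mf, &action);         /* hypothetical getter */
    MiniFrameWorldApplyAction(&world, &action);  /* hypothetical step */
    MiniFrameWorldPrint(&world, stdout);         /* hypothetical printer */
    MiniFrameSetCurWorld(mf, &world);            /* inform MiniFrame */
  }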

One local copy of the current world is kept in memory. In more complex examples it would be updated by the main process with external input (like the opponent's actual moves). MiniFrameRunBackground(mf) starts the exploration thread in the background. No need to worry about the memory management thread, MiniFrame handles it for you. Then we loop until the end of the simulated game: wait some time for the exploration thread to do its job, get the computed best action, apply it to the local copy of the world and display it, then step the local world and inform MiniFrame of the new current actual world. An example output looks like this:

As one would expect, playing perfectly against itself leads to a tie!

Running the same simulation but in the main thread would look like this:
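A sketch under the same assumptions as the previous one; only MiniFrameRunOneStep and MiniFrameSetMaxExploreTimeMs are names confirmed by this article:

  World world;
  MiniFrameWorldSetToInitialWorld(&world, NULL);
  MiniFrameSetMaxExploreTimeMs(mf, 100);         /* optional time limit */
  while (!MiniFrameWorldIsEnd(&world)) {
    MiniFrameRunOneStep(mf);                     /* blocking exploration */
    Action action;
    MiniFrameGetBestAction(mf, &action);
    MiniFrameWorldApplyAction(&world, &action);
    MiniFrameSetCurWorld(mf, &world);
  }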

MiniFrameRunOneStep(mf) is blocking and computes as many worlds as it can within the limit of the available memory. A time limit can also be set with MiniFrameSetMaxExploreTimeMs(mf, timeMs). Note that if no time limit is set, MiniFrameRunOneStep(mf) performs only one pass on the tree. That's probably not what you want, in particular when using MCTS.

Debug mode

Even on an example as simple as tic-tac-toe I've been able to mess up the world implementation and not get it right from the first compilation, genius! Once the world implementation is buried into the MiniFrame framework, debugging what's wrong is really challenging. A debugging tool is a must, and of course I provide one. It allows the user to manually run the exploration process and check the exploration tree.

The undefined portion of the main file becomes in that case just one line:
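Presumably a single call along these lines (the function name here is a pure assumption):

  MiniFrameRunDebug(mf);  /* hypothetical: starts the interactive console */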

This starts an interactive console which can be used as follows. Available commands are:

An example of use on tic-tac-toe would look like this.

The user first checks the memory information, then the initial world. He/she runs one step of exploration and checks the updated memory information. Next he/she checks that actions have been created on the current initial world, and moves to the next world according to the current best action. Then he/she comes back to the previous world, moves to the next world through another action, and sets that new world as the actual current world before running one more step of exploration. Finally he/she displays the memory information again.

Thanks to this debugging tool, one can quickly check that the initial world, the calculated actions, the results of actions, etc. are as expected. It is also helpful when designing the evaluation function: one can see how the exploration tree varies according to it, and explore the tree to better understand why an action ends up being the best.

Qualification and server mode

From my experience, implementing the world is the easy part of the process. Even when it isn't straightforward or well defined, tree exploration is robust enough to provide an 'ok' solution even with a clumsy implementation. For example, in a CodinGame tournament, the referee sends you the current actual game state at each turn, hence any mistakes get corrected. The exploration will be incorrect, but if the implementation is not completely meaningless, it will in general lead to an acceptable next action.

Of course, the more exact your implementation is, the more accurate the exploration will be, and the better the next action you can expect. But anyway, you'll never have enough time and memory to calculate 'the' best action. It's a good example of "don't care if it's the best, care if it's good enough", which is a general truth on a real-world battleground.

Rather, the design of a good evaluation function is where the headaches and sleepless nights await you. Unclear and contradicting goals are common; in a game, for example, you often need to balance aggressive and defensive moves. And in practice, tweaking your evaluation function 'just a little' generally leads to a completely different behaviour for your AI, which is very confusing and needs careful examination with the debugging tool. At the same time you generally want the evaluation function to be as fast as possible, while using as little memory as possible. It's really the place where one can shine.

How to evaluate whether one evaluation function leads to better results than another is also a big concern. The only reasonable way is to run your different implementations against each other many times, and have a way to evaluate the results. The first problem here is time. At the beginning, there will be clear improvements in the evaluation function and a few runs will give clear results. But as you approach a good solution, improvements become more and more subtle, and you'll need more and more runs to get an idea of which version is the best. More runs equals more time, which you generally do not have.

The second problem is how to evaluate the results. Here, my go-to has always been the Elo ranking system. It's easy to implement and use, supports any number of players (including one: run two single-player games and choose the winner as the one with the higher score) and gives a clear and reliable ranking. To make the evaluation function design task easier, MiniFrame provides a qualification mode which automatically evaluates several implementations using Elo ranking.
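For reference, the standard Elo update after one game (these are the textbook formulas, not MiniFrame-specific code):

  #include <math.h>

  /* scoreA: 1.0 if A won, 0.0 if A lost, 0.5 for a tie;
     k: update factor (e.g. 32) */
  void EloUpdate(double* ra, double* rb, double scoreA, double k) {
    double expectA = 1.0 / (1.0 + pow(10.0, (*rb - *ra) / 400.0));
    *ra += k * (scoreA - expectA);
    *rb += k * ((1.0 - scoreA) - (1.0 - expectA));
  }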

The qualification mode uses another mode of execution: the server mode. In that mode, MiniFrame runs as a TCP/IP server, with the exploration process in the background, and communicates with an external client to update the actual current world and return the best action. In the context of a two-player game using that server mode, there would be two servers (one for each player) and one client (acting as the referee between the players).

One can simply start MiniFrame in server mode with the following code (still as the undefined portion of the main() introduced earlier):
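A sketch of what this might look like; MiniFrameSetServerVerbose is named below, while the run call and its signature are assumptions:

  MiniFrameSetServerVerbose(mf, false);
  MiniFrameRunServer(mf, filenameServerAddr);  /* hypothetical: writes ip:port
                                                  to the file, then serves */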

The server automatically chooses an IP and port, which are written into the file located at filenameServerAddr for a client application to retrieve. In the context of MiniFrame it's fine to use the same file for all servers, there is no risk of conflict by overwriting. MiniFrameSetServerVerbose(mf, false) allows turning the verbose mode of the server on/off. A client interacts with the server as follows:
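A client-side sketch following the description in the next paragraphs; MiniFrameWorldCreateInitData, MiniFrameServerInit, MiniFrameServerActualiseWorld and MiniFrameServerGetAction appear in this article, while the connection type and the remaining helpers are assumptions:

  MiniFrameServerConn* conn =
    MiniFrameServerConnect(filenameServerAddr);   /* hypothetical */
  void* initData = MiniFrameWorldCreateInitData(NULL);
  MiniFrameServerInit(conn, initData);
  World world;
  MiniFrameWorldSetToInitialWorld(&world, initData);
  while (!MiniFrameWorldIsEnd(&world)) {          /* hypothetical end test */
    MiniFrameServerActualiseWorld(conn, &world);  /* push the actual world */
    Action action;
    MiniFrameServerGetAction(conn, &action);      /* get the best action */
    MiniFrameWorldApplyAction(&world, &action);   /* referee-side update */
  }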

The IP and port are retrieved from the file the server has updated, and used to start a connection between the client and the server(s) (only one server in that example, but there would be one per player in a multiplayer game). Initialisation data are created using MiniFrameWorldCreateInitData(...) and sent to the server(s) with MiniFrameServerInit(...).

Then the simulation runs in a loop until the current world becomes an end world. At each step (game turn) the current world is actualised on the server (the one of the current turn's player if there are multiple players) using MiniFrameServerActualiseWorld(conn, &world). The next action is requested with MiniFrameServerGetAction(conn, &action) and the current world is updated accordingly. Here the referee may actually update the current world in a different manner than what the server expected. It would be the case, for example, for incomplete information games.

Running MiniFrame in server mode also allows interfacing it with any code able to communicate through TCP/IP. It opens the door to many various uses of MiniFrame by other software.

Back to the qualification mode. First we need several instances to be qualified. I like to do it using compilation arguments and macros, keeping everything in one single source file for the world implementation. For example, in tic-tac-toe two different evaluation functions would be defined like this (the second one being wrong on purpose):
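A sketch of how the two versions might be selected with a VER macro; the evaluation function's name, signature and the GetWinner helper are assumptions, only the intent (second version wrong on purpose) follows the text:

  #if VER == 1
  double MiniFrameWorldEvaluate(const World* world, int actor) {
    int winner = GetWinner(world);   /* hypothetical helper: -1 if none */
    return (winner < 0) ? 0.0 : (winner == actor ? 1.0 : -1.0);
  }
  #else
  double MiniFrameWorldEvaluate(const World* world, int actor) {
    /* deliberately wrong: rewards losing positions */
    int winner = GetWinner(world);
    return (winner < 0) ? 0.0 : (winner == actor ? -1.0 : 1.0);
  }
  #endif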

Obviously, the user is not limited to varying the evaluation function. The type of search, MiniFrame's parameters, or even the world implementation could also vary according to VER. Only the two data structures defining the world and action must be identical between all versions under qualification (for the referee to have a common communication pattern with all servers).

Then the two binaries, one per version, could be compiled like this:
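A hedged Makefile sketch, reusing the MINIFRAME_WORLD_IMPL convention described earlier ('miniframe.c' again stands for MiniFrame's source file):

  tictactoe_v1: tictactoe.c
  	gcc -DVER=1 -DMINIFRAME_WORLD_IMPL=\"tictactoe.c\" miniframe.c -o tictactoe_v1 -lm -lpthread
  tictactoe_v2: tictactoe.c
  	gcc -DVER=2 -DMINIFRAME_WORLD_IMPL=\"tictactoe.c\" miniframe.c -o tictactoe_v2 -lm -lpthread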

To be used with the qualification mode, each version's binary should be running in server mode (cf. details above). The binary for the qualification itself looks like this:
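A sketch of such a binary, built from the parameter list described in the next paragraph; MiniFrameQualifier, MiniFrameQualification and MiniFrameQualificationResult are named in this article, but the field names and signatures are assumptions:

  int main(void) {
    const char* paths[2] = {"./tictactoe_v1", "./tictactoe_v2"};
    MiniFrameQualifier qualifier = {
      .nbRound = 10,                   /* one game per actor combination */
      .nbInstance = 2,
      .pathInstances = paths,
      .filenameServerAddr = "./server.addr",
      .thinkTimeMs = 100,
      .userData = NULL,
      .thresholdScore = -1.0,          /* run all the rounds */
    };
    MiniFrameQualificationResult result = MiniFrameQualification(&qualifier);
    (void)result;                      /* results are also printed on stdout */
    return 0;
  }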

A MiniFrameQualifier is created to set the qualification parameters, and the qualification itself is done with MiniFrameQualification(). The qualification outputs its results on stdout, but also returns them as a MiniFrameQualificationResult structure for the user to use in any way he/she likes. The parameters of the qualification are: the number of rounds (one round consists of one game for each combination of actors), the number of qualified instances and the path to the binary of each instance, the path to the file used to exchange the server IP and port, the 'thinking' time between each step, and optional user data (argument of MiniFrameWorldCreateInitData()).

The thresholdScore parameter is used for early termination of the qualification. If negative, the qualification runs for the given number of rounds. If positive, the qualification quits early if the best instance's score is larger than the second best's score plus the threshold. This saves time when one version clearly outperforms the others and no more rounds are necessary to know that it is the best one.

Running the qualification gives the following output, where we clearly see the version with the correct evaluation function winning over the one with the wrong evaluation function.

Parameters search mode

The qualification mode introduced above is already a precious tool to design a good evaluation function, but MiniFrame offers even more: an automatic parameter search mode. The evaluation formula is almost always subject to some parameters (weights, heat maps, thresholds, ...). Looking for their optimal values is a tedious task, which can be automated. In MiniFrame I reuse the qualification mode and the differential evolution algorithm to provide a parameter search mode. A binary running in server mode and accepting the parameter values as arguments is prepared as follows:
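A sketch of how such a binary could accept a parameter (everything here is an assumption about the structure, not MiniFrame's API):

  #include <stdlib.h>

  double evalWeight = 1.0;                 /* parameter under search */

  int main(int argc, char** argv) {
    if (argc > 1) evalWeight = atof(argv[1]);
    /* ...then start MiniFrame in server mode as described above,
       with the evaluation function reading evalWeight... */
    return 0;
  }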

Then the source of the binary for the search is as follows:
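A sketch mirroring the field list described in the next paragraph; the structure's name and the field types are assumptions built on the article's wording:

  double mins[1] = {-1.0};              /* lower bound per parameter */
  double maxs[1] = {1.0};               /* upper bound per parameter */
  double seed[1] = {0.5};               /* starting point, first instance */
  MiniFrameParamSearcher searcher = {   /* hypothetical type */
    .nbEpoch = 5,
    .nbHyperParam = 1,
    .hyperParamMins = mins,
    .hyperParamMaxs = maxs,
    .seedHyperParams = seed,
    .pathInstance = "./tictactoe_param",
    .nbInstance = 4,
    /* remaining fields as in the qualification mode */
  };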

nbEpoch is the number of epochs in the differential evolution algorithm. nbHyperParam is the number of searched parameters. hyperParamMins and hyperParamMaxs define the range of possible values for each searched parameter. seedHyperParams lets the user provide a value for each parameter for the first instance of the first epoch (useful to force a starting point in the differential evolution algorithm). pathInstance is the path to the binary accepting the parameters as arguments (cf. above). nbInstance is the number of instances in the differential evolution algorithm. The other parameters are the same as for the qualification mode. In this example the settings are pretty low; in reality you would use much larger numbers of instances, rounds and epochs. But be aware that the execution time very quickly explodes.

Running the automatic search on the simple tic-tac-toe example gives the expected result (any positive value is a best value):

Mancala and Reversi

Tic-tac-toe is a good test case thanks to its simplicity, but it is not really useful or representative of the performance of MiniFrame. Let's see two more interesting examples: Mancala and Reversi. If you don't know these games, I let you refer to, respectively, this article and this article.

Mancala has a very clear evaluation function: simply the current number of captured stones. It's a very good candidate for a concrete use of MiniMax. Indeed, a long time ago I already implemented a web app allowing a human to play against a bot using MM. It predates MiniFrame, but it is still a good representative of what can be done. It's available here. A minimalist version using MiniFrame is also available here.

Implementing Mancala in MiniFrame can be done as follows:
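The full implementation isn't reproduced here; as a flavour, a sketch of what the world and its evaluation could look like (the layout and the function signature are assumptions):

  typedef struct {
    MiniFrameWorldFields;   /* internal fields added by MiniFrame */
    char pits[2][6];        /* stones in each player's six pits */
    char stores[2];         /* captured stones per player */
  } World;

  /* the evaluation is simply the number of captured stones: */
  double MiniFrameWorldEvaluate(const World* world, int actor) {
    return (double)world->stores[actor];
  }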

Reversi is an even more interesting candidate because it's still easy to implement, but a good evaluation function is extremely difficult to find. As the coins can flip until the last move, there is no obvious way to know which player is leading during the game. A version developed with MiniFrame is available to play online here (excuse the extreme lack of effort put into the interface, the point here is just to prove it's working).

As a Reversi tournament is available on CodinGame (here), it also allows me to evaluate the performance of a solution based on MiniFrame. At the time of writing my solution ranks 51st among 474 entries. Not at the top of the world, but good enough for me to believe MiniFrame runs as expected and performs well. The difference with stronger solutions probably lies in the evaluation function and the optimisation of the world implementation. To avoid spoiling the tournament I don't share the code of my implementation here.

Sudoku

As I wrote at the beginning of this article, this framework can be applied to solve any combinatorial optimisation problem that can be represented as a tree. To illustrate this, I've used MiniFrame to create a Sudoku solver. At the same time it also shows that the framework can be used when there is only one 'player'.

In that case, one node (or 'world') represents a grid, totally or partially filled, with any value in [1,9] in the non-empty cells. One transition (or 'action') consists of filling one empty cell with a value. The initial world is the initial Sudoku grid. A grid with no empty cells is an end world. The evaluation function is simply the number of non-empty cells.

The creation of the possible actions from a given world is where the magic happens. A dummy solution would be to consider all combinations (empty cell, value in [1,9]). This would lead to an enormous number of actions for starting grids. The trick here is to do what I call 'pruning by design'. Instead of all combinations, search for one cell where there is only one possible value. If such a (cell, value) pair exists, set it as the only possible action from that world. If no such pair exists, consider all (cell, value) pairs which are valid. This dramatically reduces the branching factor, with a single action per world most of the time. MiniFrame in MiniMax mode (without pruning) and 10MB of memory was enough to solve all the grids I have tried (well under one second).
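A sketch of the single-candidate search at the heart of this trick (IsValid is a hypothetical rule check, not MiniFrame code):

  /* can 'val' legally be placed at (row, col)? */
  int IsValid(const char grid[9][9], int row, int col, char val);

  /* count the candidate values for an empty cell; if the result
     is 1, *unique holds the forced value */
  int NbCandidates(const char grid[9][9], int row, int col, char* unique) {
    int nb = 0;
    for (char v = 1; v <= 9; ++v)
      if (IsValid(grid, row, col, v)) {
        ++nb;
        *unique = v;   /* remember the last valid candidate */
      }
    return nb;
  }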

This solver is also available as a web app here, and the code looks like this:
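A sketch of the solving loop, using the helper discussed just below (the end test and the signatures are assumptions):

  while (!MiniFrameWorldIsEnd(&world))
    MiniFrameStepBestAction(mf, &world);  /* best action + world update */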

Note that I'm using here a helper function, MiniFrameStepBestAction(), which is equivalent to getting the best action and actualising the current world at once, convenient when using MiniFrame to solve this kind of problem.

Mars lander

Another claim I've made is that MiniFrame can be used under real-time constraints. The CodinGame solution for Reversi already proves it: it must output its action within 150ms, which it does successfully. As one more example, I've chosen another problem available on CodinGame: the Mars lander. Here the goal is a simulated landing on the surface of the planet Mars. The problem is limited to 2D, but follows the physical reality of such an operation. The lander is subject to gravity, controls its attitude through changes in orientation and thrust, and must satisfy constraints on position, speed and orientation for a successful landing, while using a limited amount of fuel.

Here, the 'world' is the status of the lander (position, speed, attitude, landscape), an 'action' is a command to activate the thruster with a given power, or to change orientation. To limit the number of possible actions, I restrict an action to be either thrust or orientation (on CodinGame both are allowed simultaneously), the change of orientation to be \(\pm1\) degree, and the thrust to be one of {0,4} when the lander is not above the landing area, and one of {0,2,4} when it is above it.
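A sketch of the restricted action generation (the types and the encoding, with thrust -1 meaning "rotation only", are assumptions for illustration):

  typedef struct { int dAngle; int thrust; } Action;  /* hypothetical */
  typedef struct World World;                         /* hypothetical */
  int IsAboveLandingArea(const World* w);             /* hypothetical */

  int ListActions(const World* w, Action* actions) {
    int nb = 0;
    actions[nb++] = (Action){.dAngle = +1, .thrust = -1};  /* rotate only */
    actions[nb++] = (Action){.dAngle = -1, .thrust = -1};
    if (IsAboveLandingArea(w)) {       /* above the area: thrust in {0,2,4} */
      for (int t = 0; t <= 4; t += 2)
        actions[nb++] = (Action){.dAngle = 0, .thrust = t};
    } else {                           /* away from it: thrust in {0,4} */
      actions[nb++] = (Action){.dAngle = 0, .thrust = 0};
      actions[nb++] = (Action){.dAngle = 0, .thrust = 4};
    }
    return nb;
  }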

The evaluation function is divided into three parts, depending on where the lander is located relative to the landing area. It is crafted in a way to control the behaviour of the lander. Away from the landing area, it gets near it as fast as possible while avoiding the landscape. High above the landing area, it stabilises its horizontal speed and starts the descent. Low above the landing area, it focuses on a smooth landing within the allowed constraints.

Within a 100ms time constraint between each action and 10MB of available memory, using MiniMax with siblings pruning allows the lander to plan its control several seconds ahead. It achieves 100% successful landings on 100 tests with random initial conditions (attitude and landscape). The animation below gives an illustration of the landing maneuvers. The red dot is the lander, the grey area is the landscape profile (the highest mountain is 484m, for scale), and the landing area is the flat area at the center of the screen.

Comparison MM, MCTS, ABP

Finally, here is a comparison of the performances of MM (with/without siblings pruning), MCTS, and ABP on the Mancala and Reversi games, using MiniFrame's qualification tool of course.

Results show that 1) there is an optimal value for the pruning threshold, 2) the siblings pruning with an appropriate threshold value outperforms MM without pruning.

I think the A shape of the results can be explained as follows: a too small threshold discards moves that are bad in the short term but prove to be optimal in the long term; a too large threshold is equivalent to no pruning at all, hence a shallower exploration; intermediate thresholds are a good balance between keeping actions which are not optimal locally but may prove optimal later, and pruning actions which look really too bad locally, saving exploration resources to go deeper in the tree. As the evaluation function gets more accurate and the exploration resources increase, I expect the optimal threshold to move to the left (stronger pruning).

Results show that 1) the maximum depth has an influence on the performance, 2) better performance is obtained for a small maximum depth, 3) MCTS performs similarly to ABP with a small maximum depth.

I think the results can be explained as follows: ABP at depth 64 should be equivalent to MCTS, however my implementation of MCTS explores around 15% more worlds than ABP in the same time, hence the better rank of MCTS; as the maximum depth of ABP increases, the rank decreases due to the increasing number of missed branches; the evaluation function is probably a good one, because evaluating everything at depth 4 is clearly better than evaluating deeper and missing branches; MCTS being as good as ABP at depth 4 probably means that there is not a big difference overall between all these versions.

Results show that 1) the type of algorithm significantly impacts the performance, 2) MM with pruning > MM without pruning > ABP > MCTS.

I think the results can be explained as follows. In the game of Reversi the situation changes hugely until the last move of the game, which makes it very difficult to evaluate moves in the long term, and optimal moves are hidden in subtrees of bad moves. This makes a partial exploration of the tree particularly inefficient and explains MM > (MCTS and ABP). At the same time, we have to deal with the resource constraints. Pruning is necessary, and siblings pruning proves to be a very good way to do it.

Results show that 1) strong pruning has a very negative impact on performance, 2) other pruning thresholds have no significant impact relative to each other or relative to no pruning.

In my implementation of Mancala, the evaluation function is simply the current score. It may not vary for several moves, and when it does, it varies in a much quieter way than the one for Reversi. The exploration must go much deeper to notice that a locally bad move ends up being a good one. This explains why pruning doesn't make a big difference here, except when pruning so strongly that it shies away from risk, always waiting for killer moves that never come.

Results show that 1) there is an optimal maximum depth, 2) ABP at the appropriate depth outperforms MCTS.

Again, MCTS being faster than ABP explains why it performs better than ABP at depth 64. But contrary to Reversi, the evaluation function is worse at predicting the outcome of a game, hence increasing the maximum depth a bit performs better than a minimal one, up to the point where the quantity of missed branches overcomes the advantage of a deeper exploration.

Results show that 1) the type of algorithm significantly impacts the performance, 2) MM with pruning > MM without pruning > ABP > MCTS.

Overall, results are similar to those obtained on Reversi.

Conclusion

In this article, I've introduced the MiniFrame framework implementing MiniMax, Monte Carlo Tree Search, and Alpha-Beta Pruning. This framework allows implementing solvers in the C programming language for combinatorial optimisation problems. The framework provides tools (debugger, evaluation function qualification, hyperparameter search, TCP/IP server) useful for the implementation of the problem, the design of the evaluation function, and the usage of the solver in various ways and contexts. It also provides an original pruning method for the MiniMax algorithm, which I've shown to perform better than other methods on two examples. The framework has been validated and qualified on the examples introduced here, and on the CodinGame platform with the following bot programming contests (ranking as of 2024-11-15):

MiniFrame has also been used to solve a combinatorial optimisation problem for the company Sakanotochu. The implemented solution is currently used to help define optimal vegetable baskets in a subscription system, saving the users several hours of manual work every week.

The source code of the current version of MiniFrame is available under GPL here. Any questions or comments are welcome by email.

List of versions:

2023-11-12