Rudim [3] - .NET Core Chess Engine - States, Perft & Search
Rudim is a chess engine I am building in my spare time to learn how chess engines work and to improve my knowledge of C#.
This blog post is a part of a series of blog posts on Rudim. Click on the 'rudim' tag at the top of the page to see all posts in the series.
The next step towards making Rudim an engine that can actually play chess is to implement a search. Currently we only evaluate a position; instead, we should be generating all moves and finding the best of them. Evaluating a move directly is going to be highly inaccurate, because it ignores how the position might turn out after you play that move. Before diving into that problem, let's look at a different concept.
States
We have a move generator to generate all moves (legal or not), and we have a function to make a move. To combine the two, we need to be able to restore or undo a move if it turns out to be illegal. This requires some code to reverse whatever changes MakeMove() had made. There are two primary approaches to doing this.
- The first approach is simple and straightforward: copy the board's entire state before the move is made and store it somewhere in a LIFO manner. When you need it back, pop the last element off the LIFO structure and restore whatever you saved. This uses very little CPU in very performant languages if the board state is laid out the right way, but in higher-level languages it is usually slow, especially with an object-oriented design. This approach is commonly called Copy-Make.
- The other way is to store only what is necessary and compute everything else. The irreversible aspects of the board's state - castling rights, the en passant square and so on - need to be stored somewhere, but the reversible aspects - such as the positions of the pieces - can be recomputed with a bit of code once you know which move needs to be undone. This approach is generally called Make-Unmake.
Rudim implements the second approach, though it initially used the first for simplicity's sake. Moving to the second approach cut the engine's runtime by almost 50%.
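A rough sketch of the second approach is below; the field names and types are illustrative placeholders, not Rudim's actual API. The idea is that only the irreversible bits are pushed onto a stack before a move is made, and everything else is reversed from the move itself.

```csharp
// Sketch only: the Move and Board shapes here are illustrative, not Rudim's actual API.
using System.Collections.Generic;

public readonly struct Move
{
    public Move(int source, int target) { Source = source; Target = target; }
    public int Source { get; }
    public int Target { get; }
}

public class Board
{
    // Irreversible state: MakeMove overwrites these and UndoMove cannot recompute them.
    public int CastlingRights { get; private set; }
    public int EnPassantSquare { get; private set; }

    private readonly Stack<(int Castling, int EnPassant, int Captured)> _history = new();

    public void MakeMove(Move move)
    {
        // Save only what cannot be reconstructed from the move itself.
        _history.Push((CastlingRights, EnPassantSquare, PieceOn(move.Target)));

        // ... move the piece, remove any captured piece, update CastlingRights,
        //     EnPassantSquare and the side to move ...
    }

    public void UndoMove(Move move)
    {
        // Piece placement is reversible: put the moving piece back and restore any capture.
        // ... reverse the piece movement using move.Source and move.Target ...

        // The irreversible parts come straight back from the saved entry.
        var (castling, enPassant, _) = _history.Pop();
        CastlingRights = castling;
        EnPassantSquare = enPassant;
    }

    private int PieceOn(int square)
    {
        // ... look up which piece (if any) sits on this square ...
        return 0;
    }
}
```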
Perft
We have most of the aspects of the minimal engine ready - we can represent a position, we can generate all the moves in that position, we can make a move on the position and then take back the move if we want to.
Because of the enormous number of possible positions, we can't exhaustively test our code against every single one of them to make sure it works all the time. Instead, the usual way to verify that all the above features work correctly is to run a Perft.
Perft works by taking a given position, generating all moves in that position, and traversing each legal move down to a certain depth. Accumulating the traversals of all the moves gives you the number of nodes you visited. Since plenty of proven chess engines already exist, these node counts are well known, can be found online, and can be used to cross-check whether your move generation is working correctly.
Rudim implements Perft as a plain recursive traversal over the moves in a position.
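A minimal sketch of such a traversal, assuming a hypothetical Board with GenerateMoves() (pseudo-legal), MakeMove(), UndoMove() and a legality check - not Rudim's exact signatures:

```csharp
// Sketch only: the Board and Move types and their methods are assumed placeholders.
public static class Perft
{
    public static ulong Count(Board board, int depth)
    {
        if (depth == 0)
            return 1;

        ulong nodes = 0;
        foreach (Move move in board.GenerateMoves())        // pseudo-legal moves
        {
            board.MakeMove(move);
            if (!board.LastMoveLeftKingInCheck())           // discard illegal moves
                nodes += Count(board, depth - 1);
            board.UndoMove(move);
        }
        return nodes;
    }
}
```

From the starting position, Count(board, 1) should return 20, Count(board, 2) should return 400, and Count(board, 3) should return 8,902 - the well-known reference values.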
Ideally, your perft numbers should match what you find online; if they don't, your code has an implementation flaw, and you might want to analyze the perft results more deeply with some custom tests. It is best to run perft on multiple positions, not just the starting position, since a single position won't exercise every combination of moves - castling, promotions, en passant and so on.
Search
Perft is a bare-bones traversal. When we actually try to find the best move, we want to do what perft does, but also evaluate the positions we reach.
The next three topics are going to be pretty hard to understand without a detailed explanation. I suggest reading Bruce Moreland's Chess Programming series if you haven't come across these concepts before; his explanations are the best I have found so far. Unfortunately, his site has been down for a few years and is only available through archived versions.
The first concept you need to understand is Min-Max. White's goal in a position is to play the move that gives the most positive score, and Black's goal is to play the move that gives the most negative score. That is exactly what min-max captures: one player tries to maximise a score while the other tries to minimise it.
Bruce's website has a snippet of the MinMax algorithm, written as a pair of Max and Min functions.
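The sketch below captures the shape of that idea in C# - a reconstruction, not Bruce's original code. Board, Move, GenerateLegalMoves() and Evaluate() are assumed placeholders, with Evaluate() returning a score from White's point of view:

```csharp
// Sketch only: these methods would live inside a search class, with `using System;` for Math.
static int Max(Board board, int depth)
{
    if (depth == 0)
        return board.Evaluate();                        // positive = good for White

    int best = int.MinValue;
    foreach (Move move in board.GenerateLegalMoves())
    {
        board.MakeMove(move);
        best = Math.Max(best, Min(board, depth - 1));   // White picks the highest reply score
        board.UndoMove(move);
    }
    return best;
}

static int Min(Board board, int depth)
{
    if (depth == 0)
        return board.Evaluate();

    int best = int.MaxValue;
    foreach (Move move in board.GenerateLegalMoves())
    {
        board.MakeMove(move);
        best = Math.Min(best, Max(board, depth - 1));   // Black picks the lowest reply score
        board.UndoMove(move);
    }
    return best;
}
```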
A small optimization - really an extension - of this is Negamax, where you don't need separate Min and Max functions; instead, each player maximises the negation of the opponent's score. Folding the two functions above into one gives you Negamax.
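A compact sketch, using the same assumed Board API as above; note that Evaluate() must now return the score from the side to move's perspective:

```csharp
// Sketch only: Evaluate() returns the score from the side to move's perspective.
static int Negamax(Board board, int depth)
{
    if (depth == 0)
        return board.Evaluate();

    int best = int.MinValue + 1;                            // +1 so the value can safely be negated
    foreach (Move move in board.GenerateLegalMoves())
    {
        board.MakeMove(move);
        best = Math.Max(best, -Negamax(board, depth - 1));  // my best score is the opponent's worst
        board.UndoMove(move);
    }
    return best;
}
```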
If you tried out perft, you will have seen how the node counts grow exponentially with depth. Even if your perft performs really well and manages a few million nodes per second, you now have to add the overhead of actually evaluating the position at each node. That makes higher depths infeasible, and looking only 3 or 4 half-moves ahead gives you a chess engine that probably won't play as well as you want it to.
Fortunately for us, there is a way to SAFELY bring down the number of nodes traversed without affecting the result. It is called Alpha-Beta Pruning, and it revolves around the fact that the opponent will always choose the best move available to them. During the search, the best outcome the opponent can already guarantee elsewhere is tracked in a value called beta. If a move in the current position evaluates to a score greater than beta, you can disregard the whole position right away: the opponent is never going to allow you to reach it, because they already have a line that limits you to beta.
Check out Bruce's Blog for a great explanation on this topic.
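A Negamax search with alpha-beta cutoffs might look roughly like this sketch (same assumed Board API as the earlier examples):

```csharp
// Sketch only: fail-hard Negamax with alpha-beta cutoffs.
static int AlphaBeta(Board board, int depth, int alpha, int beta)
{
    if (depth == 0)
        return board.Evaluate();                            // from the side to move's perspective

    foreach (Move move in board.GenerateLegalMoves())
    {
        board.MakeMove(move);
        int score = -AlphaBeta(board, depth - 1, -beta, -alpha);
        board.UndoMove(move);

        if (score >= beta)
            return beta;                                    // opponent already has a better option: cut off
        if (score > alpha)
            alpha = score;                                  // new best score for the side to move
    }
    return alpha;
}

// Initial call uses a full window, e.g. AlphaBeta(board, 6, int.MinValue + 1, int.MaxValue - 1);
```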
So we've got a good algorithm that can search to a fair depth. Sounds great, but unfortunately Minmax with Alpha-Beta pruning has two well-known weak spots: the horizon effect and its sensitivity to move ordering.
Move ordering is what allows Alpha-Beta to actually do its job. If the moves are ordered such that the beta cutoff only happens towards the end of each search loop, pruning almost never kicks in and the number of nodes evaluated gets very close to what a plain Minmax search would traverse. This is mitigated by various move-ordering techniques: put the moves most likely to cause a beta cutoff at the front of the list. Common choices are MVV-LVA (Most Valuable Victim - Least Valuable Attacker) for captures and the killer heuristic for quiet moves, as sketched below. There are plenty of other heuristics that help the cutoff happen earlier and avoid extra node traversals; the choice of which to use is yours.
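For example, a simple MVV-LVA score for ordering captures might be sketched like this (the piece values, PieceOn() helper and Move.IsCapture flag are illustrative assumptions):

```csharp
// Sketch only: MVV-LVA - prefer capturing the Most Valuable Victim with the Least Valuable Attacker.
static readonly int[] PieceValue = { 100, 300, 320, 500, 900, 10000 };   // P, N, B, R, Q, K

static int MvvLvaScore(Board board, Move move)
{
    if (!move.IsCapture)
        return 0;                                        // quiet moves are ordered by other heuristics

    int victim = PieceValue[board.PieceOn(move.Target)];
    int attacker = PieceValue[board.PieceOn(move.Source)];
    return victim * 10 - attacker;                       // big victim + cheap attacker => try first
}
```

Sorting the generated moves by this score in descending order before the search loop makes the beta cutoff far more likely to happen on one of the first moves tried.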
The horizon effect is a bit different. It happens because any alpha-beta search runs to a fixed depth - whether that is 3 moves or 20, the engine is forced to evaluate the position based on what it sees on the board at that point, and it can't account for forced captures and trades that would come up right after this horizon. Fortunately, this too can be mitigated, with Quiescence Search. The algorithm is similar to Negamax (you can apply alpha-beta pruning to the quiescence search as well), but you only traverse captures, and you do an extra comparison at the beginning to prune off nodes that are definitely not worth searching.
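A sketch of such a quiescence search, in the same Negamax-with-alpha-beta style as above; the stand-pat comparison at the top is the extra pruning step:

```csharp
// Sketch only: at the horizon, keep searching captures until the position is quiet.
static int Quiescence(Board board, int alpha, int beta)
{
    int standPat = board.Evaluate();                     // the score if we simply stop here
    if (standPat >= beta)
        return beta;                                     // already too good - the opponent won't allow this line
    if (standPat > alpha)
        alpha = standPat;

    foreach (Move move in board.GenerateLegalMoves())
    {
        if (!move.IsCapture)
            continue;                                    // only tactical moves past the horizon

        board.MakeMove(move);
        int score = -Quiescence(board, -beta, -alpha);
        board.UndoMove(move);

        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}

// Called from AlphaBeta in place of board.Evaluate() when depth reaches 0.
```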
Fin
This brings us to the end of the main parts of building a chess engine. The engine can now play chess - it can take a position, search through all moves, and efficiently produce an above-average move with its evaluation function. The road ahead is optimizations and additions that help the engine. Concepts such as iterative deepening are useful in games where time is a constraint and you can't always search to a great depth. Optimizations to the search and evaluation functions also make the engine play a lot better. There are a lot of such optimizations, and I will explore some of them as I go forward with Rudim.
Till the next time, adios!