Knight Outposts

Prophet and chess4j now understand knight outposts. An outpost, as implemented in Prophet, is a square that cannot be attacked by an enemy pawn. Putting a knight on an outpost can be a strong advantage, particularly if that knight is supported by a friendly pawn.

In the following diagram, the knight on D4 is on an outpost square, but the knight on E4 is not since it may be run off by the F7 pawn at some point.

The bonus (or penalty) given for an outpost varies by square. An additional bonus is given if the outpost is supported by a friendly pawn, such as the knight on the D4 square above. The “supported” bonus also varies by square. This is possibly overkill, but with an auto-tuner, I reasoned that the more knobs and dials it has to minimize error, the better. (Or at least, it can’t hurt as long as we guard against over-fitting.)
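For the curious, the outpost test itself is cheap with bitboards. Here’s a rough sketch of what it might look like; the mask helper and bitboard names are illustrative, not Prophet’s or chess4j’s actual code.

    // Sketch of an outpost test for a white knight using bitboards. The helpers and
    // names here are illustrative, not the engine's actual code.
    public final class OutpostSketch {

        // Squares on the files adjacent to 'square', on ranks in front of it (from
        // white's perspective). A black pawn anywhere in this zone could eventually
        // advance and attack the square.
        static long frontAttackSpan(int square) {
            long mask = 0L;
            int file = square % 8;
            int rank = square / 8;   // rank 0 = white's back rank
            for (int r = rank + 1; r < 8; r++) {
                if (file > 0) mask |= 1L << (r * 8 + file - 1);
                if (file < 7) mask |= 1L << (r * 8 + file + 1);
            }
            return mask;
        }

        // An outpost square for white: no black pawn can ever attack it.
        static boolean isOutpost(int square, long blackPawns) {
            return (frontAttackSpan(square) & blackPawns) == 0;
        }

        // The extra "supported" bonus applies when a friendly pawn defends the square.
        static boolean isSupported(int square, long whitePawnAttacks) {
            return (whitePawnAttacks & (1L << square)) != 0;
        }
    }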

As expected, this feature isn’t a huge gain in terms of ELO, but it did net a few points. It also puts the latest development version at +50 ELO over Prophet 4.2, which was my goal before doing a new release. Before the release I’m going to test a couple more terms, both expected to be minor gains at most; after that I plan to switch gears, so it would be good to clear them from the board. Those terms are “trapped bishop” and “supported rook on open file.”

The Prophet and The Gibbon

Graham Banks recently ran a blitz tournament (40 moves/16 minutes) he titled ‘The Prophet and The Gibbon’ between 16 engines, including Prophet 4.2.

Final Standings

21.5 – Prophet 4.2 64-bit
21.0 – SoFCheck 0.9.1-beta 64-bit
18.0 – Gibbon 2.69a 64-bit
18.0 – Isa 2.0.83 64-bit
17.0 – Queen 4.03
16.5 – Fornax 3.0 64-bit
16.5 – Barbarossa 0.6.0 64-bit
14.5 – Horizon 4.4
13.0 – Jazz 840 64-bit
13.0 – Sage 3.53
13.0 – EveAnn 1.72
13.0 – CeeChess 1.3.2 64-bit
12.5 – Napoleon 1.8 64-bit
12.0 – StockNemo 3.0.0.2 64-bit
11.0 – FireFly 2.7.2 64-bit
9.5 – Ares GB 1.1 64-bit

Woo hoo!


The complete tournament pgn (zipped) can be downloaded here:
http://kirill-kryukov.com/chess/discuss … p?id=51143

Passed pawns and Non-linear Mobility

Since I released Prophet 4.2 I’ve made a couple of additional evaluation changes:

  1. The passed pawn bonus has been made more granular. Where it used to be a single flat bonus for a passed pawn, it now varies depending on the pawn’s rank. 40,000 bullet games say that change was worth about 14 ELO.
  2. Bishop and queen mobility has been made non-linear. This change was inspired by Erik Madsen’s MadChess blog – https://www.madchess.net/2014/12/16/madchess-2-0-beta-build-29-piece-mobility/. The idea is to encourage piece development. I had originally plugged Erik’s values in verbatim, but they didn’t mesh well with the existing weights, and testing showed they weakened the program. After running the auto-tuner, this change brought in an additional 22 ELO. (There’s a small sketch of both changes after this list.)
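Roughly, both changes amount to indexed lookup tables along these lines. The numbers below are placeholders for illustration only, not the values the tuner produced.

    // Sketch only: rank-indexed passed pawn bonuses and a non-linear mobility curve.
    // The numbers are placeholders, NOT the tuned values.
    public final class EvalTablesSketch {

        // Passed pawn bonus indexed by rank (white's perspective, rank 0 = back rank).
        static final int[] PASSED_PAWN_BY_RANK = { 0, 10, 15, 25, 40, 65, 100, 0 };

        // Bishop mobility bonus indexed by the number of squares attacked (0..13).
        // Deliberately non-linear: the first few squares of mobility matter most,
        // which encourages getting the pieces developed.
        static final int[] BISHOP_MOBILITY = {
            -25, -12, -2, 5, 11, 16, 20, 23, 25, 27, 28, 29, 30, 30
        };

        static int passedPawnBonus(int rank)      { return PASSED_PAWN_BY_RANK[rank]; }
        static int bishopMobilityBonus(int count) { return BISHOP_MOBILITY[count]; }
    }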

In my first attempt at running the auto-tuner, I just started with the previously tuned weights, plus Erik’s values for bishop and queen mobility, but the tuner couldn’t seem to find any improvements. The error bounced around a little, going up and down without making any real progress. I eventually decided to do a complete reset: I set the piece values to the traditional 1/3/3/5/9 values and everything else to 0, then re-tuned and validated with some bullet games. The learning curve:

Fresh off the heels of these improvements, Prophet played in an informal online engine blitz tourney today. Unfortunately it was a pretty rough outing, placing just 16th of 20 with 2.5 points out of 9. It was a very strong field though. Even the 10th place finisher is nearly 3000 ELO on CCRL’s 40/2 list.

:Tourney Players: Round 9 of 9 
:
:     Name              Rating Score Perfrm Upset  Results 
:     ----------------- ------ ----- ------ ------ ------- 
:  1 +LczTinker         [2971]  6.5  [2937] [   0] +07w =05w =06b =03b +02w =04b =09w +11w +12b 
:  2 +NightmareX        [2909]  6.5  [2939] [   0] +12w =06w +09b =04w -01b =05w +07b +08w +11b 
:  3 +ChessSystemTalX   [2900]  6.5  [2898] [  35] +10w +09w =08b =01w =06b +07w =05b =04w +13b 
:  4 +RubiChess         [2875]  6.5  [2947] [  77] +13w +11w =05b =02b +08w =01w =06b =03b +09w 
:  5 +ArasanX           [2859]  6.0  [2836] [ 110] +14w =01b =04w =08b +12w =02b =03w =09b +16w 
:  6 +WaspX             [2830]  6.0  [2679] [ 181] +15w =02b =01w +17b =03w =08b =04w +13w =10b 
:  7 +TheBaron          [2569]  5.5  [2457] [   3] -01b +14w +16w =11b +17w -03b -02w +18b +15b 
:  8 +Goldbar           [2861]  5.0  [2533] [  19] +16w +17b =03w =05w -04b =06w =13b -02b +18w
:  9 +Marvin            [2752]  5.0  [2683] [ 162] +18w -03b -02w +16b +11w +12b =01b =05w -04b 
: 10 +Nalwald           [2500]  5.0  [2325] [ 165] -03b +18w =15b -12b -13w +17b +16w +14b =06w 
: 11 +atomGoldbar       [2575]  4.5  [2480] [   0] +20w -04b +13w =07w -09b +16b +14w -01b -02w 
: 12 +WaDuuttie         [2567]  4.5  [2411] [   0] -02b +15w =14b +10w -05b -09w +18b +17w -01w 
: 13 +rpiArminius       [2272]  4.0  [2425] [ 522] -04b +20w -11b =14w +10b +15w =08w -06b -03w 
: 14 +atomFloyd         [2242]  4.0  [2267] [ 177] -05b -07b =12w =13b +15w +18w -11b -10w +17b 
: 15 +Skiull            [1966]  3.0  [2170] [ 410] -06b -12b =10w +18w -14b -13b +17w =16b -07w 
: 16 -Prophet           [2253]  2.5  [2325] [ 351] -08b +19w -07b -09w +18b -11w -10b =15w -05b
: 17 -Skipper           [1662]  2.0  [2219] [1120] +19b -08w +18b -06w -07b -10w -15b -12b -14w 
: 18 +atomSargon        [1840]  0.0  [1974] [   0] -09b -10b -17w -15b -16w -14b -12w -07w -08b 
: 19 +atomNightmare     [forf]  0.0  [1557] [   0] -17w -16b 
: 20 +POS               [forf]  0.0  [2023] [   0] -11b -13b 
:
:     Average Rating    2474.2 

Next up: I’m going to continue with the mobility theme a little longer by testing rook mobility, then knight outposts, trapped bishops, and connected rooks on open files. I don’t expect any of those will be worth big points by themselves, but cumulatively they might be worth a bit.

Automated Parameter Tuning in chess4j

INTRODUCTION

I’ve long believed that one of the biggest potential areas of improvement in my chess programs was the tuning of the evaluation parameters. chess4j‘s evaluation function is rather simple; there are gaps in its knowledge, but the issue I’m talking about here is the values assigned to the parameters it does have, relative to each other.  Automated tuning addresses that problem.

Automated tuning in computer chess is not a new concept. In 2014, Peter Österlund (author of the chess program Texel) wrote about his approach of using logistic regression to optimize evaluation parameters.  This approach has since been dubbed the Texel Tuning Method.  While Peter does get credit for popularizing the idea, it goes back even further – at least as far back as 2009.  I won’t rigorously describe the algorithm here as it’s already done in the link referenced above, but I will try to give some intuition without getting bogged down in the math.

  1. Get a bunch of labeled positions.  By “labeled,” I mean – each position is associated with an outcome.  It’s convenient to think of the outcome from the perspective of the white player: win=1, draw=0.5, loss=0.
  2. Given a chess position, formulate a hypothesis.  The hypothesis should be a value in the closed interval [0, 1].   From the perspective of the white player, the hypothesis is the probability of a win.
  3. The error (for a set of evaluation parameters) is a measure of the difference between the actual outcome and the hypothesis.  Measure the error for each position in your test set and take the average.  This is your initial error.
  4. Find a set of evaluation parameters that minimizes the error.
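In symbols: if h(x_i) is the hypothesis for position i, y_i is its actual outcome, and N is the number of positions, then the error being minimized (using the squared difference described in the cost function section below) is simply:

error = ( (y_1 - h(x_1))^2 + (y_2 - h(x_2))^2 + .... + (y_N - h(x_N))^2 ) / N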

Imagine the error (the cumulative difference between your hypothesis and the actual outcome) as a landscape with hills and valleys.  Some hills may be taller than others, and some valleys lower than others.  We want to settle into the lowest spot in a valley.  Ideally, in the LOWEST valley, but that’s really hard.  We’ll settle for ANY valley.

There are various approaches to minimizing the error, with varying degrees of complexity.  I’ve experimented with local search and gradient descent, which I’ll discuss below.

I don’t mean to fully explain how logistic regression or gradient descent works, but I will share a few details about my experience and my implementation.  Most of the code can be found in the tuner package of chess4j.

DATA

Good data is crucial to a successful outcome.   This point can’t be overstated – the end result is limited by the quality of the data you’re training with.  In fact, the data is even more important than the algorithm!  If you think about it, this should be obvious.

Building a good data set isn’t an overly difficult task, but it is a tedious and time-consuming one.  Fortunately, others have already done the work and have been generous enough to share it with the community.  The well-known Zurichess set is a common one, and was my starting point.  It contains around 725,000 labeled “quiet” positions.  Andrew Grant (Ethereal) has shared a few larger data sets containing 10 million or so positions each:

https://drive.google.com/file/d/1Y3haCS … sp=sharing
https://drive.google.com/file/d/1PdU6oL … sp=sharing
https://drive.google.com/file/d/1m5fc4d … sp=sharing

I did have slightly better results using Andrew’s data.  I don’t know if that’s because of the number of positions in the set, or the quality of the data itself, or possibly a combination of both.

In his paper Evaluation & Tuning in Chess Engines, Andrew describes his approach to generating datasets.  (I’m assuming the same methodology was used in the shared ones.)   He sampled positions from self play games, then performed high depth searches from each of those positions to produce a principal variation.  The terminal position of the principal variation was the position added to the dataset.  A major benefit of this approach is that the evaluation function can be applied directly, without first running a quiescence search.  This is a major time saver.

HYPOTHESIS

See: https://github.com/jswaff/chess4j/blob/v5.0/chess4j-java/src/main/java/com/jamesswafford/chess4j/tuner/Hypothesis.java

The basic idea behind almost any learning algorithm is to minimize the error between “truth” and what you think will happen (your “hypothesis”).  In the case of a chess game, the outcome is one of three things: white wins, black wins (white loses), or a draw.  Let’s give our outcome a name, say “y”.  Let y=1 be the case where white wins, y=0 be the case where white loses, and y=0.5 be the case where the game is drawn.

Now, our hypothesis, call it “h(x)” where x is a vector of our evaluation parameter values, is just the probability of a white win.  We can map the output of the evaluation function to a hypothesis using a standard sigmoid function.  If the eval says 0, our hypothesis is 0.5 (draw).  The larger the eval score in the positive direction, the closer the hypothesis moves towards 1 (white wins).  As the eval gets more negative, our hypothesis moves towards 0.

Note: Texel did not use the standard logistic sigmoid function, but a more general sigmoid with a scaling constant K, chosen to minimize the error.  Texel used a K of -1.13, so I just did the same.  It’s possible that with some tuning of K the tuner might converge a little faster, but I haven’t attempted this, nor do I plan to.
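For reference, here is a minimal sketch of that mapping using the common Texel-style form 1/(1 + 10^(-K*s/400)). The exact formula and the sign convention for K in chess4j may differ, so treat the constant below as illustrative.

    // Sketch: map a centipawn score (from white's perspective) to a win probability.
    // Uses the common Texel-style sigmoid; the sign convention for K depends on the
    // exact form used, so the constant below is illustrative only.
    public final class HypothesisSketch {

        static final double K = 1.13;

        // score = 0   -> 0.5 (drawish)
        // score >> 0  -> approaches 1.0 (white wins)
        // score << 0  -> approaches 0.0 (white loses)
        static double hypothesis(double scoreCp) {
            return 1.0 / (1.0 + Math.pow(10.0, -K * scoreCp / 400.0));
        }
    }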

COST FUNCTION

See: https://github.com/jswaff/chess4j/blob/v5.0/chess4j-java/src/main/java/com/jamesswafford/chess4j/tuner/CostFunction.java

The goal of training is to minimize error, which is where the cost function comes in. We want our cost function to produce large values when our predictions are wrong, and smaller and smaller values as our predictions get closer to the actual outcome.  The simplest way to do this is to take the absolute value of the difference: |y-h|.  I used a common variation of that: (y-h)^2.  Squaring has the effect of “amplifying” errors.

The overall cost (error) associated with an evaluation parameter vector is simply the average cost over all the training data.
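As a sketch (the names here are illustrative, not the actual tuner code):

    // Sketch of the squared-error cost, averaged over a set of labeled positions.
    // "outcomes" holds 1.0 / 0.5 / 0.0 from white's perspective; "hypotheses" holds
    // the predicted win probabilities from the sigmoid above.
    public final class CostSketch {

        static double cost(double outcome, double hypothesis) {
            double diff = outcome - hypothesis;
            return diff * diff;
        }

        static double averageCost(double[] outcomes, double[] hypotheses) {
            double total = 0.0;
            for (int i = 0; i < outcomes.length; i++) {
                total += cost(outcomes[i], hypotheses[i]);
            }
            return total / outcomes.length;
        }
    }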

MINIMIZING ERROR WITH LOCAL SEARCH

I used local search in my initial attempts.  The idea is very simple – adjust the evaluation parameters, one at a time, each time re-measuring the error.  Please reference the pseudo code in the Texel Tuning Wiki article for the details, but this is essentially a very slow walk (more like a stumble) down a hill. The stumble continues until you are at the bottom of the hill.
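In Java-ish terms the loop looks something like this sketch (not the actual chess4j implementation):

    import java.util.function.ToDoubleFunction;

    // Sketch of the local search: nudge each parameter up or down by 1, keep any
    // change that lowers the average error, and repeat until a full pass over all
    // parameters makes no improvement.
    public final class LocalSearchSketch {

        static int[] localSearch(int[] params, ToDoubleFunction<int[]> error) {
            double bestError = error.applyAsDouble(params);
            boolean improved = true;
            while (improved) {
                improved = false;
                for (int i = 0; i < params.length; i++) {
                    for (int delta : new int[] { 1, -1 }) {
                        params[i] += delta;
                        double e = error.applyAsDouble(params);
                        if (e < bestError) {
                            bestError = e;      // keep the change
                            improved = true;
                        } else {
                            params[i] -= delta; // revert
                        }
                    }
                }
            }
            return params;
        }
    }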

Pros: This approach is guaranteed to find a local minimum.  Indeed, I used this approach to derive the eval parameters in Prophet 4.1, which was a 100 elo improvement over Prophet 4.0.  Another major pro is that it’s easy to implement.  It will likely require only minimal changes to your evaluation function.

Cons: It is a naive approach, and consequently VERY slow.  As in, hours or even days.

Using a local search is a great start, but if you are actively developing and would like to be able to iterate quickly, you’ll likely want a faster approach.  That is where gradient descent comes in.

MINIMIZING ERROR WITH GRADIENT DESCENT

See: https://github.com/jswaff/chess4j/blob/v5.0/chess4j-java/src/main/java/com/jamesswafford/chess4j/tuner/Gradient.java

Gradient descent is a commonly used machine learning algorithm that minimizes some (differentiable) function by moving in the direction of steepest descent.  Where a local search takes a drunken stumble down the hill, gradient descent takes a more direct path.  Where local search takes hours (or days!), gradient descent takes minutes.  Unfortunately, this “steeper walk” also means a “steeper learning curve.”  Understanding gradient descent does require a little calculus and linear algebra.

The biggest challenge for me was to start thinking about the evaluation differently than I had been for the last 20+ years.  You’ll have to think about the evaluation in terms of features and weights.   Your final evaluation is a sum of products:

eval = F0 * W0 + F1 * W1 + F2 * W2 + .... + FN * WN

The beauty of gradient descent is that it adjusts the individual weights according to how active the feature was.  Consider a position that your program evaluates incorrectly.  If the position didn’t have any queens on the board, why adjust any of the terms related to queens?
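To make that concrete, here’s a rough sketch of the per-weight gradient for the squared-error cost with the sigmoid hypothesis above (constant factors from the sigmoid’s derivative get folded into the learning rate in practice):

    public final class GradientSketch {
        // Gradient of (y - h)^2 with respect to weight Wi, where h = sigmoid(eval)
        // and eval = F0*W0 + F1*W1 + .... + FN*WN. The key point is the factor fi:
        // if a feature wasn't active in the position, its weight doesn't move.
        static double gradientForWeight(double y, double h, double fi) {
            double dSigmoid = h * (1.0 - h);   // sigmoid derivative, up to a constant
            return -2.0 * (y - h) * dSigmoid * fi;
        }
    }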

This will almost surely require refactoring your evaluation function.  In chess4j, I augmented the standard evaluation functions with “feature extraction” functions.  See EvalRook for an example.  Note that tapered evaluations make this slightly more complex.  Instead of a feature being “on or off,” you’ll have to assign it a value indicating how active it is according to the phase of the game.

TRAINING

See: https://github.com/jswaff/chess4j/blob/v5.0/chess4j-java/src/main/java/com/jamesswafford/chess4j/tuner/LogisticRegressionTuner.java

For each training session, the test data was shuffled and then split into two subsets.  The first subset contained 80% of the records and was used for the actual training.  The other 20% were used for validation (technically the “test set”).   Ideally, the training error should decrease after every iteration, but with gradient descent (particularly stochastic gradient descent) that may not be the case.  The important point is that the error trends downward (this is tough if you’re a perfectionist).

Learning curves are tremendously useful for knowing when to STOP training.  If you go too far, you risk a serious problem – overfitting.  This is when your model fits the training data “too well” and fails to generalize.  In the sample learning curve below I plotted the error curve for the training data vs the error curve for the test (validation) data.  In this case the valley around iteration 800 is probably the best place to stop.  Things could get interesting if the two curves begin to diverge, but with a large enough data set that shouldn’t be an issue.

After every training session I would validate the results by running a gauntlet of 10,000 bullet games (1 second + 0.5 second increment) against a collection of 8 sparring partners.  Periodically I would also run self play matches.

HYPER PARAMETERS

There are several hyper parameters that influence the outcome of a training session.  Finding the ideal values for each took a little trial and error.  (Actually it is possible to automate this using a validation set, but it didn’t seem worth it here.)

  • Alpha (the learning rate).  Think of this as the size of your step as you walk down the hill towards the bottom of the valley.  A small alpha will take a very long time to converge.  Too big, and you may “overstep” the bottom.  I ended up with alpha=25.
  • Batch size.  With millions of test positions, it’s not computationally feasible to train against all positions at once.  A typical approach is to choose a small subset, running many iterations with different subsets each time.  I tested batch sizes from 10 up to 10,000.  The learning curves seemed similar, just noisier with the smaller batch sizes.  I settled on 10,000.  (There’s a rough sketch of the training loop at the end of this list.)
  • Lambda.  This is a decay parameter.  The idea is to taper down the size of your steps down the hill over time.  Anything I tried seemed worse than lambda=0 so I eventually disabled it altogether.
  • Regularization.  The idea behind regularization is to penalize the size of the parameters.  That is, not only do we want the eval weights to be in the proper ratios to each other, but we want the smallest values that satisfy those ratios.  In practice I never observed a difference so I eventually disabled it.
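Putting the first two of those together, one epoch of the training loop amounts to something like this sketch (the names and types are illustrative, not the actual tuner code):

    import java.util.Collections;
    import java.util.List;

    // Sketch of one epoch of mini-batch gradient descent. LabeledPosition and
    // GradientFn are placeholders; the real tuner lives in the chess4j tuner package.
    public final class MiniBatchSketch {

        static final double ALPHA = 25.0;       // learning rate
        static final int BATCH_SIZE = 10_000;

        interface GradientFn {
            // averaged gradient of the cost over the batch, one entry per weight
            double[] compute(double[] weights, List<LabeledPosition> batch);
        }

        // Placeholder for a position (FEN) plus its game outcome (1.0 / 0.5 / 0.0).
        record LabeledPosition(String fen, double outcome) {}

        static double[] epoch(double[] weights, List<LabeledPosition> trainingSet,
                              GradientFn gradient) {
            Collections.shuffle(trainingSet);   // assumes a mutable list
            for (int start = 0; start < trainingSet.size(); start += BATCH_SIZE) {
                int end = Math.min(start + BATCH_SIZE, trainingSet.size());
                List<LabeledPosition> batch = trainingSet.subList(start, end);
                double[] grad = gradient.compute(weights, batch);
                for (int i = 0; i < weights.length; i++) {
                    weights[i] -= ALPHA * grad[i];   // step "down the hill"
                }
            }
            return weights;
        }
    }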

RESULTS

This exercise brought in around 120 ELO for Prophet.  The difference between Prophet 4.0 and 4.1 was around 100 ELO and can be fully attributed to optimizing evaluation parameters.  The difference between Prophet 4.1 and 4.2 is around 50 ELO, but some of that is due to the addition of pawn hashing and other eval terms.

I find it instructive to visualize the piece square tables as heat maps.  The following are the pawn piece square tables before and after training.  Note in 4.0 there was only a single pawn table, but by 4.2 I had split those into middle and endgame tables.  A couple of things that really stand out to me are how undervalued a pawn on the 7th rank was, and just how much the danger of pushing a pawn around the king was underestimated.

Pawn PST before training

Pawn PST after training (middle game)

Pawn PST after training (endgame)

FUTURE WORK

In the near term, the focus will be on adding some additional evaluation terms, particularly for pawns.  The nice thing about having an auto-tuning capability is that I can focus on correctly implementing a term without worrying too much about how to assign a value to the term.  Just rerun the tuner and verify with some games.

Longer term (perhaps Spring 2023) I plan to delve into neural networks and my own implementation of NNUE.

Prophet 4.2 and chess4j 5.0 are released

I’m happy to announce updates to both chess engines. Prophet 4.2 is approximately 50 elo stronger than 4.1, and 150 elo stronger than 4.0. (I missed a release announcement or two while this development blog was offline.) The most significant change, and the reason the chess4j major version number has been incremented, is that chess4j now includes an auto-tuner! The tuner uses logistic regression with gradient descent to optimize evaluation terms. I’ll write more detail about that in a separate post. The optimized weights have been added into Prophet, so it benefits from that work as well. Tapered evaluation has also been fully implemented, which added a few elo. I say “fully” because the king evaluation was already tapered, but now both programs evaluate the position with both a middle game and an endgame score, and weight them based on the material on the board. Finally, some concept of mobility has been added – a simple count of available squares for both bishops and queens.

Here is how Prophet 4.2 stacks up against its current sparring partners in 1+0.5 games:

Rank  Name               Elo  Games  Score  Draws
   1  tantabus-2.0.0      82  46250    63%    22%
   2  arasan-13.4         67  46250    60%    21%
   3  barbarossa-0.6.0    58  46250    59%    20%
   4  qapla-0.1.1         31  46250    55%    24%
   5  prophet-4.2         24  40000    53%    24%
   6  loki-3.5            23  46250    54%    23%
   7  myrddin-0.88        -2  46250    50%    24%
   8  prophet-4.1        -36  40000    44%    23%
   9  tjchess-1.3        -83  46250    37%    21%
  10  jazz-840          -134  46250    30%    19%
  11  prophet-4.0       -141  10000    30%    18%

PVS – take 2

Some time back I tried implementing a Principal Variation Search, but as I wrote about in my post PVS – Another Fast Fail, the results were not good. At the time I concluded that if PVS is not a win, it must mean that the cost of the re-searches is outweighing the nodes saved by doing zero-width searches. For that to be the case, it must mean that too often the first move is not the best move, which points to move ordering.
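For reference, the basic shape of PVS is something like the sketch below; Board, Move, and the helper calls are placeholders rather than Prophet’s actual search code.

    // Sketch of PVS: search the first move with the full (alpha, beta) window, probe
    // the rest with a zero-width window, and re-search on a fail high. Board, Move,
    // genMoves(), and evaluate() are placeholders, not the engine's actual API.
    static int pvs(Board board, int depth, int alpha, int beta) {
        if (depth == 0) return evaluate(board);

        boolean firstMove = true;
        for (Move move : genMoves(board)) {
            board.apply(move);
            int score;
            if (firstMove) {
                score = -pvs(board, depth - 1, -beta, -alpha);
                firstMove = false;
            } else {
                // zero-width probe: just ask "can this move beat the best so far?"
                score = -pvs(board, depth - 1, -alpha - 1, -alpha);
                if (score > alpha && score < beta) {
                    // it can - re-search with the full window to get an exact score
                    score = -pvs(board, depth - 1, -beta, -alpha);
                }
            }
            board.undo(move);

            if (score >= beta) return beta;    // fail-hard beta cutoff
            if (score > alpha) alpha = score;
        }
        return alpha;
    }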

Since then move ordering has certainly improved, as documented in this post on Move Ordering. So, I decided to give PVS another try. In my first attempt, it appeared to be another loss. Then I decided not to do PVS at the root node, and now it appears to be a very small win.

A win is a win, so I’m merging the changes in, but I think there is more to do here. My suspicion is that, as move ordering improves, the benefits of PVS will increase. The most obvious way to improve move ordering is to add a depth-preferred hash table (the current strategy is a very naive “always replace”).

It seems like PVS at the root should work as well, if the program can reliably predict the best move often enough. I know a lot of programs put extra effort into ordering the moves at the root. I remember reading that Bob Hyatt’s Crafty does a quiescence search at the root. So, this is on the backlog as well, and once complete I will revisit the idea of PVS at the root.

For now, it is on to the next thing – Late Move Reductions. I’m hopeful that will yield a significant ELO increase, perhaps finally putting P4 on par with P3.

Small Improvement to “bad captures”

In my recent post on move ordering, I identified a potential area of improvement to the criteria for deciding if a capture is “good” or “bad.” As I wrote in that post, a capture is good if:

  1. It is a promotion (technically even non-capturing promotions are included)
  2. The value of the captured piece is greater than the value of the capturing piece
  3. The Static Exchange Evaluator (SEE) score is non-negative.

The issue is with knights and bishops. They are roughly the same value (which one is more valuable really depends on the position), but in Prophet the bishop has a slightly higher value: a knight has a material value equal to 3 pawns, while a bishop has 3.2 pawns. The consequence is that a simple Bishop x Knight capture would be categorized as “bad” and not tried until all non-captures have been tried.

I don’t have the link handy, but I read an older post on talkchess.com where Tord Romstad, the author of Glaurung (the precursor to Stockfish), mentioned that he used different piece values for move ordering than he did in the evaluation. He said he used 1, 3, 3, 5 and 10. That means Bishop x Knight captures, as well as Knight x Bishop captures, would both be categorized as “good.” Also, by giving the queen a value of 10, giving two rooks for a queen would be considered equal by the SEE, whereas giving a queen + pawn for two rooks would have a negative score.
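In other words, the engine keeps one set of piece values for the evaluation and a separate, coarser set purely for ordering and the SEE. Something like this sketch (the evaluation values other than the knight and bishop are just placeholders):

    // Sketch: separate piece values for move ordering / SEE vs. evaluation.
    // Indexed by piece type: PAWN, KNIGHT, BISHOP, ROOK, QUEEN.
    static final int[] EVAL_VALUES     = { 100, 300, 320, 500, 900 };   // 900 is a placeholder
    static final int[] ORDERING_VALUES = { 100, 300, 300, 500, 1000 };  // ordering / SEE only

    // With ORDERING_VALUES, BxN and NxB are both even captures ("good"), two rooks
    // for a queen is dead even in the SEE, and queen + pawn for two rooks is not.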

Sure enough, that simple change seems to be worth about 6 ELO.

Pruning “bad” captures in quiescence

As suspected, the change to move ordering to separate good captures from bad captures has already paid off. Moving bad captures to the bottom of the move order list made it trivial to skip bad captures in the quiescence search altogether. This is an idea I first read about in a discussion on r.g.c.c. between Bob Hyatt, Feng-Hsiung Hsu and others here: https://groups.google.com/g/rec.games.chess.computer/c/H6XjY2L13eQ . Hyatt claimed a small improvement, though Hsu was skeptical. Time has proven Hyatt correct though; I believe this is something most strong programs do. In any event, it seems to be worth about 10 ELO for Prophet4, so it’s a keeper.
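The change itself is tiny. Since bad captures now sit at the bottom of the move list, the quiescence loop can simply stop when it reaches them. Roughly (with placeholder names, not the actual engine API):

    // Sketch of pruning "bad" captures in quiescence. Because bad captures are ordered
    // last, the loop can just stop when it reaches the first one. Board, Move,
    // genCaptures(), isBadCapture(), and evaluate() are placeholders.
    static int quiescence(Board board, int alpha, int beta) {
        int standPat = evaluate(board);
        if (standPat >= beta) return beta;
        if (standPat > alpha) alpha = standPat;

        for (Move move : genCaptures(board)) {   // good captures first, bad captures last
            if (isBadCapture(board, move)) {
                break;                           // everything from here on is "bad" - skip it
            }
            board.apply(move);
            int score = -quiescence(board, -beta, -alpha);
            board.undo(move);
            if (score >= beta) return beta;
            if (score > alpha) alpha = score;
        }
        return alpha;
    }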

Move Ordering

Improving move ordering has been on the radar for a while now. I started to suspect that move ordering needed some work when my initial attempts at a PVS search and aspiration windows both failed. I reasoned that, if move ordering is subpar, re-searches would occur too often, causing an overall increase in node counts.

To know, you have to measure, so I added data to the logfiles and wrote a Python script to aggregate (1) time to depth, (2) effective branching factor, and (3) the percentage of nodes in which we get a fail high on the first move and within the first four moves.

Once I was able to measure, I changed the move ordering scheme FROM:

PV move -> Hash move -> All captures in MVV/LVA order -> Killer 1 -> Killer 2 -> noncaptures in the order they are generated

TO:

PV move -> Hash Move -> “Good captures” in MVV/LVA order -> Killer 1 -> Killer 2 -> noncaptures -> bad captures in SEE order.

A “good capture” is one that is a promotion, a capture in which the value of the captured piece is at least that of the capturing piece, or one with a non-negative SEE value.
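As a sketch, the classification boils down to something like this (pieceValue() and see() stand in for the engine’s actual piece values and static exchange evaluator):

    // Sketch of the "good capture" test described above. Board, Move, pieceValue(),
    // and see() are placeholders for the engine's actual routines.
    static boolean isGoodCapture(Board board, Move move) {
        if (move.isPromotion()) {
            return true;                                   // promotions are always "good"
        }
        if (pieceValue(move.captured()) >= pieceValue(move.mover())) {
            return true;                                   // winning or even capture by value
        }
        return see(board, move) >= 0;                      // otherwise fall back to the SEE
    }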

Incidentally, while researching this I came across a post I myself had made many years ago: http://talkchess.com/forum3/viewtopic.php?f=7&t=15198&hilit=defer+losing+captures&sid=5543f43760ce939ed16deacd59710e06

The change doesn’t seem to add more than just a couple of ELO directly. However, in non-tactical test suites all the measured metrics were improved: time to solution is down, effective branching factor is lower, the percentage of fail highs on the first move is improved, and the percentage of fail highs in the first four moves is dramatically improved. Additionally, the number of nodes required to reach a solution is cut by about a third, with only a 7% or so decrease in speed (nodes per second). I believe this sets the stage to take another stab at PVS and aspiration windows. First though, I’m going to take a stab at pruning bad captures from the quiescence search.

One potential (probable) area of improvement is that knights and bishops have slightly different values (bishops being the more valuable). For the purposes of determining if a capture is “good” when classifying the captures, they should probably be considered equal.

New Testing Rig

Prophet4 finally has a proper testing rig! A few weeks ago, I purchased a Dell Alienware system — an 8 core (16 logical) AMD Ryzen 7 5800 with 32 GB of RAM and an AMD Radeon RX600XT graphics card.

This replaces the single core laptop I have been using. This is pretty exciting as it will allow P4 testing to go at 8x the speed it did before. Just to break the machine in, I ran the first ever gauntlet with P4. Here are the results:

Rank  Name            Elo  Games  Score  Draws
   1  plisk 0.2.7d    103  29785    66%    19%
   2  tjchess 1.3      81  29787    63%    22%
   3  jazz 840         52  29786    58%    21%
   4  myrddin 0.87     44  29786    57%    21%
   5  Horizon 4.4      15  29785    52%    19%
   6  jumbo 0.4.17     -5  29783    49%    21%
   7  p3-20181124      -8  29786    48%    25%
   8  p4-20210407     -86  17211    36%    32%
   9  prophet2_ja     -94  16000    35%    18%
  10  tcb 0052       -182  29785    23%    15%

This closely matches the results I obtained a few years ago when I announced that Prophet3 20180811 was released. One notable exception is TCB – it seems to have done much worse on this machine, for whatever reason.

So, it seems P4 is already on par with P2, and is within 78 elo or so of P3. This is encouraging, as the P4 rewrite is still not complete. I’m feeling pretty confident that when it is, it will be at least as strong as P3, and then the real work of improving can begin. And, with the fast feedback from a proper testing rig, I don’t think that should be all that difficult. My goal is to achieve +100 elo over P3 before doing a release.

Note– the versions of the chess engines used in this gauntlet are quite old by now; many of them likely have updates that could be significantly stronger. During the rewrite process, I’ve tried to keep everything “as is” — the goal is to compare P4 to P3, not to other engines. Once the rewrite is complete and P4 grows in strength, new testing engines will be cycled in as the current set is cycled out.