Towards NNUE

A good part of the development effort over the previous year was spent optimizing evaluation weights using logistic regression, first using the Texel Tuning Method (simple but slow), and later using a more standard gradient descent approach (faster, but more complicated). The ultimate goal, however, has been to develop a solution that doesn’t require domain knowledge at all. What if there were a way to just let the machine figure out for itself what is “good” about a chess position by examining enough training positions? A supervised machine learning approach such as a neural network would be perfect here.

One of the key challenges in using a neural network based evaluation within an alpha-beta search is the loss of speed. There is no getting around the fact that a neural network based evaluator is going to be slower than a traditional “hand crafted” evaluation. The key question is, can it be performed efficiently enough that the speed loss is offset by the gain in evaluation precision? Enter “efficiently updatable neural networks (NNUE).”

To begin the journey towards NNUE, I first had to learn about neural networks. My plan was to learn about neural networks in general, and only then worry about the “efficiently updatable” part, which is specific to computer chess. To that end I’ve spent the last several months reading, taking some online courses, and even writing my own simple machine learning library. Though I’m still learning and experimenting with neural nets in general, I’m at the point where I’m thinking about how to apply them to computer chess and have begun that work.

My path towards NNUE looks like this:

  • Learning the fundamentals (neural networks) without worrying about NNUE. I’ve learned a lot from Andrew Ng’s Deep Learning courses on Coursera. Michael Nielsen’s excellent (and free!) online book has also been a fantastic resource.
  • Write some code! The “hello world” of neural networks seems to be writing an image classifier, so I did that.
  • How would it look to train a network that replaces a traditional chess evaluator? This is the stage I’m at now, and the work is already underway. I’ve again decided not to worry about the “efficiently updatable” details here, though I have done some studying around NNUE. I’ll restrict this stage to fixed-depth searches, so I can concentrate on the mechanics of the network itself. How do I encode a chess position? How should the network be structured? Which hyperparameters seem to work better?
  • Finally, how do I do all this efficiently to minimize speed loss?

There are shorter paths than the DIY approach I’m taking. I could probably rip some code or use an existing library to get something working much faster, but I’m OK with the long road. To me, the journey is as important as the end result. So far it’s been fun.

Prophet 4.3 and chess4j 5.1 released

I’m happy to finally announce the next minor release of both of my chess programs! You can grab Windows and Linux binaries from the respective GitHub repos:

Prophet – https://github.com/jswaff/prophet

chess4j – https://github.com/jswaff/chess4j

If you need a Mac build for Prophet, please look here. (Thank you Darius!)

The focus of this release was to continue to improve the evaluation. Prophet 4.3 and chess4j 5.1 have the exact same change log:

  • Passed pawn by rank (was a single value)
  • Non-linear mobility (was a single value)
  • Knight outposts
  • Trapped bishop penalty

These changes are worth about +50 ELO in Prophet (which I expect will bring it very close to the 2500 mark on the CCRL Blitz List). I also attempted a “supported rook” term (a bonus for a rook on an open file that is connected to another rook), but surprisingly it actually cost a few ELO. It seems like that should work, though, so I’ve left the code in place but commented out.

I had planned on doing some pawn and basic endgame work in this line, and perhaps I still will, but right now I feel the time is right to begin work on neural networks. I’m pausing development for a while to study the literature. Hopefully by spring I’m ready to begin the implementation.

Knight Outposts

Prophet and chess4j now understand knight outposts. An outpost, as implemented in Prophet, is a square that cannot be attacked by an enemy pawn. Putting a knight on an outpost can be a strong advantage, particularly if that knight is supported by a friendly pawn.

In the following diagram, the knight on D4 is on an outpost square, but the knight on E4 is not since it may be run off by the F7 pawn at some point.

The bonus (or penalty) given for an outpost varies by square. An additional bonus is given if the outpost is supported, such as the knight on the D4 square above. The “supported” bonus also varies by square. This is possibly overkill, but with an auto-tuner, I reasoned, the more knobs and dials it has to minimize error the better. (Or at least, it can’t hurt as long as we guard against over-fitting.)
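
To make the definition concrete, here is a minimal bitboard sketch of the detection logic. This is my own illustration, not Prophet’s actual code; the class and method names are made up.

public final class Outposts {

    // Square indexing: a1=0 ... h8=63.
    private static long bit(int sq) { return 1L << sq; }

    // All squares on the two adjacent files ahead of 'sq' from white's view,
    // i.e. every square from which a black pawn could ever attack 'sq'.
    private static long whiteAttackSpan(int sq) {
        int file = sq % 8, rank = sq / 8;
        long mask = 0L;
        for (int r = rank + 1; r < 8; r++) {
            if (file > 0) mask |= bit(r * 8 + file - 1);
            if (file < 7) mask |= bit(r * 8 + file + 1);
        }
        return mask;
    }

    // An outpost for white: no black pawn can ever attack the square.
    public static boolean isWhiteOutpost(int sq, long blackPawns) {
        return (whiteAttackSpan(sq) & blackPawns) == 0;
    }

    // The outpost is "supported" if a white pawn currently defends the square.
    public static boolean isSupported(int sq, long whitePawns) {
        int file = sq % 8, rank = sq / 8;
        long defenders = 0L;
        if (rank > 0) {
            if (file > 0) defenders |= bit((rank - 1) * 8 + file - 1);
            if (file < 7) defenders |= bit((rank - 1) * 8 + file + 1);
        }
        return (defenders & whitePawns) != 0;
    }
}

In a real engine the attack spans would of course be precomputed masks rather than loops, and the per-square bonuses would come from 64-entry tables.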

As expected this feature isn’t a huge gain in terms of ELO, but it did net a few points. It also puts the latest development version at +50 ELO over Prophet 4.2, which was my goal before doing a new release. First, though, I’m going to test a couple more terms, both expected to be minor gains at most; after that I’m going to switch gears, so it’d be good to clear them from the board. Those terms are “trapped bishop” and “supported rook on open file.”

The Prophet and The Gibbon

Graham Banks recently ran a blitz tournament (40 moves/16 minutes) he titled ‘The Prophet and The Gibbon’ between 16 engines, including Prophet 4.2.

Final Standings

21.5 – Prophet 4.2 64-bit
21.0 – SoFCheck 0.9.1-beta 64-bit
18.0 – Gibbon 2.69a 64-bit
18.0 – Isa 2.0.83 64-bit
17.0 – Queen 4.03
16.5 – Fornax 3.0 64-bit
16.5 – Barbarossa 0.6.0 64-bit
14.5 – Horizon 4.4
13.0 – Jazz 840 64-bit
13.0 – Sage 3.53
13.0 – EveAnn 1.72
13.0 – CeeChess 1.3.2 64-bit
12.5 – Napoleon 1.8 64-bit
12.0 – StockNemo 3.0.0.2 64-bit
11.0 – FireFly 2.7.2 64-bit
9.5 – Ares GB 1.1 64-bit

Woo hoo!


The complete tournament pgn (zipped) can be downloaded here:
http://kirill-kryukov.com/chess/discuss … p?id=51143

Passed pawns and Non-linear Mobility

Since I released Prophet 4.2 I’ve made a couple of additional evaluation changes:

  1. The passed pawn bonus has been made more granular. Where it used to be a simple bonus for a passed pawn, it now varies depending on the pawn’s rank. 40,000 bullet games say that change was worth about 14 ELO.
  2. Bishop and queen mobility have been made non-linear. This change was inspired by Erik Madsen’s MadChess blog – https://www.madchess.net/2014/12/16/madchess-2-0-beta-build-29-piece-mobility/ . The idea is to encourage piece development. I had originally plugged Erik’s values in verbatim, but they didn’t mesh well with existing weights and testing showed they weakened the program. After running the auto-tuner, this change brought in an additional 22 ELO. (A sketch of both ideas follows this list.)
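
Here’s a sketch of the shape of both changes. The numbers are made up for illustration; the real weights come out of the auto-tuner.

public final class GranularTerms {

    // Passed pawn bonus indexed by rank (white's perspective, rank 0 = first rank).
    // Previously this was a single value regardless of rank.
    private static final int[] PASSED_PAWN_BY_RANK = { 0, 10, 15, 25, 40, 65, 100, 0 };

    // Non-linear bishop mobility, indexed by the number of reachable squares
    // (0..13). The increments shrink as the count grows, so the first few
    // squares of freedom are worth the most - which rewards development.
    private static final int[] BISHOP_MOBILITY = {
        -25, -11, -6, -1, 3, 6, 9, 12, 14, 17, 19, 20, 22, 23
    };

    static int passedPawnBonus(int rank) { return PASSED_PAWN_BY_RANK[rank]; }

    static int bishopMobility(int reachableSquares) { return BISHOP_MOBILITY[reachableSquares]; }
}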

In my first attempt at running the auto-tuner, I just started with the previously tuned weights, plus Erik’s values for bishop and queen mobility, but the tuner couldn’t seem to find any improvements. The error bounced around a little, going up and down, but never made any real progress. I eventually decided to do a complete reset: I set the piece values to the traditional 1/3/3/5/9 values and everything else to 0, then re-tuned and validated with some bullet games. The learning curve:

Fresh off the heels of these improvements, Prophet played in an informal online engine blitz tourney today. Unfortunately it was a pretty rough outing, placing just 16th of 20 with 2.5 points out of 9. It was a very strong field, though. Even the 10th place finisher is nearly 3000 ELO on CCRL’s 40/2 list.

:Tourney Players: Round 9 of 9 
:
:     Name              Rating Score Perfrm Upset  Results 
:     ----------------- ------ ----- ------ ------ ------- 
:  1 +LczTinker         [2971]  6.5  [2937] [   0] +07w =05w =06b =03b +02w =04b =09w +11w +12b 
:  2 +NightmareX        [2909]  6.5  [2939] [   0] +12w =06w +09b =04w -01b =05w +07b +08w +11b 
:  3 +ChessSystemTalX   [2900]  6.5  [2898] [  35] +10w +09w =08b =01w =06b +07w =05b =04w +13b 
:  4 +RubiChess         [2875]  6.5  [2947] [  77] +13w +11w =05b =02b +08w =01w =06b =03b +09w 
:  5 +ArasanX           [2859]  6.0  [2836] [ 110] +14w =01b =04w =08b +12w =02b =03w =09b +16w 
:  6 +WaspX             [2830]  6.0  [2679] [ 181] +15w =02b =01w +17b =03w =08b =04w +13w =10b 
:  7 +TheBaron          [2569]  5.5  [2457] [   3] -01b +14w +16w =11b +17w -03b -02w +18b +15b 
:  8 +Goldbar           [2861]  5.0  [2533] [  19] +16w +17b =03w =05w -04b =06w =13b -02b +18w
:  9 +Marvin            [2752]  5.0  [2683] [ 162] +18w -03b -02w +16b +11w +12b =01b =05w -04b 
: 10 +Nalwald           [2500]  5.0  [2325] [ 165] -03b +18w =15b -12b -13w +17b +16w +14b =06w 
: 11 +atomGoldbar       [2575]  4.5  [2480] [   0] +20w -04b +13w =07w -09b +16b +14w -01b -02w 
: 12 +WaDuuttie         [2567]  4.5  [2411] [   0] -02b +15w =14b +10w -05b -09w +18b +17w -01w 
: 13 +rpiArminius       [2272]  4.0  [2425] [ 522] -04b +20w -11b =14w +10b +15w =08w -06b -03w 
: 14 +atomFloyd         [2242]  4.0  [2267] [ 177] -05b -07b =12w =13b +15w +18w -11b -10w +17b 
: 15 +Skiull            [1966]  3.0  [2170] [ 410] -06b -12b =10w +18w -14b -13b +17w =16b -07w 
: 16 -Prophet           [2253]  2.5  [2325] [ 351] -08b +19w -07b -09w +18b -11w -10b =15w -05b
: 17 -Skipper           [1662]  2.0  [2219] [1120] +19b -08w +18b -06w -07b -10w -15b -12b -14w 
: 18 +atomSargon        [1840]  0.0  [1974] [   0] -09b -10b -17w -15b -16w -14b -12w -07w -08b 
: 19 +atomNightmare     [forf]  0.0  [1557] [   0] -17w -16b 
: 20 +POS               [forf]  0.0  [2023] [   0] -11b -13b 
:
:     Average Rating    2474.2 

Next up: I’m going to continue with the mobility theme a little longer by testing rook mobility, then knight outposts, trapped bishops, and connected rooks on open files. I don’t expect any of those to be big points by themselves, but cumulatively they might be worth a bit.

Automated Parameter Tuning in chess4j

INTRODUCTION

I’ve long believed that one of the biggest potential areas of improvement in my chess programs was the tuning of the evaluation parameters. chess4j‘s evaluation function is rather simple and there are gaps in its knowledge, but the issue I’m talking about here is the values assigned to the parameters it does have, relative to each other. Automated tuning addresses that problem.

Automated tuning in computer chess is not a new concept. In 2014, Peter Österlund (author of the chess program Texel) wrote about his approach of using logistic regression to optimize evaluation parameters. This approach has since been dubbed the Texel Tuning Method. While Peter does get credit for popularizing the idea, it goes back even further, at least as far back as 2009. I won’t rigorously describe the algorithm here as that’s already done in the link referenced above, but I will try to give some intuition without getting bogged down in the math.

  1. Get a bunch of labeled positions.  By “labeled,” I mean – each position is associated with an outcome.  It’s convenient to think of the outcome from the perspective of the white player: win=1, draw=0.5, loss=0.
  2. Given a chess position, formulate a hypothesis.  The hypothesis should be a value in the closed interval [0, 1].   From the perspective of the white player, the hypothesis is the probability of a win.
  3. The error (for a set of evaluation parameters) is a measure of the difference between the actual outcome and the hypothesis. Measure the error for each position in your test set and take the average; this is your initial error.
  4. Find a set of evaluation parameters that minimizes the error.

Imagine the error (the cumulative difference between your hypothesis and the actual outcome) as a landscape with hills and valleys.  Some hills may be taller than others, and some valleys lower than others.  We want to settle into the lowest spot in a valley.  Ideally, in the LOWEST valley, but that’s really hard.  We’ll settle for ANY valley.

There are various approaches to go about minimizing the error with varying degrees of complexity.  I’ve experimented with a local search and gradient descent, which I’ll discuss below.

I don’t mean to fully explain how logistic regression or gradient descent works, but I will share a few details about my experience and my implementation.  Most of the code can be found in the tuner package of chess4j.

DATA

Good data is crucial to a successful outcome. This point can’t be overstated: the end result is limited by the quality of the data you’re training with. In fact, the data is even more important than the algorithm! If you think about it, this should be obvious.

Building a good data set isn’t an overly difficult task, but it is a tedious and time consuming one. Fortunately, others have already done the work and have been generous enough to share it with the community. The well known Zurichess set is a common one, and was my starting point. It contains around 725,000 labeled “quiet” positions. Andrew Grant (author of Ethereal) has shared a few larger data sets containing 10 million or so positions each:

https://drive.google.com/file/d/1Y3haCS … sp=sharing
https://drive.google.com/file/d/1PdU6oL … sp=sharing
https://drive.google.com/file/d/1m5fc4d … sp=sharing

I did have slightly better results using Andrew’s data.  I don’t know if that’s because of the number of positions in the set, or the quality of the data itself, or possibly a combination of both.

In his paper Evaluation & Tuning in Chess Engines, Andrew describes his approach to generating datasets.  (I’m assuming the same methodology was used in the shared ones.)   He sampled positions from self play games, then performed high depth searches from each of those positions to produce a principal variation.  The terminal position of the principal variation was the position added to the dataset.  A major benefit of this approach is that the evaluation function can be applied directly, without first running a quiescence search.  This is a major time saver.

HYPOTHESIS

See: https://github.com/jswaff/chess4j/blob/v5.0/chess4j-java/src/main/java/com/jamesswafford/chess4j/tuner/Hypothesis.java

The basic idea behind almost any learning algorithm is to minimize the error between “truth” and what you think will happen (your “hypothesis”).  In the case of a chess game, the outcome is one of three things: white wins, black wins (white loses), or a draw.  Let’s give our outcome a name, say “y”.  Let y=1 be the case where white wins, y=0 be the case where white loses, and y=0.5 be the case where the game is drawn.

Now, our hypothesis, call it “h(x)” where x is a vector of our evaluation parameter values, is just the probability of a white win.  We can map the output of the evaluation function to a hypothesis using a standard sigmoid function.  If the eval says 0, our hypothesis is 0.5 (draw).  The larger the eval score in the positive direction, the closer the hypothesis moves towards 1 (white wins).  As the eval gets more negative, our hypothesis moves towards 0.

Note: Texel did not use the standard logistic sigmoid function, but a more general sigmoid with a scaling constant K, which Peter chose to minimize the error. Texel used a K of -1.13, so I just did the same. It’s possible that with some tuning of K the tuner might converge a little faster, but I haven’t attempted this, nor do I plan to.
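
In code, the mapping is only a couple of lines. This is a sketch of the idea rather than a copy of the linked Hypothesis.java; the sign convention here is chosen so that K = -1.13 pushes a positive eval above 0.5.

public final class HypothesisSketch {

    // Scaling constant, borrowed from Texel.
    private static final double K = -1.13;

    // Probability of a white win given an eval score in centipawns (white's
    // perspective). eval = 0 maps to 0.5; eval = +400 maps to roughly 0.93.
    public static double predict(double evalCp) {
        return 1.0 / (1.0 + Math.pow(10.0, K * evalCp / 400.0));
    }
}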

COST FUNCTION

See: https://github.com/jswaff/chess4j/blob/v5.0/chess4j-java/src/main/java/com/jamesswafford/chess4j/tuner/CostFunction.java

The goal of training is to minimize error, which is where the cost function comes in. We want our cost function to produce large values when our predictions are wrong, and smaller and smaller values as our predictions get closer to the actual outcome. The simplest way to do this is to take the absolute value of the difference: |y-h|. I used a common variation of that: (y-h)^2. Squaring has the effect of “amplifying” errors.

The overall cost (error) associated with an evaluation parameter vector is simply the average cost over all the training data.
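
In code form (a minimal sketch, not the linked CostFunction.java verbatim):

public final class CostSketch {

    // outcomes: 1.0 = white win, 0.5 = draw, 0.0 = white loss.
    // hypotheses: predicted win probabilities for the same positions.
    public static double averageCost(double[] outcomes, double[] hypotheses) {
        double sum = 0.0;
        for (int i = 0; i < outcomes.length; i++) {
            double diff = outcomes[i] - hypotheses[i];
            sum += diff * diff; // squaring amplifies large errors
        }
        return sum / outcomes.length;
    }
}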

MINIMIZING ERROR WITH LOCAL SEARCH

I used local search in my initial attempts. The idea is very simple: adjust the evaluation parameters, one at a time, each time re-measuring the error. Please reference the pseudo code in the Texel Tuning Wiki article for the details, but this is essentially a very slow walk (more like a stumble) down a hill. The stumble continues until you are at the bottom of the hill.
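
In Java the loop looks roughly like this (a sketch of the wiki’s pseudo code; computeError is assumed to re-measure the error over the full training set):

// Nudge each parameter by +/-1, keep whatever lowers the error, and repeat
// until a full pass over the parameters makes no progress.
static int[] localSearch(int[] params, java.util.function.ToDoubleFunction<int[]> computeError) {
    double bestErr = computeError.applyAsDouble(params);
    boolean improved = true;
    while (improved) {
        improved = false;
        for (int i = 0; i < params.length; i++) {
            for (int delta : new int[] { 1, -1 }) {
                params[i] += delta;
                double err = computeError.applyAsDouble(params);
                if (err < bestErr) {       // keep the nudge
                    bestErr = err;
                    improved = true;
                    break;
                }
                params[i] -= delta;        // revert and try the other direction
            }
        }
    }
    return params;
}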

Pros: This approach is guaranteed to find a local minimum.  Indeed, I used this approach to derive the eval parameters in Prophet 4.1, which was a 100 elo improvement over Prophet 4.0.  Another major pro is that it’s easy to implement.  It will likely require only minimal changes to your evaluation function.

Cons: it is a naive approach, and consequently VERY slow.  As in, hours or even days.

Using a local search is a great start, but if you are actively developing and would like to be able to iterate quickly, you’ll likely want a faster approach.  That is where gradient descent comes in.

MINIMIZING ERROR WITH GRADIENT DESCENT

See: https://github.com/jswaff/chess4j/blob/v5.0/chess4j-java/src/main/java/com/jamesswafford/chess4j/tuner/Gradient.java

Gradient descent is a commonly used machine learning algorithm that minimizes some (differentiable) function by moving in the direction of steepest descent. Where a local search takes a drunken stumble down the hill, gradient descent takes a more direct path. Where local search takes hours (or days!), gradient descent takes minutes. Unfortunately, this “steeper walk” also means a steeper learning curve: understanding gradient descent does require a little calculus and linear algebra.

The biggest challenge for me was to start thinking about the evaluation differently than I had been for the last 20+ years.  You’ll have to think about the evaluation in terms of features and weights.   Your final evaluation is a sum of products:

eval = F0*W0 + F1*W1 + F2*W2 + ... + FN*WN

The beauty of gradient descent is that it adjusts the individual weights according to how active the feature was.  Consider a position that your program evaluates incorrectly.  If the position didn’t have any queens on the board, why adjust any of the terms related to queens?

This will almost surely require refactoring your evaluation function. In chess4j, I augmented the standard evaluation functions with “feature extraction” functions; see EvalRook for an example. Note that tapered evaluations make this slightly more complex: instead of a feature being “on or off,” you’ll have to assign it a value indicating how active it is according to the phase of the game.
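
Here is a sketch of what a single batch update looks like for this model. It’s my illustration, not the linked Gradient.java; the constant factor from the sigmoid derivative is folded into the learning rate alpha.

// One gradient-descent step for eval = F0*W0 + ... + FN*WN with a sigmoid hypothesis.
static void gradientStep(double[] weights, double[][] features, double[] outcomes, double alpha) {
    final double K = -1.13;
    int n = features.length, m = weights.length;
    double[] grad = new double[m];
    for (int i = 0; i < n; i++) {
        double eval = 0.0;
        for (int j = 0; j < m; j++) eval += features[i][j] * weights[j];
        double h = 1.0 / (1.0 + Math.pow(10.0, K * eval / 400.0)); // hypothesis
        double err = h - outcomes[i];
        // h*(1-h) is the shape of the sigmoid derivative. Features that are
        // inactive (F == 0) contribute nothing, so their weights don't move.
        for (int j = 0; j < m; j++) grad[j] += err * h * (1.0 - h) * features[i][j];
    }
    for (int j = 0; j < m; j++) weights[j] -= alpha * grad[j] / n;
}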

TRAINING

See: https://github.com/jswaff/chess4j/blob/v5.0/chess4j-java/src/main/java/com/jamesswafford/chess4j/tuner/LogisticRegressionTuner.java

For each training session, the test data was shuffled and then split into two subsets. The first subset contained 80% of the records and was used for the actual training; the other 20% was used for validation (technically the “test set”). Ideally, the training error would decrease after every iteration, but with gradient descent (particularly stochastic gradient descent) that may not be the case. The important point is that the error trends downward (which is tough to accept if you’re a perfectionist).
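
A training session, then, is roughly the following. LabeledPosition, descendOnBatch and averageError are hypothetical stand-ins, not chess4j’s actual types.

import java.util.Collections;
import java.util.List;
import java.util.Random;

static void train(List<LabeledPosition> positions, double[] weights, int iterations, int batchSize) {
    Collections.shuffle(positions);
    int split = (int) (positions.size() * 0.8);   // 80% train / 20% validation
    List<LabeledPosition> trainSet = positions.subList(0, split);
    List<LabeledPosition> valSet = positions.subList(split, positions.size());

    Random rng = new Random();
    for (int iter = 0; iter < iterations; iter++) {
        // Mini-batch: a random slice of the training set.
        int start = rng.nextInt(trainSet.size() - batchSize);
        descendOnBatch(weights, trainSet.subList(start, start + batchSize));

        // Track both error curves; stop when the validation error bottoms out.
        System.out.printf("%d train=%.6f val=%.6f%n",
                iter, averageError(weights, trainSet), averageError(weights, valSet));
    }
}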

Learning curves are tremendously useful for knowing when to STOP training. If you go too far, you risk a serious problem: overfitting. This is when your model fits the training data “too well” and fails to generalize. In the sample learning curve below I plotted the error curve for the training data against the error curve for the test (validation) data. In this case the valley around iteration 800 is probably the best place to stop. Things could get interesting if the two curves began to diverge, but with a large enough data set that shouldn’t be an issue.

After every training session I would validate the results by running a gauntlet of 10,000 bullet games (1 second + 0.5 second increment) against a collection of 8 sparring partners.  Periodically I would also run self play matches.

HYPER PARAMETERS

There are several hyper parameters that influence the outcome of a training session. Finding good values for each took a little trial and error. (It is actually possible to automate this using a validation set, but it didn’t seem worth it here.)

  • Alpha (the learning rate). Think of this as the size of your step as you walk down the hill towards the bottom of the valley. A small alpha will take a very long time to converge. Too big, and you may “overstep” the bottom. I ended up with alpha=25.
  • Batch size. With millions of test positions, it’s not computationally feasible to train against all positions at once. A typical approach is to choose a small subset, running many iterations with a different subset each time. I tested batch sizes from 10 up to 10,000. The learning curves seemed similar, just noisier with smaller batch sizes. I settled on 10,000.
  • Lambda. This is a decay parameter. The idea is to taper down the size of your steps down the hill over time. Anything I tried seemed worse than lambda=0, so I eventually disabled it altogether.
  • Regularization. The idea behind regularization is to penalize the size of the parameters. That is, not only do we want the eval weights to be in the proper ratios to each other, we want the smallest values that satisfy those ratios. In practice I never observed a difference, so I eventually disabled it.

RESULTS

This exercise brought in around 120 ELO for Prophet.  The difference between Prophet 4.0 and 4.1 was around 100 ELO and can be fully attributed to optimizing evaluation parameters.  The difference between Prophet 4.1 and 4.2 is around 50 ELO, but some of that is due to the addition of pawn hashing and other eval terms.

I find it instructive to visualize the piece square tables as heat maps.  The following are the pawn piece square tables before and after training.  Note in 4.0 there was only a single pawn table, but by 4.2 I had split those into middle and endgame tables.  A couple of things that really stand out to me are how undervalued a pawn on the 7th rank was, and just how much the danger of pushing a pawn around the king was underestimated.

Pawn PST before training

Pawn PST after training (middle game)

Pawn PST after training (endgame)

FUTURE WORK

In the near term, the focus will be on adding some additional evaluation terms, particularly for pawns.  The nice thing about having an auto-tuning capability is that I can focus on correctly implementing a term without worrying too much about how to assign a value to the term.  Just rerun the tuner and verify with some games.

Longer term (perhaps Spring 2023) I plan to delve into neural networks and my own implementation of NNUE.

Prophet 4.2 and chess4j 5.0 are released

I’m happy to announce updates to both chess engines. Prophet 4.2 is approximately 50 elo stronger than 4.1, and 150 elo stronger than 4.0. (I missed a release announcement or two while this development blog was offline.) The most significant change, and the reason the chess4j major version number has been incremented, is that chess4j now includes an auto tuner! The tuner uses logistic regression with gradient descent to optimize evaluation terms; I’ll write more detail about that in a separate post. Those optimized weights have been added into Prophet, so it benefits from that work as well. Tapered evaluation has also been fully implemented, which added a few elo. I say “fully” because the king evaluation was already tapered, but now both programs evaluate the position with both a middle game and an endgame score, and weight them based on the material on the board. Some concept of mobility has been added as well: a simple count of available squares for both bishops and queens.
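
For illustration, the tapered blend amounts to a few lines. This is a generic sketch with made-up phase weights, not either engine’s actual code.

public final class TaperSketch {

    // 4 knights + 4 bishops at 1 each, 4 rooks at 2, 2 queens at 4.
    static final int MAX_PHASE = 24;

    // Phase from remaining material: 24 = all pieces on the board, 0 = pawn endgame.
    static int phase(int knights, int bishops, int rooks, int queens) {
        return Math.min(MAX_PHASE, knights + bishops + 2 * rooks + 4 * queens);
    }

    // Weight the middle game and endgame scores by the phase.
    static int taper(int mgScore, int egScore, int phase) {
        return (mgScore * phase + egScore * (MAX_PHASE - phase)) / MAX_PHASE;
    }
}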

Here is how Prophet 4.2 stacks up against its current sparring partners in 1+0.5 games:

Rank  Name              Elo    Games  Score  Draws
   1  tantabus-2.0.0      82   46250    63%    22%
   2  arasan-13.4         67   46250    60%    21%
   3  barbarossa-0.6.0    58   46250    59%    20%
   4  qapla-0.1.1         31   46250    55%    24%
   5  prophet-4.2         24   40000    53%    24%
   6  loki-3.5            23   46250    54%    23%
   7  myrddin-0.88        -2   46250    50%    24%
   8  prophet-4.1        -36   40000    44%    23%
   9  tjchess-1.3        -83   46250    37%    21%
  10  jazz-840          -134   46250    30%    19%
  11  prophet-4.0       -141   10000    30%    18%

PVS – take 2

Some time back I tried implementing a Principal Variation Search, but as I wrote about in my post PVS – Another Fast Fail, the results were not good. At the time I concluded that if PVS is not a win, the cost of the re-searches must be outweighing the nodes saved by the zero-width searches. For that to be the case, the first move searched must too often not be the best move, which points to move ordering.

Since then move ordering has certainly improved, as documented in this post on Move Ordering. So, I decided to give PVS another try. In my first attempt, it appeared to be another loss. Then I decided not to do PVS at the root node, and now it appears to be a very small win.
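
For reference, the shape of the technique: the first move is searched with a full window, and the rest get a cheap zero-width probe, with a re-search only when the probe fails high. This is a generic hedged sketch (Board, Move and the helpers are stand-ins, not Prophet’s actual code), and per the above it would be applied everywhere except the root.

int pvs(Board board, int depth, int alpha, int beta) {
    if (depth == 0) return evaluate(board);

    boolean first = true;
    for (Move move : generateMoves(board)) {
        board.make(move);
        int score;
        if (first) {
            score = -pvs(board, depth - 1, -beta, -alpha);  // full window
            first = false;
        } else {
            // Zero-width probe: a cheap proof that this move can't beat alpha.
            score = -pvs(board, depth - 1, -alpha - 1, -alpha);
            if (score > alpha && score < beta) {
                // The probe failed high, so re-search with the full window.
                score = -pvs(board, depth - 1, -beta, -alpha);
            }
        }
        board.unmake(move);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    return alpha;
}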

A win is a win, so I’m merging the changes in, but I think there is more to do here. My suspicion is that, as move ordering improves, the benefits of PVS will increase. The most obvious way to improve move ordering is to add a depth-preferred hash table (the current strategy is a very naive “always replace”).

It seems like PVS at the root should work as well, if the program can reliably predict the best move often enough. I know a lot of programs put extra effort into ordering the moves at the root. I remember reading that Bob Hyatt’s Crafty does a quiescence search at the root. So, this is on the backlog as well, and once complete I will revisit the idea of PVS at the root.

For now, it is on to the next thing – Late Move Reductions. I’m hopeful that will yield a significant ELO increase, perhaps finally putting P4 on par with P3.

Small Improvement to “bad captures”

In my recent post on move ordering, I identified a potential area of improvement to the criteria for deciding if a capture is “good” or “bad.” As I wrote in that post, a capture is good if:

  1. It is a promotion (technically even non-capturing promotions are included)
  2. The value of the captured piece is greater than the value of the capturing piece
  3. The Static Exchange Evaluator (SEE) score is non-negative.

The issue is with knights and bishops. They are roughly the same value (which one is more valuable really depends on the position), but in Prophet the bishop has a slightly higher value: a knight has a material value equal to 3 pawns, while a bishop is worth 3.2 pawns. The consequence is that a simple Bishop x Knight capture is categorized as “bad” and not tried until all non-captures have been tried.

I don’t have the link handy, but I read an older post on talkchess.com where Tord Romstad, the author of Glaurung (the precursor to Stockfish), mentioned that he used different piece values for move ordering than he did in the evaluation: 1, 3, 3, 5 and 10. With those values, Bishop x Knight captures as well as Knight x Bishop captures are both categorized as “good.” Also, by giving the queen a value of 10, trading two rooks for a queen is considered equal by the SEE, while giving a queen plus a pawn for two rooks has a negative score.
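
Putting it together, the classification might look like the sketch below. The ordering values are used both for the quick comparison and inside the SEE; Move and see() are hypothetical stand-ins, not Prophet’s actual code.

// Ordering-only piece values: P, N, B, R, Q, K.
static final int[] ORDER_VAL = { 1, 3, 3, 5, 10, 99 };

static boolean isGoodCapture(Move m) {
    if (m.isPromotion()) return true;                               // criterion 1
    if (ORDER_VAL[m.capturedPiece()] > ORDER_VAL[m.movingPiece()])
        return true;                                                // criterion 2
    // Criterion 3: with N == B == 3, a Bishop x Knight capture that gets
    // recaptured now scores SEE == 0 (non-negative), so it's "good".
    return see(m) >= 0;
}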

Sure enough, that simple change seems to be worth about 6 ELO.

Pruning “bad” captures in quiescence

As suspected, the change to move ordering separating good captures from bad captures has already paid off. Moving bad captures to the bottom of the move order list made it trivial to skip bad captures in the quiescence search altogether. This is an idea I first read about in a discussion on r.g.c.c. between Bob Hyatt, Feng-Hsiung Hsu and others here: https://groups.google.com/g/rec.games.chess.computer/c/H6XjY2L13eQ . Hyatt claimed a small improvement, though Hsu was skeptical. Time has proven Hyatt correct; I believe this is something most strong programs do. In any event, it seems to be worth about 10 ELO for Prophet4, so it’s a keeper.
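
A hedged sketch of the resulting quiescence loop (the helpers are stand-ins, not Prophet’s code). Because bad captures are sorted to the bottom of the list, the loop can simply stop when it reaches the first one:

int quiescence(Board board, int alpha, int beta) {
    int standPat = evaluate(board);
    if (standPat >= beta) return beta;
    if (standPat > alpha) alpha = standPat;

    for (Move move : generateCaptures(board)) {       // sorted, bad captures last
        if (!isGoodCapture(move)) break;              // prune all bad captures
        board.make(move);
        int score = -quiescence(board, -beta, -alpha);
        board.unmake(move);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    return alpha;
}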