Category Archives: chess4j

Category for posts pertaining to the chess4j project.

PVS – Another Fast Fail

After the failure with aspiration windows I decided to implement PVS (Principal Variation Search). The idea here is that, with proper move ordering, the first move is usually the best. That being the case, it makes sense to search all of the moves after the first with a minimal alpha/beta window, the goal being to prove or disprove that the first move was best rather than to get a usable score. If the first move is in fact best, you’ve saved some time. If not, you have to re-search with the “regular” alpha/beta window. A sketch of the idea is below.
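
Here is a minimal sketch of the technique in Java. This is not chess4j’s actual code; the Board, Move, evaluate, and generateMoves names are hypothetical stand-ins.

    // PVS: search the first move with the full (alpha, beta) window; search
    // the rest with a zero-width window to cheaply prove they are no better.
    int pvs(Board board, int depth, int alpha, int beta) {
        if (depth == 0) return evaluate(board);      // hypothetical evaluator
        boolean first = true;
        for (Move move : generateMoves(board)) {     // assumed ordered best-first
            board.applyMove(move);
            int score;
            if (first) {
                score = -pvs(board, depth - 1, -beta, -alpha);
                first = false;
            } else {
                // zero-width window: just prove the move can't beat alpha
                score = -pvs(board, depth - 1, -alpha - 1, -alpha);
                if (score > alpha && score < beta) {
                    // it can beat alpha; re-search with the full window
                    score = -pvs(board, depth - 1, -beta, -alpha);
                }
            }
            board.undoMove(move);
            if (score >= beta) return beta;          // fail-hard cutoff
            if (score > alpha) alpha = score;
        }
        return alpha;
    }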

Wins   Losses   Draws   Pct     Elo     Error
351    487      598     45.3%   -33.0   13.7
533    670      796     46.6%   -23.8   11.8
With PVS vs without, 5+0.25

This result is a little surprising and depressing. I’m not going to give up on PVS, but clearly the time isn’t quite right for it. It leads me to believe I should take a step back and look at move ordering. If PVS is a loss, the “first move is best” assertion may be wrong more often than we’d like, meaning move ordering is not where it should be. There are some tasks on my board related to move ordering, such as deferring losing captures until after non-captures and implementing a static exchange evaluator to aid in ordering the quiescence moves. I’ll leave the branch open and revisit this when move ordering has improved.

Aspiration Windows – a Fast Fail

Earlier this week I added aspiration windows to chess4j. It’s a very simple concept. In the iterative deepening loop, the depth 1 iteration is processed as usual, with an alpha bound of “negative infinity” and a beta bound of “infinity.” On subsequent iterations, the alpha bound is initialized to the previous score minus some margin (1/3 pawn in this case), and the beta bound is initialized to the previous score plus some margin (again, 1/3 pawn). If all goes well and the score is within those bounds, the search should finish faster, as there are more cutoffs. However, if the real score is not within that window, a re-search must be done. A sketch of the loop is below.
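
Here is a minimal sketch of that loop in Java. Again, not chess4j’s actual code: Search, Board, and maxDepth are hypothetical, and THIRD_PAWN assumes an evaluation scale where 100 = one pawn.

    static final int INFINITY = 1_000_000;
    static final int THIRD_PAWN = 33;  // ~1/3 pawn on a 100-per-pawn scale

    int iterate(Search search, Board board, int maxDepth) {
        int alpha = -INFINITY, beta = INFINITY;  // depth 1 is wide open
        int score = 0;
        for (int depth = 1; depth <= maxDepth; depth++) {
            score = search.search(board, depth, alpha, beta);
            // real score outside the window? re-search wide open
            if (score <= alpha || score >= beta) {
                score = search.search(board, depth, -INFINITY, INFINITY);
            }
            // center the next iteration's window on this iteration's score
            alpha = score - THIRD_PAWN;
            beta = score + THIRD_PAWN;
        }
        return score;
    }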

My intention was to start with the 1/3 pawn windows, falling back to a “wide open” search on failure. Once testing confirmed that was a win, I could play with the margins or try incrementally expanding the window. Unfortunately, this was a pretty fast fail.

My first run was terminated after 2002 games, and the second after 2965 games.

Wins   Losses   Draws   Pct     Elo     Error
549    689      763     46.5%   -24.5   12.0
886    1026     1053    47.6%   -16.4   10.0
With aspiration windows vs. without, 5+0.25

I did expect this to be a win, so I’m a little puzzled by the result, but that’s why we test. Perhaps when the search matures a little more and can hit deeper depths, the scores between iterations will stabilize? I’ll just queue this up to revisit later on.

On the plus side, SPRT terminated what would have been a 20,000 game match after 2-3k games. That’s a big time saver.

On to PVS.

Stopping at half time – and on making small changes

Great things are done by a series of small things brought together. — Vincent Van Gogh.

The last few changes (adding null move pruning, hashing, and a quiescence search) have had major impacts on playing strength, which was expected. But inevitably, not all changes can be big winners, or even big losers. Some will have a very minor effect, for good or ill. The change I deal with in this post is one of those. The idea being tested: should the search iterator be stopped early if, after completing an iteration, more than half the allotted time has been used? The rationale is that, due to the exponential growth of search trees, if you have less than half your time left when starting a new iteration, you’re unlikely to complete it. Perhaps it would be better to save the time for future use. (This would not apply to games with a fixed time per move.)

The idea is so simple I hesitate to even show the (pseudo) code below:

        boolean stopSearching = false;
        do {
            ++depth;
            // DO SEARCH
            score = search.search(board, depth /* , ... */);
            if (search.isStopped()) {
                break;
            }
            long elapsed = System.currentTimeMillis() - startTime;
            // .... OTHER STOP CONDITIONS ....
            // if we've used more than half our time, don't start a new iteration
            if (earlyExitOk && !skipTimeChecks && (elapsed > maxTimeMs / 2)) {
                stopSearching = true;
            }
        } while (!stopSearching);

Up until this point I’ve been running 2,000 game self-play matches to gauge whether a change is good or not. 2,000 games is enough to get an Elo difference with error bars of around 12-13 points, which has been “good enough.” So my starting point here was the same: two matches, running concurrently on my 2-core testing machine, each match being 2,000 games.

Wins   Losses   Draws   Pct     Elo     Error
634    589      777     51.1%   +7.82   11.90
634    614      752     50.5%   +3.47   12.02
Test 1 – 2 x 2000 game self play matches

Judging by the results of this test, it appears likely the change is a good one, albeit a minor one. The problem is the error bars. If the error bars are to be believed, it’s possible the change is actually neutral, or even a minor loss. The only way to tell would be to play more games – a lot more. People who have done the math state that to measure a 5 Elo change, you need something like 10,000 games. To measure a 2 Elo change requires 60,000 games! I decided to run two 20,000 game matches concurrently, for a combined 40,000 games. That should be enough to make a decision. The results of that experiment are below.

Wins   Losses   Draws   Pct     Elo      Error
6492   5782     7726    51.8%   +12.34   3.77
6334   6151     7515    50.5%   +3.18    3.80
Test 2 – 2 x 20,000 game self play matches

These results were very confusing. If you take the Elo for each match and use the error bars to create an interval, the two intervals don’t even overlap! I have enough of an appreciation of the statistics to know this is not impossible, but it seems extremely suspect. I ended up creating a post about it on a computer chess forum – http://talkchess.com/forum3/viewtopic.php?f=7&t=74685 . One poster, H.G. Muller, gave a possible explanation. Since I am not restarting the engines between matches (chess4j is a Java engine, and the JVM has a “warm up” cost that I’m avoiding by not restarting it), it’s possible that memory (cache) is being mapped in a way that is harmful to one process, and not restarting causes that condition to persist. I don’t know if that’s the case or not, but it does make some sense, and it raises the question of whether it really is OK to run two matches concurrently (or more generally, N matches on an N-core machine, especially if you don’t restart between matches). I decided to run another test, but just one match this time, not two.

Wins   Losses   Draws   Pct     Elo     Error
6179   6137     7684    50.1%   +0.73   3.77
Test 3 – 1 x 20,000 game self play match

My conclusion is that running two matches concurrently is not as stable as I originally thought. Restarting between matches could help, but again, for a JVM engine that’s not very appealing either. I could switch to testing first with Prophet, as it’s C and could be restarted between matches, but one of the points of having a Java engine was quick prototyping. It’s also possible that a machine with more cores would resolve the issue, if at least one core is left free. I’ll revisit this in the future. For now, it’s one match at a time.

As far as the effect of the change on strength, it appears to be worth a very small but positive amount of Elo. I think it’s a keeper, but just barely. But running tens of thousands of games (especially when I’m only able to use a single core!) is not tenable. This seemed like a good time to look into some statistical methods. Many engine authors use the Sequential Probability Ratio Test (SPRT). If all you want to know is “is this engine better than that engine?”, and not necessarily the Elo difference between the two, SPRT can give a faster result.

The starting point for SPRT is to form a hypothesis. If player PA is my new version, and player PB is the current best known version, my hypothesis is “PA is stronger than PB.” That is almost complete; to be testable, the statement needs a margin: “PA is stronger than PB by at least 5 Elo.”

You also need a “null hypothesis” – the hypothesis that there is no meaningful difference between the players. My null hypothesis is “PA is not stronger than PB by more than 1 Elo.”

Finally, since this is a statistical test, you need to determine your tolerance for error. There are two types of errors to consider. One is the likelihood of accepting a change that is not actually stronger (a type I error, or false positive). The other is the likelihood of rejecting a change that is actually stronger (a type II error, or false negative). I decided to accept a 5% likelihood of each.

Putting all that together:

H1: PA is stronger than PB by at least 5 Elo.

H0: PA is not stronger than PB by more than 1 Elo.

alpha: 0.05

beta: 0.05

Note that SPRT does not necessarily mean the match will terminate early. You can read about the math behind it at https://en.wikipedia.org/wiki/Sequential_probability_ratio_test , but the idea is that if either H1 or H0 is “accepted,” the match terminates. If neither is, it continues until the maximum number of games is reached. The bookkeeping is roughly sketched below.
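
In rough terms (this is the textbook SPRT bookkeeping, not necessarily exactly what Cutechess implements): after each game, a log-likelihood ratio is updated and compared against two thresholds derived from the error tolerances.

    \mathrm{LLR} = \sum_{\text{games}} \ln \frac{P(\text{result} \mid H_1)}{P(\text{result} \mid H_0)}

    a = \ln\frac{\beta}{1-\alpha}, \qquad b = \ln\frac{1-\beta}{\alpha}

    \mathrm{LLR} \le a \Rightarrow \text{accept } H_0, \qquad
    \mathrm{LLR} \ge b \Rightarrow \text{accept } H_1, \qquad
    \text{otherwise keep playing}

With alpha = beta = 0.05, both thresholds work out to about ±ln(19) ≈ ±2.94.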

As it turns out, the program I use to drive the testing process, Cutechess, has the capability to terminate a match using SPRT. I had to experiment with it a little bit; my first attempt ended in an 18,000 game match where H0 was accepted – not good! But I had some parameters flipped, so those results are invalid. I decided it would be more productive to experiment with a very large and obvious change, like disabling null move. The correct flags are:

-sprt elo0=1 elo1=5 alpha=0.05 beta=0.05
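
For context, that option slots into a cutechess-cli command line something like the one below. This is illustrative only; the engine commands, protocol, and time control are placeholders rather than my actual configuration.

    cutechess-cli \
        -engine cmd=./chess4j-new name=new \
        -engine cmd=./chess4j-old name=old \
        -each proto=xboard tc=5+0.25 \
        -games 20000 \
        -sprt elo0=1 elo1=5 alpha=0.05 beta=0.05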

Given the previous results, I am not going to repeat the experiment yet again to see if it would terminate early, as it’s very likely it wouldn’t and I’d end up losing a week of processor time. I will however be using SPRT going forward.

Since the change was a good one in chess4j I’ve ported it to Prophet4 as well, and for confirmation ran a 2,000 game P4 vs P3 match.

Wins   Losses   Draws   Pct     Elo      Error
233    1153     614     27.0%   -172.8   13.4
P4 vs P3, 2000 games 10.0+0.5

That is marginally worse than the last run by 1.0%, but well within the margin of error.

Edit: I ran the P4 vs P3 match again out of curiosity:

Wins   Losses   Draws   Pct     Elo      Error
250    1139     611     27.8%   -166.0   13.4
P4 vs P3, 2000 games 10.0+0.5

Once again, the error bars are large enough in a 2000 game match that there is a lot of variability between runs, so I’m not making decisions based on these results, but I do like to see how P4 is measuring up against P3 as changes are made.

Next up: aspiration windows.

Null move pruning

The null move heuristic has been added and yielded another pretty solid increase in Elo.

For the uninitiated, a “null move” isn’t a real over-the-board move of course, it’s just a heuristic that can be used to terminate a line of search early, before any moves are even generated. The heuristic is based on the “Null Move Observation,” which says that it is almost always better to do something rather than nothing.

The basic idea is to actually try “doing nothing” and see if that would allow the opponent to improve their position. If it wouldn’t, odds are that when you really do make a move your position will be too good; the opponent will do something earlier in the line to prevent this position from ever being reached. An example might be capturing the opponent’s queen with a pawn, leaving none of my pieces hanging (en prise). No matter what the opponent does in response, he’s probably losing, so he’ll make an earlier move that avoids leaving his queen hanging in the first place.

For this to actually save time, the search that is done after “doing nothing” (allowing the opponent a second move) is done to a reduced depth. How much to reduce is the big question – too much and you will make some very unsafe and risky cutoffs. Too little and you’re not really saving much time. A very conservative value would be R=2 or R=3, but with increasing processor speeds and greater and greater search depths many programmers push this value to extremes. It should all be backed by testing. Currently I am using R=3 but I do plan to revisit this in the future.

I am also shrinking that reduction toward the leaves, to ensure there is at least one ply of full width search remaining before dropping into the quiescence search. The idea is to give the search an opportunity to resolve checks (the qsearch is currently capture-only). My programs also avoid using this heuristic in positions that are likely zugzwang, where any move could potentially be harmful (which violates the Null Move Observation the heuristic is based on). A sketch of the approach is below.
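
Here is a minimal sketch in Java of how those pieces fit together. It is not the actual chess4j or Prophet4 code; Board, inCheck, likelyZugzwang, and search are hypothetical stand-ins.

    static final int R = 3;  // null move depth reduction

    // inside the full width search, before generating any moves:
    if (!board.inCheck() && !likelyZugzwang(board)) {
        board.makeNullMove();  // give the opponent a free move
        // reduced depth, clamped so at least one ply of full width
        // search remains before the quiescence search
        int nullDepth = Math.max(1, depth - 1 - R);
        int nullScore = -search(board, nullDepth, -beta, -beta + 1);
        board.undoNullMove();
        if (nullScore >= beta) {
            return beta;  // even doing nothing fails high; prune this line
        }
    }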

You’d have to know a little about the alpha-beta algorithm for a more technical description to make sense, but there’s really no reason for me to give one when you can read all about it on CPW – https://www.chessprogramming.org/Null_Move_Pruning – or in this old but excellent description by Bruce Moreland – http://web.archive.org/web/20070708232843/http://www.brucemo.com/compchess/programming/nullmove.htm .

The important question – how much is it worth?

Wins   Losses   Draws   Pct     Elo      Error
780    516      704     56.6%   +46.13   +/- 12.29
812    504      684     57.7%   +53.93   +/- 12.41
chess4j self play match, with null move vs. without, 5+0.25

… and …

Wins   Losses   Draws   Pct     Elo       Error
237    1118     645     28.0%   -164.29   +/- 13.13
Prophet 4 vs Prophet 3, 10+0.5

So once again a pretty large jump, and of course I’m very pleased. Prophet4 is now within 200 Elo of Prophet3, and there is still a pretty good list of enhancements to implement.

The next four planned changes are going to be smaller and perhaps more difficult to measure: (1) stopping the iterator early if less than half the allotted time remains, (2) proper treatment of mate scores in hash (rather than the “cheat” that’s in place now), (3) aspiration windows within the iterative deepening function, and (4) re-examination of the three-fold rep: is a two-fold repetition good enough? I don’t expect any of those will be big changes on their own, but I’m hoping that cumulatively they’ll be worth a few points. But, you never know, which is why we test.

Hashing

Both chess4j and Prophet now have hashing. Maybe I should say that chess4j has hashing again, because it did have hashing before as I wrote about here. As stated in a previous post, I basically “tore down” the search in chess4j in order to build it back up alongside P4, so I could test equivalence as I go.

The replacement strategy is very simple and certainly an area for future work. For every single node in the full width search where moves are expanded and searched (so these wouldn’t include quiescence nodes, leaf nodes, or nodes where a draw caused an early exit), the result is stored in the hash table. For every node of the full width search except the root, the hash table is probed. The hash entry includes not just the score for the position, but the depth of the search that produced it, and the type of node (PV, ALL, CUT) as well so we know if the score represents a bound or an exact value. In the case of PV and CUT nodes, the “best move” is also stored to aid in move ordering. The hash table is not “bucketed” – an entry consists of just a single slot, so anytime something is hashed any previous values are overwritten. This is referred to in the literature as the “always replace” strategy. It’s very simple, but very effective. In the future I intend to experiment with probing from the qsearch, different replacement strategies (such as “depth preferred”) and expanding the number of values that can be mapped to a single key.

When a hash probe is done and an entry is found, and the depth associated with the entry is at least as deep as the depth we intend to search, we may be able to exit the search immediately without expanding any moves. If not, we at least have a move to try. If we do have to search and there is a suggested move from the hash table, it is tried first, before any moves are generated. The probe logic is sketched below.
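
Here is a rough sketch of that probe in Java. The names (TTEntry, table, getZobristKey, and the entry type constants) are hypothetical, not chess4j’s actual API.

    // probe the table before expanding any moves (skipped at the root)
    TTEntry entry = table.probe(board.getZobristKey());
    if (entry != null && entry.depth >= depth) {
        switch (entry.type) {
            case EXACT:        // PV node: exact score, return it
                return entry.score;
            case LOWER_BOUND:  // CUT node: stored score is a lower bound
                if (entry.score >= beta) return beta;
                break;
            case UPPER_BOUND:  // ALL node: stored score is an upper bound
                if (entry.score <= alpha) return alpha;
                break;
        }
    }
    // no early exit, but try the stored move (if any) before generating moves
    Move hashMove = (entry != null) ? entry.move : null;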

The results of two runs of chess4j with hash vs. without:

Wins   Losses   Draws   Pct     Elo      Error
959    439      602     63.0%   +92.46   12.97
981    469      550     62.8%   +90.97   13.22

… and Prophet4 vs Prophet3:

Wins   Losses   Draws   Pct     Elo       Error
145    1360     495     19.6%   -244.92   14.94

A very solid jump in strength; I’ll take it. 🙂

I’ve been putting some thought into my testing regime. In the past I’ve tested changes by running gauntlets at 10+0.5, which can take quite a long time on a single processor if you really want to get the error bars down. I have an older laptop that I’ve dedicated to testing. It has two cores, so it would be nice to utilize both of them. So, I did some self play tests with both of my programs, running two games concurrently, and happily the results came out pretty close to 50%. I also experimented with reducing game times in self play matches from 10+0.5 down to 5+0.25. The results were still around 50%, and without any timeouts. So, at the moment my testing regime is to run a self play match, and then a P4 vs P3 match as verification. I think this will work until I’ve reached parity with P3, at which point I intend to go back to gauntlet testing. It would be good to introduce some statistical methods to measure the confidence that a change is good (and to use for early termination), particularly as the Elo becomes harder to come by and more and more games are needed to measure a change. I imagine at some point I’ll need more cores for testing as well.

Quiescence Search

It’s not very often that you can make a change that will net your program 350 Elo, but that’s exactly what I just did! A quiescence search was added to chess4j and Prophet4, resulting in a dramatic increase in playing strength.

The main issue with a fixed depth search that simply calls a static evaluation at the leaves is the horizon effect. A quiescence search attempts to minimize this issue by performing a more limited (highly selective) search at the leaves, until the position becomes quiet. There is some variability between programs as to what moves to expand, but typically they would be capturing moves, check evasions and possibly checking moves. Promotions or other threatening moves may also be considered. The danger is search explosion – expanding too many moves will cause the subtree to be too big and take too much time to resolve, possibly at the expense of starting a new iteration of the full width search. Therefore, it may become necessary to limit the quiescence search in some way, such as a maximum depth.

My programs are currently only considering captures, though I do plan to experiment with checks at some point. Captures seemed a safe starting point, a sort of “base line,” and they minimize the chance of a search explosion, since capture sequences tend to “fizzle out” at some point (though I’m sure there are some pathological positions out there that would cause problems). A sketch of a captures-only quiescence search is below.
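
Here is a minimal sketch in Java. As with the other sketches, the Board, Move, evaluate, and generateCaptures names are hypothetical, not the actual chess4j code.

    // called at the leaves of the full width search
    int quiescence(Board board, int alpha, int beta) {
        // "stand pat": the side to move is not obligated to capture anything
        int standPat = evaluate(board);
        if (standPat >= beta) return beta;
        if (standPat > alpha) alpha = standPat;

        for (Move capture : generateCaptures(board)) {  // ideally MVV/LVA ordered
            board.applyMove(capture);
            int score = -quiescence(board, -beta, -alpha);
            board.undoMove(capture);
            if (score >= beta) return beta;  // fail-hard cutoff
            if (score > alpha) alpha = score;
        }
        return alpha;
    }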

Results of a chess4j self play match: with a quiescence search vs without:

Wins   Losses   Draws   Pct     Elo       Error
1683   160      157     0.881   +347.36   21.23

… and the latest Prophet4 vs Prophet3:

Wins   Losses   Draws   Pct     Elo       Error
83     1661     256     0.105   -371.33   20.18

Still a long way to go to catch P3, but considering that before the quiescence search was added the results were more like 5 wins, the needle has moved quite a lot.

The next planned addition – hash tables – should net another large gain. I’m not sure that I’ll ever see this kind of increase again, but I expect it will be significant.

I’m also starting to do some experimentation to devise a testing strategy. I’ll write about that in another post when I have some results to share.

Pondering and Analysis

chess4j can now ponder! If you’re not familiar with that term, it means the program will make use of the time while the opponent is on move. It does this by predicting the move the opponent will make, and thinking about what its response will be.

Let’s say that the program would spend X seconds formulating its response. The opponent spends Y seconds on its move, and the move is the one the program predicted. If X <= Y, the program can move instantly. If X > Y, the program only needs to spend X - Y more seconds. If, however, the program did not correctly predict the opponent’s move, it has to throw out the work and start over.

There are other ways to implement pondering. One variation is for the engine to think about the position as if it were the opponent. The idea is that, when the opponent does move, the hash tables will be full of good information that should make the search move along very quickly. It’s hard to see how this can be more effective than the “guess” approach, though, assuming the prediction rate is fairly good. A 50% prediction rate is completely reasonable, and that’s a lot of the opponent’s time being utilized by the engine.

So how many ELO is pondering worth? Conventional wisdom is that a doubling of speed is worth around 70 ELO, so that should serve as an upper bound (effectively using 100% of the opponent’s time would conceptually be equivalent to a doubling of time). If we effectively use about 50% of the opponent’s time, I would expect a 30-40 ELO boost.

I ran some tests, chess4j with pondering vs without pondering. This was mainly a stability test; I wouldn’t put too much stock in the results as the prediction rate against itself will obviously be a lot higher than it would against another opponent. Still, the results are interesting.

With Pondering   Without Pondering   Draws   Rate
1010             651                 339     0.670

Clearly pondering is a big advantage.

Analysis mode is also supported now, meaning it’s possible to analyze games or just try different moves to see the engine’s “opinion” of each. Make a move, let the engine think for a while, back up and try another move, etc.

One final note: Pondering and Analysis mode are not supported in Prophet directly, but they are supported by chess4j when using Prophet. This highlights the benefit of the JNI integration – keep the native engine “lean and clean” and implement the ancillary functions in the high level language.

Now, on to a quiescence search.

chess4j + Prophet4 rewrite status 6/14/20

Support for time controls has been added to both Prophet and chess4j. Each is capable of playing with a fixed time per move or incremental chess clocks. They can use conventional chess clocks as well, but the timing strategy doesn’t really take into account that time will be added after the time control is up (for example 40 minutes for the first 40 moves, then 5 more minutes for each 10 moves, etc.).

Another addition that was tricky to get right was the JNI support to print changes in the principal variation during a search. I ended up adding a callback function that is invoked from the search when the PV changes at the root. When using P4 as a standalone engine, that callback is a simple “print PV” function. When invoking the P4 search from chess4j, the callback is a method in the JNI layer that calls a similar “print PV” function in the Java code. It was tricky, but works beautifully. The Java side of the arrangement is sketched below.
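
The Java half of that arrangement might look roughly like this. The class and method names are hypothetical, and the interesting part (the C code that looks up this method through the JNI environment and invokes it when the root PV changes) is omitted.

    public class ProphetSearch {
        // invoked from the native search, via JNI, whenever the root PV changes
        public static void printPv(int depth, int score, String pv) {
            System.out.println("depth " + depth + " score " + score + " pv " + pv);
        }

        // native search entry point, implemented in the Prophet4 library
        public static native int search(String fen, int maxDepth);
    }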

I mentioned in my last post that I was kicking off some self play matches in P4 using 10 second games with 0.5 second increments. This did uncover a couple of bugs, but everything is stable now. Unlike fixed depth matches, where you would expect an exact .500 result, timed matches introduce a small amount of variability: the system becomes “busy” at times, which causes the search to stop at different points. So, the result of a 4000 game match was close to, but not exactly, 0.500.

Just out of curiosity I also played a 2000 game match of P4 vs P3. There was absolutely no doubt that P3 should win this match hands down. P4 is missing many search algorithms that are worth lots of ELO. I just wanted to test for stability and get a baseline measurement to gauge future improvements against. The result – 5 – 1959 – 36! Clearly a long way to go, but search improvements are coming soon.

Next up: analysis support, ponder support in chess4j, and a Windows build of the chess4j + P4 bundle. Once those features are complete, it will be time to start re-adding those search enhancements to climb the ELO ladder. Can’t wait.

chess4j + Prophet integration status 5/17/20

Several months ago I wrote about a proof-of-concept JNI integration between chess4j and Prophet4. At the time I had managed to compile Prophet4 as a static library, load that library into chess4j and write a JNI layer. The only function the Prophet4 library was serving at the time was for endpoint evaluation, but it was enough to show the idea was viable. Based on the success of the proof-of-concept, I’ve decided to go “all in” with this chess4j + Prophet4 integration. And, it’s going really well.

Since the time I wrote that article, I’ve added a search function to Prophet4 with some basic heuristics, as well as an iterative deepening driver. I’ve also added some additional JNI layer code that allows chess4j to utilize Prophet’s search. The native code runs 2-3x faster, so there is a tremendous speed improvement.

I’ve also realized some benefits when it comes to testing. Just to be able to compare apples to apples, I’ve stripped a lot of chess4j code out to simplify its search to the exact equivalent of Prophet’s. That didn’t really bother me too much, because I wanted to refactor it to improve the test coverage anyway. Now I’m building both programs back up, feature by feature. I’ve found that it’s often easier to build comprehensive tests in Java, largely due to the ease of “injecting” mocked dependencies. As a feature is implemented in chess4j and I’m satisfied with the testing, I implement the same feature in Prophet4. I still add tests in P4, using Google’s test framework, though not always as comprehensive. Then, the JNI layer is leveraged to ensure that the searches traverse exactly the same tree, produce the same score, and that the iterative deepening driver produces exactly the same PV. Proving that the searches are equivalent gives a sort of transitivity argument.

So, chess4j and Prophet4 are now, and probably forever, linked. In some sense you could consider chess4j a Java port of Prophet4, or Prophet4 a C port of chess4j. But there’s more to it than that; the programs serve different purposes. chess4j will be the more “feature complete” engine. It contains an opening book, where P4 does not. It will be capable of pondering, where P4 may not be. Prophet4 will be the lighter weight and faster of the two, and more suitable for testers that focus on engine-to-engine testing. The chess4j releases will include platform specific builds that utilize P4 as a static library, and for any platforms that aren’t included, the Java engine will still work.

For future work I’m envisioning an SMP implementation in P4, such as Young Brothers Wait. P4 is also the more likely of the two to use tablebases. Since chess4j uses P4 as a static library, it will automatically benefit from those features. Macro level work such as distributed computing and learning experiments is more likely to be done in chess4j. Who knows? For now I’m having fun, and I guess that’s the main benefit.

Prophet4 rewrite status 2/16/20

Added a simple depth first search. The search uses the alpha/beta algorithm and recognizes checkmated and stalemated positions, but otherwise has no “intelligence.” It is a simple fixed depth search, without iterative deepening or timing. The general shape is sketched below.
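
A minimal sketch of that shape of search in Java (hypothetical names, not the actual Prophet4 code; the mate score is adjusted by ply so that nearer mates score higher):

    static final int MATE_SCORE = 1_000_000;

    int alphaBeta(Board board, int depth, int ply, int alpha, int beta) {
        if (depth == 0) return evaluate(board);
        List<Move> moves = generateLegalMoves(board);
        if (moves.isEmpty()) {
            // no legal moves: checkmate if in check, otherwise stalemate
            return board.inCheck() ? -MATE_SCORE + ply : 0;
        }
        for (Move move : moves) {
            board.applyMove(move);
            int score = -alphaBeta(board, depth - 1, ply + 1, -beta, -alpha);
            board.undoMove(move);
            if (score >= beta) return beta;   // cutoff
            if (score > alpha) alpha = score;
        }
        return alpha;
    }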

The next phase of work will be to add some basic move ordering, such as trying the previous PV first, MVV/LVA capture scoring and killer moves.

The JNI integration with chess4j has gone extremely well. chess4j is capable of using Prophet for search, so after this rewrite is complete there will likely be a chess4j bundle that includes P4 for some platforms. I am leaning towards keeping P4 as a lightweight, XBoard compatible “calculation engine” and leveraging chess4j for ancillary functions such as opening book, pondering support, and distributed computing support.