“Tree shaping” is an important component of a strong chess program. There are various ways to shape a search tree. One is by extending lines that seem interesting in some way, particularly those that cannot be fully resolved within the search horizon (see https://en.wikipedia.org/wiki/Horizon_effect). Other methods of shaping the search tree include reducing or even pruning lines that seem less promising. The quiescence search is also a form of tree shaping, as the search becomes more selective about which nodes it expands – typically just captures, or captures plus checks and check evasions.
One of the simplest, and perhaps the most effective, extensions is the “check extension.” Whenever a move is made, if the move gives check to the enemy king, the search depth is increased by one. That is, instead of this:
apply_move(pos, move);
val = -search(pos, depth-1, -beta, -alpha);
Do this:
apply_move(pos, move);
bool gives_check = in_check(pos);
int ext = gives_check ? 1 : 0;   // extend by one ply when the move gives check
val = -search(pos, depth-1+ext, -beta, -alpha);
Disclaimer: that is probably a little too naive, as it doesn’t really guarantee the line will terminate. There are surely positions that would cause the extension as written above to explode the search tree. But, on average, it’s a win.
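For illustration, here is one common way to bound the extension. This is only a hedged sketch in the spirit of the snippet above, not Prophet4’s actual code: the extensions counter, the MAX_EXTENSIONS limit, and the helper names (applyMove, inCheck, undoMove, search) are all assumptions made for the example.

static final int MAX_EXTENSIONS = 16;  // illustrative limit on extensions per line

int searchMove(Position pos, Move move, int depth, int extensions, int alpha, int beta) {
    applyMove(pos, move);
    boolean givesCheck = inCheck(pos);

    // extend checks, but only while this line hasn't already been extended too often,
    // so a long checking sequence can't keep the depth from ever decreasing
    int ext = (givesCheck && extensions < MAX_EXTENSIONS) ? 1 : 0;

    int val = -search(pos, depth - 1 + ext, extensions + ext, -beta, -alpha);
    undoMove(pos, move);
    return val;
}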
I implemented the check extension in Prophet4 and indeed it seems to be a big win.
Wins    Losses    Draws    Pct      Elo     Error
331     183       402      58.1%    56.6    16.8
Prophet4 with check extension vs without, 5+0.25
In an effort to add some sort of termination criterion, I also tried limiting the check extension to capturing moves (i.e., only extending checks that are also captures). While still a win, it didn’t work out as well.
Wins    Losses    Draws    Pct      Elo     Error
3206    3119      4505     50.4%    2.8     5.0
P4 with capture only check ext vs without, 5+0.25
I also tried another form of extension entirely – extending promoting moves. However, it had no measurable effect and may even have been a small loss, so I abandoned it.
After settling on the “naive check extension” I measured P4’s standing against P3.
Wins    Losses    Draws    Pct      Elo       Error
313     929       758      34.6%    -110.6    12.2
Prophet4 vs Prophet3, 10+0.5
The last time I measured, P4 was -164.29 vs P3, so it’s definitely gained some ground.
Next on the list is to take a careful look at move ordering before moving on to another form of tree shaping – reductions. I expect Late Move Reductions (LMR) in particular to give another large jump. But the algorithm is very sensitive to good move ordering, so it’s worth taking some time to study that first.
I’ve been pretty busy as of late, but in an effort to get some momentum going I picked some items from my to-do list that would take far more processor time than programming time – validating some evaluation terms that have been part of Prophet for many years now. The goal, really, was to validate what I was sure was true: that these evaluation terms do help, and that if they were removed, performance would surely drop. As you’ll see, it didn’t quite work out that way.
Here’s a list of evaluation terms that were tested, with results below.
Knight tropism – the idea that keeping your knight as close as possible to the enemy’s king is generally a good thing.
Rooks on open files, or even half open files.
Passed pawns – pawns with no enemy pawn in front or on an adjacent file should be rewarded, since the likelihood of promotion increases dramatically.
Isolated pawns – pawns with no friendly pawns on an adjacent file should be penalized, since they are weakened considerably without the supporting pawns.
Doubled pawns – pawns that occupy the same file as another friendly pawn are a (small) positional weakness.
Major pieces (rooks, queens) on the 7th rank with the enemy king on the back rank are usually very deadly, especially when “connected.”
Knight Tropism
This term works by penalizing a knight by 2 x distance_from_enemy_king centipawns, where the distance from the enemy king is the maximum of the difference in ranks and the difference in files (the Chebyshev distance).
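As a quick illustration, the penalty can be computed from two square indices. This is a rough sketch only, assuming 0-63 square numbering with a1 = 0; the method name and encoding are mine, not necessarily Prophet4’s.

// the distance is the Chebyshev distance: max of rank difference and file difference
static int knightTropismPenalty(int knightSq, int enemyKingSq) {
    int rankDiff = Math.abs(knightSq / 8 - enemyKingSq / 8);
    int fileDiff = Math.abs(knightSq % 8 - enemyKingSq % 8);
    return 2 * Math.max(rankDiff, fileDiff);  // centipawns, subtracted from the knight's score
}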
I actually started out running this one with chess4j, the Java engine, before deciding to use Prophet4 going forward. As explained in a previous post, the overhead of restarting the JVM between matches is just too high, and not restarting the engine between games isn’t great either, so doing these fast tests with a lightweight executable that can be quickly restarted seems preferable. At any rate, the outcome was nearly the same so I’ve combined the results into one table.
Also, since these are validations of existing terms, player A was the “control” player, and player B the “without” player. The hypothesis is that “Player A is better than Player B.”
Wins    Losses    Draws    Pct      Elo     Error
6312    5982      7674     50.8%    5.7     3.8
2343    2122      2766     51.5%    10.6    6.3
5942    5789      8269     50.4%    2.7     3.7
Control vs “without knight tropism”, 5+0.25
As you can see, knight tropism is worth a few Elo. Not a game wrecker by any means, but it does have a small positive effect. We’ll check this one off the list.
Rooks on Open Files
Rooks on open files, or even half-open files, are known to be a strategic advantage. They allow the player to move their rooks around easily, projecting strength and penetrating into the opponent’s position.
Rooks on files with no pawns at all (open files) are given a 25 centipawn bonus. Rooks on files with only enemy pawns (half-open files) are given a 15 centipawn bonus.
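A minimal sketch of the file classification is below, assuming bitboards with bit 0 = a1. The constant, method name, and bitboard layout are assumptions for the example, not the actual implementation.

static final long FILE_A = 0x0101010101010101L;

static int rookFileBonus(int rookSq, long friendlyPawns, long enemyPawns) {
    long fileMask = FILE_A << (rookSq % 8);     // all squares on the rook's file
    boolean friendlyPawnOnFile = (friendlyPawns & fileMask) != 0;
    boolean enemyPawnOnFile = (enemyPawns & fileMask) != 0;
    if (!friendlyPawnOnFile && !enemyPawnOnFile) return 25;  // open file
    if (!friendlyPawnOnFile) return 15;                      // half-open file
    return 0;
}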
Wins    Losses    Draws    Pct      Elo     Error
573     411       717      54.7%    33.2    12.6
Control vs “No rook on open files”, 5+0.25
This term is obviously doing the job. Check this one off the list.
Passed Pawns
Passed pawns often go on to become promoted pawns. In Prophet and chess4j, passed pawns are awarded 20 centipawns.
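A simple sketch of the detection, for a white pawn, using the definition from the list above (no enemy pawn in front on the same or an adjacent file). The bitboard layout (bit 0 = a1) and names are assumptions, and a real implementation would precompute the front-span masks.

static boolean isPassedWhitePawn(int pawnSq, long blackPawns) {
    int file = pawnSq % 8;
    int rank = pawnSq / 8;
    long frontSpan = 0L;
    // every square in front of the pawn on its own file and the adjacent files
    for (int r = rank + 1; r < 8; r++) {
        for (int f = Math.max(0, file - 1); f <= Math.min(7, file + 1); f++) {
            frontSpan |= 1L << (r * 8 + f);
        }
    }
    return (blackPawns & frontSpan) == 0;  // passed if no enemy pawn can block or capture it on the way
}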
Wins    Losses    Draws    Pct      Elo     Error
2143    1930      2656     51.6%    11.0    6.5
Control vs “no passed pawn bonus”, 5+0.25
Another one validated. That’s not to say it’s tuned correctly, but at least we can say it does help.
Isolated Pawns
Isolated pawns don’t have any friendly pawns on adjacent files to give them support. They are a weakness. In Prophet and chess4j, isolated pawns are penalized 20 centipawns.
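The corresponding check is even simpler. A rough sketch, with the same assumed bitboard layout and illustrative names:

static boolean isIsolated(int pawnSq, long friendlyPawns) {
    long fileA = 0x0101010101010101L;
    int file = pawnSq % 8;
    long adjacentFiles = 0L;
    if (file > 0) adjacentFiles |= fileA << (file - 1);
    if (file < 7) adjacentFiles |= fileA << (file + 1);
    // isolated if no friendly pawn exists on either adjacent file
    return (friendlyPawns & adjacentFiles) == 0;
}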
Wins    Losses    Draws    Pct      Elo      Error
502     641       741      46.3%    -25.7    12.2
1385    1532      1847     48.5%    -10.7    7.7
Control vs “no isolated pawn penalty”, 5+0.25
That is NOT a good result! I was in such disbelief after the first test that I ran a second test, and though the result doesn’t seem quite as bad, it’s still bad. As it stands, the isolated pawn penalty is hurting. I haven’t disabled it, because the major focus right now is rewriting the engine before improving it, and I want to be able to compare apples to apples after the rewrite. However, I have put an item on the backlog to study this in more detail. The heuristic should work. Either the implementation isn’t quite right, or it’s too expensive, or the weights aren’t right. I’ll have to get to the bottom of this.
Doubled Pawns
Doubled pawns are also known to be a positional weakness. They are penalized 10 centipawns (note this penalty gets “awarded” to each pawn).
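A sketch of how the per-pawn penalty adds up (so a doubled pair costs 2 x 10 = 20 centipawns in total). Again, the name and bitboard layout are illustrative:

static int doubledPawnPenalty(long friendlyPawns) {
    long fileA = 0x0101010101010101L;
    int penalty = 0;
    for (int file = 0; file < 8; file++) {
        int pawnsOnFile = Long.bitCount(friendlyPawns & (fileA << file));
        if (pawnsOnFile > 1) penalty += 10 * pawnsOnFile;  // each pawn on the file is penalized
    }
    return penalty;
}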
Wins    Losses    Draws    Pct      Elo      Error
593     722       950      47.2%    -19.8    10.9
858     1149      1516     45.9%    -28.8    8.7
Control vs “no doubled pawn penalty”, 5+0.25
Another surprising and disappointing result! Investigating this has also been added to the post-rewrite backlog.
Majors on 7th
This evaluation term awards rooks and queens on the 7th rank when the enemy king is on the back rank. If so, 50 centipawns are awarded. Additionally, if connected to another major piece on the 7th rank, an additional 80 centipawns are awarded – the idea being this is likely a deadly / mating attack.
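A rough sketch of the term for white is below. Whether the 50 centipawns is per piece, and exactly what “connected” means, are my assumptions here (I’ve approximated it as simply having two or more majors on the 7th), so treat this as illustrative only.

static int majorsOn7th(long whiteRooksAndQueens, long blackKing) {
    final long RANK_7 = 0x00FF000000000000L;  // squares a7-h7 (bit 0 = a1)
    final long RANK_8 = 0xFF00000000000000L;  // squares a8-h8
    if ((blackKing & RANK_8) == 0) return 0;  // enemy king must be on the back rank

    int majors = Long.bitCount(whiteRooksAndQueens & RANK_7);
    int score = 50 * majors;                  // assumed: 50 cp per major on the 7th
    if (majors >= 2) score += 80;             // "connected" approximated as two or more majors on the rank
    return score;
}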
I can’t remember how long this term has been around, but it’s been a looooong time. Sadly, the results aren’t so good.
Wins    Losses    Draws    Pct      Elo      Error
1019    1137      1542     48.4%    -11.1    8.5
3178    3509      4483     48.5%    -10.3    5.0
Control vs “without majors on 7th”, 5+0.25
Clearly, not a good outcome! I wondered what would happen if I just disabled the “connected” part but left the award for having a major piece on the 7th when the enemy king is on the 8th.
Wins    Losses    Draws    Pct      Elo     Error
5705    6024      8271     49.2%    -5.5    3.7
5691    5973      8336     49.3%    -4.9    3.7
Control vs “without CONNECTED term”, 5+0.25
Disabling the “connected” bit may help a little, but still, the heuristic is hurting overall, not helping.
Conclusion
Out of the six evaluation terms tested, three were found to be helpful and three actually harmful. The “majors on 7th”, doubled pawn, and isolated pawn heuristics have all been added to the post-rewrite backlog for further study.
The moral of the story here is: do NOT assume anything. Always test! This has been my philosophy for a good while now, but these eval terms were all added at a time when I wasn’t as rigorous in my testing as I am now. (And, frankly, I don’t think many in the computer chess community were either, before 10-15 years ago.)
If there is any upshot, it’s that there is a guaranteed strength improvement to be had, even if it’s just removing the terms altogether. But, since I know they really should work, I’ve opted to leave them for now. Also, when the Prophet4 rewrite is complete, I really want to be able to run some benchmarks against Prophet3 and be able to make comparisons. The rewritten codebase should “stand on its own.” Improving the evaluator now would cast some doubt on those comparisons.
Great things are done by a series of small things brought together. — Vincent Van Gogh.
The last few changes — adding null move pruning, hashing, and a quiescence search — have had major impacts on playing strength, which was expected. But inevitably, not all changes can be big winners, or even big losers. Some will have a very minor effect, either for the good or bad. The change I deal with in this post is one of those. The idea being tested was — should the search iterator be stopped early if, after completing an iteration, more than half the allotted time has been used? The rationale is that, due to the exponential growth of search trees, if you have less than half your time left when starting a new iteration, you’re unlikely to complete the iteration. Perhaps it would be better to save the time for future use. (This would not apply for games with a fixed time-per-move.)
The idea is so simple I hesitate to even show the (pseudo) code below:
boolean stopSearching = false;
do {
    ++depth;

    // DO SEARCH
    score = search.search(board, depth ...)

    if (search.isStopped()) {
        break;
    }

    long elapsed = System.currentTimeMillis() - startTime;

    // .... OTHER STOP CONDITIONS ....

    // if we've used more than half our time, don't start a new iteration
    if (earlyExitOk && !skipTimeChecks && (elapsed > maxTimeMs / 2)) {
        stopSearching = true;
    }
} while (!stopSearching);
Up until this point I’ve been running 2,000 game self-play matches to gauge if a change is good or not. 2,000 games is enough to get an Elo difference with error bars of around 12-13 points, which has been “good enough.” So, my starting point here was the same – 2 matches, running concurrently on my 2 core testing machine, each match being 2000 games.
Wins    Losses    Draws    Pct      Elo      Error
634     589       777      51.1%    +7.82    11.90
634     614       752      50.5%    +3.47    12.02
Test 1 – 2 x 2000 game self play matches
Judging by the results of this test, it appears likely the change is good, albeit minor. The problem is the error bars. If the error bars are to be believed, it’s possible the change is actually neutral, or even a minor loss. The only way to tell would be to play more games – a lot more. People who have done the math state that to measure a 5 Elo change, you need something like 10,000 games. To measure a 2 Elo change requires 60,000 games! I decided I would run two 20,000 game matches concurrently, for a combined 40,000 games. That should be enough to make a decision. The results of that experiment are below.
Wins    Losses    Draws    Pct      Elo       Error
6492    5782      7726     51.8%    +12.34    3.77
6334    6151      7515     50.5%    +3.18     3.80
Test 2 – 2 x 20,000 game self play matches
These results were very confusing. If you take the Elo for each match and use the error bars to create an interval, the two intervals don’t even overlap! I have enough of an appreciation of the statistics to know this is not impossible, but it seems extremely suspect. I ended up creating a post about it on a computer chess forum – http://talkchess.com/forum3/viewtopic.php?f=7&t=74685 . One poster, H.G. Muller, gave a possible explanation. Since I am not restarting the engines between matches (chess4j is a Java engine, and the JVM has a “warm up” cost that I’m avoiding by not restarting it), it’s possible that memory (cache) is being mapped in a way that is harmful to one process. Not restarting causes that condition to persist. I don’t know if that’s the case or not, but it does make some sense, and it raises the question of whether it really is OK to run two matches concurrently (or more generally, N matches on an N core machine), especially if you don’t restart between matches. I decided to run another test, but just one match this time, not two.
Wins    Losses    Draws    Pct      Elo      Error
6179    6137      7684     50.1%    +0.73    3.77
Test 3 – 1 x 20,000 game self play match
My conclusion is that running two matches concurrently is not as stable as I originally thought. Restarting between matches could help, but again, for a JVM engine that’s not very appealing either. I could switch to testing first with Prophet, as it’s written in C and can be restarted cheaply, but one of the points of having a Java engine was quick prototyping. It’s also possible that a machine with more cores would resolve the issue, if at least one core is left free. I’ll revisit this in the future. For now, it’s one match at a time.
As far as the effect of the change on strength, it appears to be worth a very small but positive amount of Elo. I think it’s a keeper, but just barely. But running tens of thousands of games (especially when I’m only able to use a single core!) is not tenable. This would be a good time to look into some statistical methods. Many engine authors use the Sequential Probability Ratio Test (SPRT). If all you want to know is “is this engine better than that engine?”, and not necessarily the Elo difference between the two, SPRT can give a faster result.
The starting point for SPRT is to form a hypothesis. If player PA is my new version, and player PB is the current, best known version, my hypothesis is “PA is stronger than PB.” That isn’t quite complete; to be testable the statement has to be quantified, so it becomes “PA is stronger than PB by at least 5 Elo.”
You also need a “null hypothesis” – the hypothesis that there is no meaningful difference between the players. My null hypothesis is “PA is not stronger than PB by more than 1 Elo.”
Finally, since this is a statistical test you need to determine your tolerance for error. There are two types of errors to consider. One is the likelihood of accepting a change that isn’t actually better (a type I error, or false positive). The other is the likelihood of rejecting a change that really is better (a type II error, or false negative). I decided to accept a 5% likelihood of error for both.
Putting all that together:
H1: PA is stronger than PB by at least 5 Elo.
H0: PA is not stronger than PB by more than 1 Elo.
alpha: 0.05
beta: 0.05
Note that SPRT does not necessarily mean the match will terminate early. You can read about the math behind it at https://en.wikipedia.org/wiki/Sequential_probability_ratio_test , but the idea is that if either H1 or H0 is “accepted”, the match terminates. If not, it continues until the maximum number of games is reached.
As it turns out, the program I use to drive the testing process, Cutechess, has the capability to terminate a match using SPRT. I had to experiment with it a little bit; my first attempt ended in an 18,000 game match where H0 was accepted – not good! But I had some parameters flipped, so the results are invalid. I decided it would be more productive to experiment with a very large and obvious change, like disabling null move. The correct flags are:
-sprt elo0=1 elo1=5 alpha=0.05 beta=0.05
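For context, a full cutechess-cli invocation built around those flags might look roughly like the following. The engine commands, protocol, time control, and opening file are placeholders for illustration, not the exact command I run:

cutechess-cli \
    -engine cmd=./chess4j_new name=new proto=xboard \
    -engine cmd=./chess4j_base name=base proto=xboard \
    -each tc=5+0.25 \
    -openings file=openings.pgn order=random \
    -games 2 -rounds 10000 -repeat \
    -sprt elo0=1 elo1=5 alpha=0.05 beta=0.05 \
    -pgnout results.pgn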
Given the previous results, I am not going to repeat the experiment yet again to see if it would terminate early, as it’s very likely it wouldn’t and I’d end up losing a week of processor time. I will however be using SPRT going forward.
Since the change was a good one in chess4j I’ve ported it to Prophet4 as well, and for confirmation ran a 2,000 game P4 vs P3 match.
Wins    Losses    Draws    Pct      Elo       Error
233     1153      614      27.0%    -172.8    13.4
P4 vs P3, 2000 games 10.0+0.5
That is marginally worse than the last run by 1.0%, but well within the margin of error.
Edit: I ran the P4 vs P3 match again out of curiosity:
Wins    Losses    Draws    Pct      Elo       Error
250     1139      611      27.8%    -166.0    13.4
P4 vs P3, 2000 games 10.0+0.5
Once again, the error bars are large enough in a 2000 game match that there is a lot of variability between runs, so I’m not making decisions based on these results, but I do like to see how P4 is measuring up against P3 as changes are made.
The null move heuristic has been added and yielded another pretty solid increase in Elo.
For the uninitiated, a “null move” isn’t a real over-the-board move, of course; it’s a heuristic that can be used to terminate a line of search early, before any moves are even generated. The heuristic is based on the “Null Move Observation,” which says that it is almost always better to do something rather than nothing.
The basic idea is to actually try “doing nothing” and see if that would allow the opponent to improve their position. If it wouldn’t, odds are that when you really do make a move your position will be too good (the opponent will do something else earlier that prevents this entire line anyway). An example might be capturing the opponent’s queen with a pawn, leaving none of my pieces hanging (en prise). No matter what the opponent does in response, he’s probably losing, so he’ll make an earlier move which avoids leaving his queen hanging in the first place.
For this to actually save time, the search that is done after “doing nothing” (effectively giving the opponent two moves in a row) is done to a reduced depth. How much to reduce is the big question – too much, and you will make some very unsafe and risky cutoffs; too little, and you’re not really saving much time. A very conservative value would be R=2 or R=3, but with increasing processor speeds and greater and greater search depths many programmers push this value to extremes. It should all be backed by testing. Currently I am using R=3, but I do plan to revisit this in the future.
I am also reducing that reduction towards the leaves, to ensure there is at least one ply of full-width search remaining before dropping into the quiescence search. The idea here is to give the search an opportunity to resolve checks (the qsearch is currently capture-only). My programs also avoid using this heuristic in positions that are likely zugzwang, where any move could potentially be harmful (which violates the Null Move Observation the heuristic is based on).
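Putting that together, a rough sketch of the heuristic in the same pseudocode style as the earlier snippets might look like this. The depth condition, the zero-width window, and the helper names (makeNullMove, undoNullMove, hasNonPawnMaterial) are illustrative assumptions, not the literal implementation.

// try the null move before generating any moves at this node
if (!inCheck(pos) && depth >= 3 && hasNonPawnMaterial(pos)) {  // skip likely zugzwang positions
    int R = 3;
    // shrink the reduction near the leaves so at least one ply of full-width
    // search remains before dropping into the quiescence search
    if (depth - 1 - R < 1) {
        R = depth - 2;
    }

    makeNullMove(pos);  // pass the turn: "do nothing"
    int nullVal = -search(pos, depth - 1 - R, -beta, -beta + 1);
    undoNullMove(pos);

    if (nullVal >= beta) {
        return beta;  // even doing nothing failed high, so prune this line
    }
}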
chess4j self play match, with null move vs. without, 5+0.25
… and …
Wins    Losses    Draws    Pct      Elo        Error
237     1118      645      28.0%    -164.29    13.13
Prophet 4 vs Prophet 3, 10+0.5
So once again a pretty large jump, and of course I’m very pleased. Prophet4 is now within 200 Elo of Prophet3, and there is still a pretty good list of enhancements to implement.
The next four planned changes are going to be smaller and perhaps more difficult to measure: (1) stopping the iterator early if less than half the allotted time remains, (2) proper treatment of mate scores in hash (rather than the “cheat” that’s in place now), (3) aspiration windows within the iterative deepening function, and (4) re-examination of the three-fold rep: is a two-fold repetition good enough? I don’t expect any of those will be big changes on their own, but I’m hoping that cumulatively they’ll be worth a few points. But, you never know, which is why we test.
Both chess4j and Prophet now have hashing. Maybe I should say that chess4j has hashing again, because it did have hashing before as I wrote about here. As stated in a previous post, I basically “tore down” the search in chess4j in order to build it back up alongside P4, so I could test equivalence as I go.
The replacement strategy is very simple and certainly an area for future work. For every single node in the full width search where moves are expanded and searched (so these wouldn’t include quiescence nodes, leaf nodes, or nodes where a draw caused an early exit), the result is stored in the hash table. For every node of the full width search except the root, the hash table is probed. The hash entry includes not just the score for the position, but the depth of the search that produced it, and the type of node (PV, ALL, CUT) as well so we know if the score represents a bound or an exact value. In the case of PV and CUT nodes, the “best move” is also stored to aid in move ordering. The hash table is not “bucketed” – an entry consists of just a single slot, so anytime something is hashed any previous values are overwritten. This is referred to in the literature as the “always replace” strategy. It’s very simple, but very effective. In the future I intend to experiment with probing from the qsearch, different replacement strategies (such as “depth preferred”) and expanding the number of values that can be mapped to a single key.
When a hash probe is done and a value found, if the depth associated with the entry is at least as big as the depth we intend to search, we may be able to immediately exit the search without expanding any moves. If not, we at least have a move to try. If we do have to search and there is a suggested move from the hash table, it is tried first, before any moves are generated.
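A condensed sketch of the probe logic described in the last two paragraphs is below. The entry layout, the power-of-two table size, and the names are illustrative; the real chess4j/Prophet4 code differs in the details.

enum NodeType { PV, ALL, CUT }

class HashEntry {
    long key;        // full Zobrist key, to detect index collisions
    int depth;       // depth of the search that produced the score
    int score;       // exact score (PV) or a bound (ALL = upper bound, CUT = lower bound)
    NodeType type;
    Move bestMove;   // stored for PV and CUT nodes, used for move ordering
}

// probe before generating or searching any moves (table size assumed to be a power of two)
int idx = (int) (pos.zobristKey() & (table.length - 1));
HashEntry entry = table[idx];
Move hashMove = null;
if (entry != null && entry.key == pos.zobristKey()) {
    if (entry.depth >= depth) {
        if (entry.type == NodeType.PV) return entry.score;                      // exact value
        if (entry.type == NodeType.CUT && entry.score >= beta) return beta;     // fail high
        if (entry.type == NodeType.ALL && entry.score <= alpha) return alpha;   // fail low
    }
    hashMove = entry.bestMove;  // not deep enough to cut off, but try this move first
}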
The results of two runs of chess4j with hash vs. without:
Wins    Losses    Draws    Pct      Elo       Error
959     439       602      63.0%    +92.46    12.97
981     469       550      62.8%    +90.97    13.22
… and Prophet4 vs Prophet3:
Wins    Losses    Draws    Pct      Elo        Error
145     1360      495      19.6%    -244.92    14.94
A very solid jump in strength; I’ll take it. 🙂
I’ve been putting some thought into my testing regime. In the past I’ve tested changes by running gauntlets of 10+0.5, which can take quite a long time on a single processor if you really want to get the error bars down. I have an older laptop that I’ve dedicated to testing. It has two cores, so it would be nice to utilize both of them. So, I did some self play tests with both of my programs, running two games concurrently, and happily the results came out pretty close to 50%. Additionally, I also experimented with reducing game times in self play matches from 10+0.5 down to 5+0.25. The results were still around 50%, and without any timeouts. So, at the moment my testing regime is to run a self play match, and then to run a P4 vs P3 match as verification. I think this will work until I’ve reached parity with P3, at which point I intend to go back to gauntlet testing. It would be good to introduce some statistical methods to measure the confidence that a change is good (and to use for early termination), particularly as Elo gains become harder to come by and more and more games are needed to measure a change. I imagine at some point I’ll need more cores for testing as well.
It’s not very often that you can make a change that will net your program 350 Elo, but that’s exactly what I just did! A quiescence search was added to chess4j and Prophet4, resulting in a dramatic increase in playing strength.
The main issue with a fixed depth search that simply calls a static evaluation at the leaves is the horizon effect. A quiescence search attempts to minimize this issue by performing a more limited (highly selective) search at the leaves, until the position becomes quiet. There is some variability between programs as to what moves to expand, but typically they would be capturing moves, check evasions and possibly checking moves. Promotions or other threatening moves may also be considered. The danger is search explosion – expanding too many moves will cause the subtree to be too big and take too much time to resolve, possibly at the expense of starting a new iteration of the full width search. Therefore, it may become necessary to limit the quiescence search in some way, such as a maximum depth.
My programs are currently only considering captures, though I do plan to experiment with checks at some point. Captures seemed to be a safe starting point, sort of a “baseline,” and they minimize the chance of a search explosion, since capture sequences tend to fizzle out at some point (though I’m sure there are some pathological positions out there that would cause problems).
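For reference, a bare-bones capture-only quiescence search matching that description, in the same pseudocode style as the earlier snippets; the helper names (evaluate, generateCaptures, applyMove, undoMove) are illustrative.

int qsearch(Position pos, int alpha, int beta) {
    int standPat = evaluate(pos);       // the "stand pat" score: do nothing and accept the static eval
    if (standPat >= beta) return beta;  // already good enough to fail high
    if (standPat > alpha) alpha = standPat;

    for (Move capture : generateCaptures(pos)) {
        applyMove(pos, capture);
        int val = -qsearch(pos, -beta, -alpha);
        undoMove(pos, capture);
        if (val >= beta) return beta;
        if (val > alpha) alpha = val;
    }
    return alpha;
}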
Results of a chess4j self play match: with a quiescence search vs without:
Wins    Losses    Draws    Pct      Elo       Error
1683    160       157      0.881    347.36    21.23
… and the latest Prophet4 vs Prophet3:
Wins    Losses    Draws    Pct      Elo        Error
83      1661      256      0.105    -371.33    20.18
Still a long way to go to catch P3, but considering that before the quiescence search was added the results were more like 5 wins, the needle has moved quite a lot.
The next planned addition – hash tables – should net another large gain. I’m not sure that I’ll ever see this kind of increase again, but I expect it will be significant.
I’m also starting to do some experimentation to devise a testing strategy. I’ll write about that in another post when I have some results to share.
Support for time controls has been added to both Prophet and chess4j. Each is capable of playing with a fixed time per move or incremental chess clocks. They can use conventional chess clocks as well, but the timing strategy doesn’t really take into account that time will be added after the time control is up (for example 40 minutes for the first 40 moves, then 5 more minutes for each 10 moves, etc.).
Another addition that was tricky to get right was the JNI support to print changes in the principal variation during a search. I ended up adding a callback function that is invoked from the search when the PV changes at the root. When using P4 as a standalone engine, that callback would be a simple “print PV” function. When invoking the P4 search from chess4j, the callback is a method in the JNI layer that calls a similar “print PV” function in the Java code. It was tricky, but works beautifully.
I mentioned in my last post that I was kicking off some self play matches with P4 using 10 second games with a 0.5 second increment. This did uncover a couple of bugs, but everything is stable now. Unlike fixed depth matches, where you would expect an exact .500 result, timed matches introduce a small amount of variability: the system becomes “busy” at times, which causes the search to stop at slightly different points. So, the result of a 4000 game match was close to, but not exactly, 0.500.
Just out of curiosity I also played a 2000 game match of P4 vs P3. There was absolutely no doubt that P3 would win this match hands down; P4 is still missing many search algorithms that are worth lots of Elo. I just wanted to test for stability and get a baseline measurement to gauge future improvements against. The result: 5 wins, 1959 losses, and 36 draws for P4! Clearly a long way to go, but search improvements are coming soon.
Next up: analysis support, ponder support in chess4j, and a Windows build of the chess4j + P4 bundle. Once those features are complete, it will be time to start re-adding those search enhancements to climb the Elo ladder. Can’t wait.
The last couple of weeks have been focused on transitioning P4 from a single threaded engine that plays to a fixed search depth to a multi-threaded engine that can respond to commands while searching, and can search for a fixed period of time.
As soon as the code to use threads for searching was complete, I immediately pulled out an old laptop, got everything up to date, and turned it loose on doing a series of fixed depth self play matches using the fantastic Cute Chess: https://github.com/cutechess/cutechess .
The first test was to run depth 1 vs depth 1, then 2 vs 2, up to 5 vs 5. Each match was 20,000 games using over 1000 starting positions. The results are below. The score itself isn’t interesting, only that the end result was a perfect .500 in each case, and that there were no crashes, stalls, etc.
Depth    Wins    Losses    Draws    Score
1        2255    2255      15490    0.500
2        7147    7147      5706     0.500
3        8913    8913      2174     0.500
4        7804    7804      4392     0.500
5        9123    9123      1754     0.500
100,000 self play games (actually 200,000 if you consider that the program played both sides) is a pretty good indication that it’s stable, but just out of curiosity I also played some fixed depth self play matches where one side was able to search deeper than the other. Naturally you would expect the side that can “see further” to have a large advantage, and indeed that is the case. The table below shows the results of a series of 20,000 game matches; each entry is the score of the player searching to the row’s depth against the player searching to the column’s depth (blank matchups were not played).
Depth    1        2        3        4        5
1        0.5      0.012    0.001    -        -
2        0.988    0.5      0.038    0.045    -
3        0.999    0.962    0.5      0.1      0.029
4        -        0.955    0.9      0.5      0.095
5        -        -        0.972    0.905    0.5
As you can see, even a 1 ply advantage is enormous, at least at very shallow depths. I believe that advantage would diminish at greater depths, but the branching factor isn’t good enough yet to run matches at deeper depths without them taking forever. Anyway, the point of this exercise wasn’t really to quantify the advantage of a larger search depth, but to test P4’s stability, and I’m happy to report it’s rock solid. P4 has played around 1 million fixed depth games at this point with 0 crashes.
I’m starting another round of testing now, using incremental clocks – 10 seconds per side with 0.5 second increment. I don’t expect any stability issues but I’d like to see that the engine never runs out of time (or at least very, very rarely).
Several months ago I wrote about a proof-of-concept JNI integration between chess4j and Prophet4. At the time I had managed to compile Prophet4 as a static library, load that library into chess4j and write a JNI layer. The only function the Prophet4 library was serving at the time was for endpoint evaluation, but it was enough to show the idea was viable. Based on the success of the proof-of-concept, I’ve decided to go “all in” with this chess4j + Prophet4 integration. And, it’s going really well.
Since the time I wrote that article, I’ve added a search function to Prophet4 with some basic heuristics, as well as an iterative deepening driver. I’ve also added some additional JNI layer code that allows chess4j to utilize Prophet’s search. The native code runs 2-3x faster, so there is a tremendous speed improvement.
I’ve also realized some benefits when it comes to testing. Just to be able to compare apples to apples, I’ve stripped a lot of chess4j code out to simplify the search to the exact equivalent of Prophet’s. It didn’t really bother me too much to do that, because I wanted to refactor it to improve the test coverage anyway. Now, I’m building both programs back up, feature by feature. I’ve found that it’s often easier to build more comprehensive tests in Java, largely due to the ease of “injecting” mocked dependencies. As a feature is implemented in chess4j and I’m satisfied with the testing, I’ll implement the same feature in Prophet4. I still add tests in P4, using Google’s test framework, but not always as comprehensive. Then, the JNI layer is leveraged to ensure that the searches traverse exactly the same tree, produce the same score, and that the iterative deepening driver produces exactly the same PV. Proving that the searches are equivalent gives a sort of transitivity argument.
So, chess4j and Prophet4 are now and probably forever linked. In some sense you could consider chess4j a Java port of Prophet4, or Prophet4 a C port of chess4j. But, there’s more to it than that. The programs serve different purposes. chess4j will be the more “feature complete” engine. It contains an opening book, where P4 does not. It will be capable of pondering, but P4 may not. Prophet4 will be the lighter weight / faster of the two and more suitable for testers that focus on engine-to-engine testing. The chess4j releases will include platform specific builds that utilize P4 as a static library, and for any platforms that aren’t included, the Java engine will still work.
For future work I’m envisioning an SMP implementation such as Young Brothers Wait in P4. P4 is also the more likely of the two to use tablebases. Since chess4j uses P4 as a static library, it will automatically benefit from those features. Macro level work such as distributed computing and learning experiments is more likely to be done in chess4j. Who knows, but for now I’m having fun. I guess that’s the main benefit.
Killer moves have been implemented. Two killer moves are kept per ply. A killer move is defined as a non-capturing move that causes a fail high. A new killer move is stored in “slot 1,” while the previous killer move in slot 1 is shifted to slot 2. Killer moves are tried after capturing moves, before any other non-capturing moves are even generated.
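A small sketch of the two-slot scheme; the array layout, the MAX_PLY constant, and the duplicate check are illustrative details, not the literal implementation.

Move[][] killers = new Move[MAX_PLY][2];

void storeKiller(int ply, Move move) {
    if (move.isCapture()) return;               // killers are quiet (non-capturing) moves only
    if (move.equals(killers[ply][0])) return;   // don't store the same move twice
    killers[ply][1] = killers[ply][0];          // previous slot 1 killer shifts to slot 2
    killers[ply][0] = move;                     // new killer goes in slot 1
}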
The next item on the agenda is to add draw checks into the search, then iterative deepening and “last PV” move ordering.