Monday, January 6, 2014


The majority of players play their game of chess, sometimes have a chat about it afterwards, and then the scoresheet is thrown in the wastepaper basket. Only the more ambitious players also analyze their games at home. I started very early with analyzing and annotating my own games. Even before I participated in competitions, so in the mid-nineties, I already made analyses on paper of games played against computers. At that time I didn't have any contact with stronger players who could help me, so I used the same chess engines at their highest level to detect mistakes.

Since the arrival of the PC (my first dates from 1996) I obviously work with databases instead of paper. Correcting, researching, adding, saving... is many times easier with a database. Recently I noticed on chesspub the interesting question whether Chessbase is mandatory or whether it is sufficient to work with the much cheaper Fritz GUI. Well, I use exclusively the Fritz GUI, and after almost 700 annotated games I can safely state that I don't have the feeling I am missing anything critical. In a rare case, as in my blog articles problemmoves (the existence of special knight moves in the corner) or the scientific approach (record long castling), I have to call for help via my blog, but these are only nice things to know which you surely don't need to improve at playing chess.

Naturally an ambitious player will make these analyses to learn from the mistakes made and to avoid them in a next confrontation. It is an individual learning curve in which each player adopts the approach that gives the best return. For a lot of players this approach doesn't go beyond using one of the automatic tools which Chessbase presents to us. Blunder checking and full analysis have already existed for more than a decade (see the manual on chesscafe). Today we also have modern applications: Let's Check and Cloud Engines, which use the internet. However, all these automatic tools have some serious disadvantages. The output is in a very reader-unfriendly format. Also, the analyses are often limited to the direct mistakes which an engine can detect. The engine doesn't take into account what the player finds interesting. In other words, if you want more than what the automatic tools present, then you need to help yourself.

I already explained extensively in my blog article analyzing with an engine how I help, so I won't discuss that here again. What I do want to explain is how I afterwards consolidate these analyses. In the screenshot below you can see what the end result of the analysis of my game against Soors looks like.
Draft analysis Soors - Brabo

If we compare the screenshot above with the publication of the same game in my previous blog article a moral victory, then we notice a complete metamorphosis. The labyrinth of variations has been replaced by prose and reduced to some key variations (selected by myself). This process also largely follows strict guidelines which I have been using for some years now. Now what is the purpose of explaining this in an article? I admit that as long as a game with analysis and comments isn't shared with others, it is not important how you synthesize. It becomes a different story when you publish something, as you need to make it correct and easily readable for others. That this isn't always easy can be seen in the reactions of even +2300 players below my blog articles: een minithematornooi, belgische interclubs apotheose.

In the literature there is very little information about how best to annotate for an audience, likely because few players have done this. Besides, an author can have very different reasons to publish a game. Sometimes he just wants to show a nice fragment. In another case it fits into a larger story. If I announce in advance that the game has only some light comments (or you see few comments and analysis), then you may assume that I am rather telling a story (mainly for all the games which I didn't play myself). I believe only a minority of publications (so anything available, not restricted to my blog) include serious analysis of complete games, as it is very time-consuming and often leads to disputes. Commenting is criticizing, which unavoidably creates conflicts. However, a conflict doesn't need to be something bad, as it can often be a catalyst for refreshing new views.

The British grandmaster John Nunn and the German grandmaster Robert Huebner have in the past tried to explain how they use annotations when commenting games, as can be read on Wikipedia. Unfortunately we can't reuse this, as today it is still impossible for over 99% of positions to make an exact evaluation. Precisely because it is so difficult to make an exact evaluation, an analysis remains for a big part subjective. As I try to stress as much as possible the scientific part in the analysis (just as in my games, see the scientific approach), I developed a method to eliminate this subjective aspect as much as possible.

The trick is to replace myself by engines when the positions need to be evaluated and the annotations added. Engines are today many times stronger than ourselves, so it sounds perfectly acceptable to me to prefer their evaluations above our own incomplete and subjective assessments. On top of that, we get the big advantage that every chess player can achieve the same results if the same hardware and software are used. Now I immediately have to add that defining the right engine evaluation and the corresponding annotation is a bit more complicated than just reading something from the screen. Some further explanation is therefore necessary.

Today each engine shows, next to an evaluation in hundredths of a pawn, also an evaluation sign.

I use the same logic with some important adaptations. First, I always replace = by unclear unless we have a tablebase or another 100% drawn position. I want to make a clear distinction between a balanced position and a real draw. Next, as I always use 2 engines (see e.g. my blog article analyzing with an engine), I make a calibrated choice between both. I mean: if both programs show a different evaluation sign for the same position, then I always follow the evaluation sign which is closest to unclear. In my blog article about stockfish I mentioned that the evaluations shown are often optimistic, which means today I mainly use the more classical evaluation of houdini (most exceptions are, not surprisingly, in the endgame). My feeling is that a more conservative evaluation corresponds better with the real winning chances in chess, but I have no strong evidence to prove this. By the way, recently I noted that other players also prefer to read a more calibrated evaluation from their engines, which is exactly one of the newest features advertised today by Chessbase for Houdini. Finally, I always correct a winning evaluation to big advantage in an endgame (maximum 4 pieces excluding kings) when it is clear that a shoot-out of the position doesn't lead to a win. It regularly happens that an engine doesn't succeed in increasing the advantage above 5 points in a shoot-out despite an initial advantage above 1.4 points.
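The mapping from centipawn scores to signs and the calibrated choice between two engines can be sketched as below. Note that the centipawn thresholds are my own assumption (roughly the usual conventions), not values given in this article; only the "pick the sign closest to unclear" rule is taken from the text.

```python
# Evaluation ladder, ordered from white winning to black winning.
# "unclear" replaces "=" unless the position is a proven draw.
LADDER = ["+-", "+/-", "+/=", "unclear", "=/+", "-/+", "-+"]

def sign_from_centipawns(cp):
    """Map an engine score (centipawns, from white's view) to a sign.
    Thresholds are assumed, not prescribed by the article."""
    if cp >= 150:  return "+-"
    if cp >= 70:   return "+/-"
    if cp >= 25:   return "+/="
    if cp > -25:   return "unclear"   # '=' reserved for tablebase/forced draws
    if cp > -70:   return "=/+"
    if cp > -150:  return "-/+"
    return "-+"

def calibrated_sign(cp_engine1, cp_engine2):
    """When the two engines disagree, keep the sign closest to 'unclear'."""
    s1 = sign_from_centipawns(cp_engine1)
    s2 = sign_from_centipawns(cp_engine2)
    mid = LADDER.index("unclear")
    # smaller distance to the middle of the ladder = more conservative sign
    return min((s1, s2), key=lambda s: abs(LADDER.index(s) - mid))
```

For example, if one engine reports +1.60 (+-) and the other +0.90 (+/-), the calibrated choice is +/-, the more conservative of the two.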

Assigning the annotations once the evaluations are fixed is easy. A worse move causing a drop of 1 step on the evaluation ladder gets ?!. Examples are from +/- to +/= for white or from unclear to +/= for black. A worse move which drops 2 steps on the evaluation ladder gets ?. Examples are unclear to -/+ for white or -+ to =/+ for black. Finally, all worse moves with a drop of 3 or more steps on the evaluation ladder get ??. Examples of this are +- to unclear for white or -/+ to +/- for black. Further, I assign !? to moves which I want to stress as interesting in this specific position. I have not been using the ! for some years already, as it is too subjective (I share the approach of the German grandmaster Robert Huebner, who also avoids emotions, in contrast to former world champion Garry Kasparov, who loves using exclamation marks in his books). Nevertheless I do use the ! to stress computer moves which are only moves and have been missed earlier by a player or an engine (so when only 1 of the 2 engines shows the correct move).
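These drop rules are mechanical enough to write down directly. A minimal sketch, using the same evaluation ladder and the examples from the text (the function name and signature are of course my own, purely for illustration):

```python
# Annotation from the size of the drop on the evaluation ladder:
# 1 step -> ?!, 2 steps -> ?, 3 or more steps -> ??.
LADDER = ["+-", "+/-", "+/=", "unclear", "=/+", "-/+", "-+"]

def annotation(sign_before, sign_after, white_to_move):
    """Return '?!', '?' or '??' depending on the drop, else ''."""
    i = LADDER.index(sign_before)
    j = LADDER.index(sign_after)
    # a worsening for white moves down the ladder, for black up
    drop = (j - i) if white_to_move else (i - j)
    if drop <= 0:
        return ""          # no worsening: no negative annotation
    return {1: "?!", 2: "?"}.get(drop, "??")
```

Running the examples from the paragraph above: +/- to +/= for white gives ?!, unclear to -/+ for white gives ?, and +- to unclear for white gives ??.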

Thanks to these strict self-imposed rules, we get a very objective analysis of the game. However, there are also some disadvantages connected with this method. In a rare case it can happen that a mistake of 1 hundredth of a pawn is already punished with ?!, as we just drop 1 step on the evaluation ladder. If we compare this with drops which are 100 times bigger but not punished with a negative annotation (as the position is still considered winning), then I can imagine that some people find this unfair. In such a case ?! should rather be seen as a signal that we pass a threshold than as a mistake in the classical sense. Sometimes it is very hard, even after elaborate analysis, to define which worse move exactly had an impact on the final score.

Today it rarely happens that we disagree about the evaluation of a certain position, as we have all become very dependent on engines. Nevertheless there was a small discussion on my blog about my last interclub game played last season, see the position below.
Rab1 !? or ?!
I didn't comment on this position as I didn't notice a drop in the evaluation. Anyway, I put less time into analyzing alternatives once we are out of the opening, as otherwise an analysis would take too much time. Now I do agree with Glen that after the game continuation white mainly plays for a draw, while after b5 all 3 results rather remain available. By the way, recently somebody asked on the Quality Chess Blog whether it is possible for a top engine like Komodo, which recently became world champion, to choose moves which avoid the draw (so playing for 3 results). Larry Kaufman answered that the priority of the program is to play the moves which are best according to the algorithm used, and not what could provide the best winning chances (there is no direct link between both). I just mention this to illustrate that Glen's vision of how to play chess is shared by a lot of (most?) players.

A similar criticism of my analysis is that I sometimes don't take sufficiently into account that there is more to a game of chess than just the strength of the moves. Kara earlier already rightly claimed that my annotations don't capture all the practical chances. Now I do think that I have shown multiple times on my blog that I am aware there is more to chess than just playing correctly, see e.g. playing the person. Especially in the games which I played myself, I try as much as possible to compensate with prose for what the annotations leave out.

There are still a few smaller things which could be explained, but I believe this article should largely suffice to better understand in the future how to interpret my analysis correctly. I welcome players to use the same or a similar methodology, but of course anybody is free to act as they wish. A large advantage of the diversity in publications is that you can get different interesting views of the same game.



  1. Interesting that you let the final evaluation be done by the computer. A few months ago I read Axel Smith's _Pump Up Your Rating_, and I have to paraphrase from my bad memory here. He also builds large trees of analysis that he later prunes down to shorter variations and explanations in words. He also uses the computer extensively to analyze and follows its lines (as well as his own ideas), BUT: all final evaluations must be his own!

    Of course opening analysis is different in that it is mostly about deciding which moves you want to play in the future, and for that it's important that you, not only the computer, like them and understand them. But it's a striking difference with your method.

  2. There is no right or wrong in this discussion. However, if the readers don't understand the author, then there is a problem.
    Very likely Axel (like most others) annotates from the sporting point of view, which is a perfectly understandable choice as a game is a sporting encounter. Computers are still not able to evaluate the sporting nuances in our game.