How To Use AI To Get Better At Chess
The hard part of the ultimate Chess coach has been done already
Leela Odds is a superhuman chess AI designed to beat humans despite ludicrous odds. I’m a decent player and I struggle to beat it with two extra rooks. Playing it is fun for sheer entertainment value: Leela Odds plays like the most obnoxious troll club player you’ve ever run into, more like a street hustler than something superhuman. Getting beaten this way is humiliating, but it also seems to teach a lot about playing principled chess, in a way which raises questions about objectivity, free will, and pedagogy.
Most computer chess evaluations suffer from being deeply irrelevant to human play. When decently strong humans review games with computer evaluation as a reference they talk about ‘computer lines’, meaning insane tactics which no human would ever see, which probably wouldn’t be a good idea for you to play even after being told they work out in the end, and which say even less about your more general chess understanding. There’s also the problem that the only truly objective evaluation of a chess position is one of three values: win, lose, or draw. One move is only truly better or worse than another when it crosses one of those thresholds. If a chess engine is strong enough it can tell that a bunch of different moves are all equivalent and play one of them at random. Current engines already do that for what appear to be highly tactical positions which are objectively dead drawn. The only reason their play bears any resemblance to normal in those positions is that they follow the tiebreak rule of playing whichever move looked best before they searched deeply enough to realize all the moves are equivalent.
So here’s the issue: when a computer gives an evaluation, it isn’t something truly objective or useful; it’s an estimate of its chances of winning the given position against an opponent of equal superhuman strength. But what you care about is something more nuanced: what is the best move for me, at my current playing strength, to play against my opponent, at their playing strength? That question has a more objective answer. Both you and your opponent have a probability distribution over the moves you’ll play in each position, so across many playouts of the same position you have some well-defined chance of winning.
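To make that idea concrete, here’s a rough sketch of what ‘chance of winning across many playouts’ means computationally. It uses the python-chess library, and the move_distribution function here is only a placeholder (uniform over legal moves) standing in for a real model of each player’s strength:

```python
# Sketch: win probability is well defined once both players are modeled as
# move distributions. move_distribution() is a placeholder; a real version
# would depend on the player's rating. Requires the python-chess package.
import random
import chess

def move_distribution(board: chess.Board, rating: int) -> dict[chess.Move, float]:
    """Placeholder policy: uniform over legal moves, ignoring the rating."""
    moves = list(board.legal_moves)
    return {m: 1.0 / len(moves) for m in moves}

def estimate_white_win_probability(fen: str, white_rating: int, black_rating: int,
                                   playouts: int = 1000, max_plies: int = 200) -> float:
    """Estimate White's winning chances by sampling playouts from both policies."""
    score = 0.0
    for _ in range(playouts):
        board = chess.Board(fen)
        for _ in range(max_plies):
            if board.is_game_over():
                break
            rating = white_rating if board.turn == chess.WHITE else black_rating
            policy = move_distribution(board, rating)
            move = random.choices(list(policy), weights=list(policy.values()))[0]
            board.push(move)
        result = board.result(claim_draw=True)
        if result == "1-0":
            score += 1.0
        elif result in ("1/2-1/2", "*"):
            score += 0.5  # count drawn or unfinished playouts as half a point
    return score / playouts

print(estimate_white_win_probability(chess.Board().fen(), 1800, 1800, playouts=50))
```

With real policies plugged in, the number this returns is the quantity an evaluation tailored to two specific players should be estimating.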
This is the reality which Leela Odds already acknowledges. Technically it’s only looking at ‘perfect’ play for its own side, but in heavy odds situations like the ones it plays, the objectively best moves are barely affected by the disadvantaged side’s strength anyway, because the only way a weaker player can win is to get lucky by happening to play nearly perfect moves. And here we’re led to what I think is the best heuristic anyone has ever come up with for how to play good, principled, practically winning chess: you should play the move which Leela Odds thinks makes its chances against you the worst. The version of you playing right now has free will and can look ahead and work out tactics, but the version of you playing in the future cannot, and is limited to working out tactics with only some probability of success. You can take advice from the bot about which moves are the most principled and give you the best practical chances, assuming the rest of the game will be played out by your own non-free-will-having self. Everybody has free will but nobody can prove it to anybody else, not even to themselves in the past or the future. The realization that your own mental processes are simply a probability distribution does not give you license to sit around eating nothing but chocolate cake and scrolling on your phone all day while you wait for your own brain to kick in and change your behavior.
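As a sketch (not an actual integration with Leela), the selection rule boils down to something like this, where engine_win_probability is a toy stand-in for a real query to the bot about its own winning chances from a given position:

```python
# Sketch of the heuristic: for each of your legal moves, ask the engine how
# good *its own* chances against you would be in the resulting position, and
# play the move that makes those chances worst. engine_win_probability is a
# toy placeholder (the bot's mobility) just so the sketch runs.
import chess

def engine_win_probability(board: chess.Board) -> float:
    """Placeholder: pretend the bot's chances scale with its mobility."""
    return len(list(board.legal_moves)) / 100.0

def most_principled_move(board: chess.Board) -> chess.Move:
    """Pick the move that minimizes the bot's estimated winning chances against you."""
    best_move, best_prob = None, float("inf")
    for move in board.legal_moves:
        board.push(move)                      # try the candidate move
        prob = engine_win_probability(board)  # bot's chances from the new position
        board.pop()                           # undo
        if prob < best_prob:
            best_move, best_prob = move, prob
    return best_move

print(most_principled_move(chess.Board()))
```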
Philosophical rant aside, this suggests a very actionable thing for making a better chess tutor: you should be told Leela Odds’s evaluation of all available moves so you can pick out the best one. The scale here is a bit weird. In an even position it will say things like ‘your chances of winning in this position are one in ten quadrillion, but if you play this particular move they improve to one in a quadrillion’. But the relative values do mean a lot, and greater ratios mean more, so some reasonable interface could be put on it. I haven’t worked out what that interface might be. This approach may break down in a situation where you’re in an objectively lost position instead of an objectively won one and you should be playing tricky troll moves yourself. That seems to matter less than you might think, and could be counteracted by reverting to a weaker version of Leela Odds, one which can’t work out the entire rest of the game, once it gets into such a position.
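Purely as an illustration of one possible interface (this is an assumption on my part, not a worked-out design): since the raw numbers are unreadable but the ratios carry the meaning, each move could be shown on a log scale relative to the best available move.

```python
# Sketch: map each move's tiny win probability to log10(best / this), so the
# best move scores 0.0 and larger numbers mean more orders of magnitude worse.
import math

def relative_move_scores(move_probs: dict[str, float]) -> dict[str, float]:
    """Score each move relative to the best one on a log10 scale."""
    best = max(move_probs.values())
    return {move: math.log10(best / p) for move, p in move_probs.items()}

# Hypothetical example: an 'even' position where every move still loses almost surely.
evals = {"Nf3": 1e-15, "d4": 8e-16, "h4": 1e-18}
for move, score in sorted(relative_move_scores(evals).items(), key=lambda kv: kv[1]):
    print(f"{move}: {score:.1f}")  # 0.0 is the best move; bigger numbers are worse
```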
So far no one is building this. Everybody uses Stockfish for evaluation, which suggests a lot of lines you could have played if you were it, but of course you’re not, and which is overly dismissive of alternative lines that would have been perfectly fine against your actual non-superhuman opponent. Somebody should build this. In the meantime, if you want to improve your chess you’re stuck getting humiliated by Leela Odds even when you’re in what seem to be impossible-to-lose situations.


I feel like what we really want is a per-position model that takes the rating of the player into account. Rather than asking "what move wins this position" you try to predict "what would an 1800 do in this position". And then you can have a second model of "which move would most likely lead the current player to victory, if the position were played out by two players of rating X". It seems like you should be able to train these effectively given a large dataset of games with attached ratings. Maybe you don't even need to do any tree search, just modern neural net techniques.
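As a rough sketch of what the first of those models might look like, assuming a standard 12-plane board encoding and a flat from-square/to-square move index (the layer sizes and the rating encoding are placeholders, not a worked-out architecture):

```python
# Sketch of a rating-conditioned move-prediction model in PyTorch. Training
# would use cross-entropy against the move actually played in rated games,
# with logits masked to legal moves.
import torch
import torch.nn as nn

NUM_MOVES = 64 * 64  # from-square x to-square, ignoring promotions for brevity

class RatingConditionedPolicy(nn.Module):
    """Predicts a move distribution for a player of a given rating in a position."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.board_encoder = nn.Sequential(
            nn.Conv2d(12, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, hidden), nn.ReLU(),
        )
        # The rating enters as a single normalized scalar; an embedding of
        # rating buckets would work just as well.
        self.head = nn.Sequential(
            nn.Linear(hidden + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_MOVES),
        )

    def forward(self, boards: torch.Tensor, ratings: torch.Tensor) -> torch.Tensor:
        """boards: (B, 12, 8, 8) piece planes; ratings: (B,) Elo. Returns move logits."""
        features = self.board_encoder(boards)
        rating_feature = (ratings.float().unsqueeze(1) - 1500.0) / 400.0
        return self.head(torch.cat([features, rating_feature], dim=1))

# Forward pass on dummy data.
model = RatingConditionedPolicy()
logits = model(torch.zeros(2, 12, 8, 8), torch.tensor([1800, 2200]))
print(logits.shape)  # torch.Size([2, 4096])
```

The second model could reuse the same board encoder with a value head trained on game outcomes, or be approximated by playing positions out with two copies of the rating-conditioned policy.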