Our first experience with online cheating was traumatic, to say the least.
In the early Covid lockdowns we set up an online chess tournament, not expecting much, and were blown away when we had to close entries two weeks later, after 300 players had paid their entry fee!
It was an online junior event in age groups from Under 8 to Under 18.
- zero prize money
- no prizes at all, not even trophies
- players in teams
- using real names
This was as low-stakes an event as you can possibly imagine.
We saw no incentive for anyone to cheat. There were no prizes to be won, no rating points to be gained, and the teams concept was new, so there was very little prestige either. A bunch of the players knew each other too, which I believed would help.
There was also a huge apparent downside. We were using real names, so players’ real reputations were on the line: coaches would find out, team-mates would know. Why risk the embarrassment? Why take a chance at devastating your reputation in a tiny community?
We naively assumed that with high risk and low reward, real names and live arbiters, there would be no cheating.
We couldn’t have been more wrong!
After 5 rounds, accusations started flying. Coaches’ stress levels rose, and I buried my head in Stockfish and PGN Spy and spent hundreds of hours analysing games. All of a sudden it went from the lowest priority for Tornelo to the highest. It was a genuine problem for tournament organisers and needed attention!
Not only did I dive into the game analysis, I also spent dozens of hours on the phone with parents trying to explain what we were doing, listening to their reactions and seeing how players justified their actions, or defended their innocence.
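The core of that game analysis is comparing the moves a player actually chose against the moves an engine prefers. As a minimal sketch (not the actual Tornelo or PGN Spy implementation), assuming you have already run an engine such as Stockfish over each position and recorded its top choice:

```python
# Hypothetical sketch: engine-match rate from pre-computed move lists.
# Assumes the engine's top choice for each position was recorded earlier;
# all names and data here are illustrative, not real tournament output.

def match_rate(played_moves, engine_moves):
    """Fraction of moves where the player chose the engine's top move."""
    if not played_moves:
        return 0.0
    matches = sum(1 for p, e in zip(played_moves, engine_moves) if p == e)
    return matches / len(played_moves)

# Example: a player matching the engine on 9 of their 10 moves
played = ["e4", "Nf3", "Bb5", "Ba4", "O-O", "Re1", "Bb3", "c3", "h3", "d4"]
engine = ["e4", "Nf3", "Bb5", "Ba4", "O-O", "Re1", "Bb3", "c3", "h3", "d3"]
print(match_rate(played, engine))  # 0.9
```

A raw match rate on its own proves nothing — strong players match engines often anyway — which is exactly why the statistical modelling described below matters.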
The weirdest thing was that some of the strongest players were getting assistance! In some cases a player rated 1000 points above their opponent still used an engine to play almost every move! Why?
We couldn’t understand.
Of course, that was May; it’s now August, and three months later we are much wiser.
Firstly, a huge thanks to Professor Kenneth Regan, the absolute world expert at detecting assistance in chess games. I can’t even imagine how much work he has done supporting the entire chess world through these last 6 months.
Ken Regan provided us with a statistical analysis of the event. Of course, it didn’t say explicitly who had cheated, but it gave us a lot of information to work with. Without this report I wouldn’t have been able to make any sensible decisions.
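The intuition behind this kind of report can be illustrated with a toy model. Regan’s actual method is far more sophisticated — it predicts per-move probabilities from a player’s rating and the position — but a crude binomial version, with hypothetical numbers, shows how an observed match rate turns into a standard score:

```python
import math

def z_score(matches, n_moves, expected_rate):
    """Standard score of observed engine-matching versus what a player's
    rating would predict, under a simple binomial model. This is a toy
    illustration only, not Regan's actual model."""
    expected = n_moves * expected_rate
    std = math.sqrt(n_moves * expected_rate * (1 - expected_rate))
    return (matches - expected) / std

# Hypothetical: a junior expected to match ~55% of engine choices,
# observed matching 180 of 250 moves across the whole event.
print(round(z_score(180, 250, 0.55), 2))  # 5.4
```

A z-score that large is astronomically unlikely to occur by honest play, which is why such a report gives an arbiter something concrete to work with even though it never names a cheater outright.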
Ultimately (2 weeks after the event ended) I made the decision to remove 10 players’ results.
- 4 of the 10 players admitted to receiving assistance and apologised
- 5 players declared their innocence and accepted the decision
- 1 player protested, but not too much
The players removed were:
- 2 aged 15-18
- 6 aged 13-14
- 2 aged 11-12
- None younger than 11
Shortly afterwards I suspected that I’d made a mistake: one of those 10 players had been incorrectly removed. I had looked too closely at individual game statistics and hadn’t trusted the “big picture” statistics enough. The power of large numbers!
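That “power of large numbers” point is worth making concrete. The uncertainty in an observed match rate shrinks with the square root of the number of moves, so one game tells you very little while a whole event tells you a lot. A small sketch with illustrative numbers:

```python
import math

# Standard error of a match-rate estimate over n moves, assuming a
# simple binomial model with true match probability p (illustrative only).
def std_error(p, n):
    """Uncertainty in an observed match rate measured over n moves."""
    return math.sqrt(p * (1 - p) / n)

one_game = std_error(0.55, 40)      # roughly one game's worth of moves
whole_event = std_error(0.55, 400)  # roughly a whole event's worth
print(round(one_game, 3), round(whole_event, 3))  # 0.079 0.025
```

With a single game, an honest player can easily drift 8 percentage points above their expected match rate by chance; over ten games that noise is three times smaller, which is why per-game judgements mislead.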
Just last week Tornelo released a Fair Play assessment tool and I thought I’d go back and test my decision making from 2 months ago against the new tool.
This is the report generated by Tornelo (available to arbiters in real time, round-by-round).
All players who were removed were anonymised with a repeated random letter. Players who were not removed appear unaltered.
I’m very excited to see that the top 9 players on this Tornelo report were all players whom I had chosen to remove. As expected, the 10th is way down the list — an unfortunate confirmation that I got that one wrong.
This provides reassurance that my decisions at the time were good, but it also gives me confidence in the Tornelo Fair Play report. Instead of spending hundreds of hours poring over game analysis and agonising over a decision, I could have used this in real time during the event!
Next time I’ll be able to flag suspicious behaviour while the event is in progress and stop cheating before it starts, saving everyone time and heartache!