Think of cheating in online chess tournaments like a virus, and catching a cheater is like curing the disease … without contact tracing, masks and social distancing, no amount of ‘cure’ is going to help.
What should our goals be?
- Educate players
- Create an environment of Trust
- Assume, then ensure 100% Fair Play
- Change behaviours
All evidence should be collected with a view to proving Fair Play, not picking out the cheaters. Starting with the question “Is this player Fair?” rather than “Is this player a Cheat?” gives you a different burden of proof, and the atmosphere and energy are completely different.
What evidence should be collected?
TYPES OF EVIDENCE
Ranked from best to worst
- Physical evidence
- Observed evidence
- Statistical analysis tools
- Expert opinion
- Non-expert opinion
- Opinion of those most interested (coaches, parents, friends)
If you are using a human ‘expert’ to provide feedback on the moves of a game:
- Ask “Can you help us to justify declaring this game as Fair Play?”
Our job is to find any excuse possible to let the player go free (because our baseline assumption must be that All Players are Fair; it is this belief that builds trust). If we cannot find any evidence and the human “expert” tells you “Sorry, I failed you – I cannot find any evidence this player is Fair” … then we must reluctantly admit that our environment and supervision have failed and declare the game as unfair.
Asking an expert “Is this person cheating?” is a bad idea … then they are looking for cheating, and a “good result” becomes finding what you were looking for.
What is Success?
We need a clear definition of success … if some “smart cheaters” get through our net, but we end with a community that is more trusting, that is a win. Catching all cheaters, but ending with a suspicious and fearful community, is a huge failure. Surely we’ve all seen enough dystopian movies to know this?
The mistake I see arbiters making is treating Fair Play as an intellectual challenge. Yes, statistics can help identify players who may be using a chess engine to select their moves, but at the end of the day this is a HUMAN problem. It is caused by human behaviour and will be solved by human behaviour.
You will never be certain based entirely on statistics. And, if your goal is not to ‘catch cheaters’ but instead to create an event where everyone trusts one another and has a positive experience, then you probably wouldn’t want to use statistics to catch players anyway.
You should be aiming for:
- Early intervention
- Behaviour change
I’ve seen a 12-year-old girl (FIDE rating in the 1200s) destroy a titled player, scoring an 80% move match in the game (17 consecutive “perfect moves”) with a 15 CPL for the game. There was no observed evidence of cheating during supervision and no statistical concern over the longer term. She was disqualified for that game.
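For readers unfamiliar with the two statistics quoted above, here is a minimal sketch of how they are commonly defined. This is illustrative only – it is not Tornelo’s or any arbiter’s actual method, and all moves and evaluations below are invented for the example.

```python
# Hedged sketch: common definitions of "move match" and "centipawn loss" (CPL).
# All data below is made up for illustration; this is NOT any platform's
# actual fair-play algorithm.

def move_match_rate(player_moves, engine_top_moves):
    """Fraction of the player's moves that equal the engine's first choice."""
    matches = sum(p == e for p, e in zip(player_moves, engine_top_moves))
    return matches / len(player_moves)

def average_centipawn_loss(best_evals, played_evals):
    """Mean drop (in centipawns) from the engine's best evaluation to the
    evaluation after the move actually played; clamped at zero per move."""
    losses = [max(0, b - p) for b, p in zip(best_evals, played_evals)]
    return sum(losses) / len(losses)

# Toy example: 4 of 5 moves match the engine's top choice.
player = ["e4", "Nf3", "Bb5", "O-O", "d4"]
engine = ["e4", "Nf3", "Bb5", "Ba4", "d4"]
print(move_match_rate(player, engine))  # 0.8, i.e. an "80% move match"

# Toy evaluations (centipawns): small drops give a low CPL.
print(average_centipawn_loss([30, 25, 40, 35], [30, 20, 40, 20]))  # 5.0 CPL
```

A low CPL and a high match rate are exactly the kind of numbers that trigger suspicion, which is why the article’s point matters: the statistics describe the moves, not the human who played them.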
The arbiters and Fair Play panel went through a rigorous intellectual exercise before coming to the decision, including contemplating Tornelo stats, deliberating over Ken Regan screening and deeper analysis, reflecting on a Grandmaster review of the game and speculating during long discussions about the likelihood of a 1200-rated player making these moves.
But they failed completely. Their decision and process created more mistrust and fear; it established an “us against them” mentality and in the long run will create players who feel they have no choice but to cheat.
They failed because they had the wrong goal. They set out to Catch Cheaters and to Prevent Cheating. This meant they asked the wrong question – “Is this person cheating?” – and used an inside-out scoreboard (their measure of success was “Number of Cheaters Caught”).
With this framework in place there was never a possibility of a Fair Play tournament – because that would mean the Arbiters and Fair Play supervisors had failed … they would have caught no cheaters.