Roland Walker (BBC) talking with David Edmonds. The context is how Chess.com monitors cheating.
“We can’t stress this enough: humans and computers play utterly differently. Humans play by planning and recognizing patterns. Computers play in unusual ways; an engine forgets everything it knew between every move. A computer doesn’t really have a plan.
“An engine will take back a previous move if it realizes that in the context of the following moves it wasn’t good. A human has a kind of sticky feeling about their plan.”
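That statelessness is easy to see in miniature. Here's a hedged sketch (a toy game, not Chess.com's detection system or a real chess engine): a "engine" for the subtraction game, where players alternately take 1, 2, or 3 from a shared total and whoever takes the last point wins. The solver takes only the current total as input, with no memory of its earlier moves, so it happily "takes back" any prior plan.

```python
# Toy illustration of a stateless engine (assumed example, not from the post):
# each call recomputes the best move from the current position alone.

def wins(total):
    """True if the player to move can force a win from `total`."""
    if total == 0:
        return False  # the previous player took the last point and won
    return any(not wins(total - m) for m in (1, 2, 3) if total - m >= 0)

def best_move(total):
    """Return a subtraction (1-3) leaving the opponent in a losing position.
    Note the function's only input is `total`: no plan survives between calls."""
    for move in (1, 2, 3):
        if total - move >= 0 and not wins(total - move):
            return move
    return 1  # every move loses; pick one arbitrarily
```

From 21, `best_move(21)` returns 1, steering toward a multiple of 4 (a losing total for the opponent). A human might stick with a plan made three moves ago; this solver cannot, because nothing persists between calls.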
Chess engines make people better at chess, and good players use them to practice, if not to play. It’s Tyler Cowen’s idea of meta-rationality (more here): using the right resources for the problem at hand.
Computers are good because they compute without bias (kinda) and avoid human mistakes like sunk cost. As Mohnish Pabrai pointed out, “when we spend a lot of time on something, we feel we should get something in return for that time. It’s a danger if you say, ‘I’m going to research a company and decide if I want to invest or not.’ I think you’re better off researching a company with no such preconceived notion.”
This week my daughters (12, 10) and I watched both Sherlock (also BBC) and Enola Holmes (Netflix, we loved it). In both the episode and the movie, the characters had to be more objective to solve the crime.
However, it’s not about going full-Sherlock so much as moving in that direction. Like someone training to gain or lose weight, the goal isn’t to become extremely strong or skinny but to be better than the current state.
Meta-rationality, then, is under-indexed, unless of course it’s outlawed, as engine assistance is in chess.
h/t Cowen-kinda-queue, a podcast feed of Marginal Revolution mentions.