New research findings from a Chinese university offer an insight into why generative natural language processing models such as GPT-3 tend to ‘cheat’ when asked a difficult question, producing answers that may be technically correct but reflect no real understanding…