
New research from a Chinese university offers insight into why generative natural language processing models such as GPT-3 tend to ‘cheat’ when asked a difficult question, producing answers that may be technically correct but reached without any real understanding of why they are correct, and why such models demonstrate little or no ability to explain the logic behind their ‘easy’ answers.

A second problem is that although recent research initiatives have studied AI’s tendency to ‘cheat’ in this way, and have identified the phenomenon of ‘shortcuts’, there has until now been no effort to classify shortcut-enabling material in a contributing dataset, which would be the logical first step in addressing what may prove to be a fundamental architectural flaw in machine reading comprehension (MRC) systems.

The new paper, a collaboration between the Wangxuan Institute of Computer Technology and the MOE Key Laboratory of Computational Linguistics at Peking University, tests various language models against a newly annotated dataset which includes classifications for ‘easy’ and ‘hard’ solutions to a possible question.
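As a rough illustration only (not the paper’s actual annotation schema), such a dataset might pair each question with labels indicating whether it can be answered by a surface-level shortcut, such as simple word matching, or whether it demands genuine comprehension of the passage. The field names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AnnotatedQAExample:
    """Hypothetical MRC example carrying shortcut/comprehension labels."""
    passage: str                   # the context the model reads
    question: str                  # the question posed about the passage
    answer: str                    # the gold answer span
    shortcut_answerable: bool      # can a surface trick (e.g. word matching) locate the answer?
    comprehension_required: bool   # does a correct answer require reasoning over the passage?

# Example: the answer can be found purely by matching the question's keywords
# to the only nearby person name -- a classic shortcut, no comprehension needed.
example = AnnotatedQAExample(
    passage="Marie Curie won the Nobel Prize in Physics in 1903.",
    question="Who won the Nobel Prize in Physics in 1903?",
    answer="Marie Curie",
    shortcut_answerable=True,
    comprehension_required=False,
)
```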

The researchers contend that MRC training datasets tend to contain a high proportion of questions answerable by shortcuts, which encourages trained models to rely on shortcut tricks rather than genuine comprehension.
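A minimal sketch of how such a skew could be quantified, assuming a dataset already carries shortcut annotations like the illustrative ones above (again, the field name is an assumption, not the paper’s):

```python
def shortcut_proportion(dataset):
    """Return the fraction of examples that a surface-level trick can answer.

    `dataset` is assumed to be an iterable of AnnotatedQAExample-like objects
    with a boolean `shortcut_answerable` field. A high value suggests a model
    trained on this data can score well without genuine comprehension.
    """
    examples = list(dataset)
    if not examples:
        return 0.0
    return sum(ex.shortcut_answerable for ex in examples) / len(examples)

# e.g. shortcut_proportion(train_set) -> 0.8 would mean 80% of training
# questions reward shortcut tricks over actual reading comprehension.
```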

Regarding some of the architectural reasons why shortcuts are so readily prioritized in NLP training workflows, the authors observe that MRC models may learn these shortcut tricks in the course of training.
