
Advanced AI software, like OpenAI’s GPT-2 language model, now powers applications such as auto-completion and writing assistance, but the same technology can also be used to produce large amounts of false information quickly.

To mitigate this risk, researchers have recently developed automatic detectors that can identify this machine-generated text.
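One common way to build such a detector is to score how statistically predictable a passage is under a language model like GPT-2, since machine-generated text tends to be unusually probable under the kind of model that produced it. The sketch below illustrates that idea; the model choice and the threshold value are illustrative assumptions, not the exact detectors the researchers studied.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(text: str) -> float:
    """Average per-token log-likelihood of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the inputs as labels makes the model return the mean
        # cross-entropy loss, i.e. the negative average log-likelihood.
        out = model(**enc, labels=enc["input_ids"])
    return -out.loss.item()

def looks_machine_generated(text: str, threshold: float = -3.0) -> bool:
    # Machine-generated text tends to be unusually probable under the
    # model family that produced it. The threshold is a placeholder;
    # a real detector would calibrate it on labeled data.
    return avg_log_likelihood(text) > threshold
```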

These detectors, however, implicitly equate machine-generated text with fake text, so they can be forced to falsely condemn entirely legitimate uses of automatic text generation.

The team devised the following strategy: instead of generating text from scratch, they drew on the abundance of existing human-written text and automatically corrupted it to alter its meaning. Because the result remains almost entirely human-written prose, a detector keyed to machine style tends to wave it through.
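A minimal sketch of what such automatic corruption could look like appears below. The substitution rules are hypothetical stand-ins, not the team's actual transformations, but they show how a few targeted edits can flip a claim's meaning while leaving the surrounding human prose intact.

```python
import random
import re

# Illustrative corruption rules; the team's real transformations may
# differ. The word pairs here are hypothetical examples.
ANTONYMS = {
    "approved": "rejected",
    "rose": "fell",
    "won": "lost",
    "increase": "decrease",
}

def corrupt(sentence: str, rng: random.Random) -> str:
    """Alter a human-written sentence so it stays fluent but becomes false."""
    # Rule 1: swap a verb for its antonym, flipping the claim's polarity.
    for word, opposite in ANTONYMS.items():
        if re.search(rf"\b{word}\b", sentence):
            return re.sub(rf"\b{word}\b", opposite, sentence, count=1)
    # Rule 2: perturb the first number, corrupting any quantitative claim.
    match = re.search(r"\d+", sentence)
    if match:
        new_value = int(match.group()) + rng.randint(1, 9)
        return sentence[:match.start()] + str(new_value) + sentence[match.end():]
    return sentence

rng = random.Random(0)
print(corrupt("The senate approved the bill with 61 votes.", rng))
# -> "The senate rejected the bill with 61 votes."
```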

“There’s a growing concern about machine-generated fake text, and for a good reason,” says CSAIL PhD student Tal Schuster.

“We need to have the mindset that the most intrinsic ‘fake news’ characteristic is factual falseness, not whether or not the text was generated by machines,” says Schuster.

“This finding of ours calls into question the credibility of current classifiers in being used to help detect misinformation in other news sources.”

With that in mind, in a second paper, the same team from MIT CSAIL used the world’s largest fact-checking dataset, Fact Extraction and VERification (FEVER), to develop systems that detect false statements.

FEVER has been used by machine learning researchers as a repository of true and false statements, matched with evidence from Wikipedia articles. 
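For a concrete sense of the data, the sketch below reads FEVER-style records: each line of the dataset's JSONL files is a claim with a label (SUPPORTS, REFUTES, or NOT ENOUGH INFO) and pointers to Wikipedia evidence. The file path here is an assumption for illustration.

```python
import json
from collections import Counter

def load_fever(path: str):
    """Yield one claim record per line of a FEVER JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Tally the label distribution; "claim" holds the statement itself and
# "evidence" points at the Wikipedia sentences that support or refute it.
labels = Counter(record["label"] for record in load_fever("train.jsonl"))
print(labels)  # e.g. Counter({'SUPPORTS': ..., 'REFUTES': ..., ...})
```

With evidence grounded in specific Wikipedia sentences, systems trained on FEVER can judge a claim's veracity directly, rather than guessing at how the text was produced.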
