
Can a machine powered by artificial intelligence (AI) successfully persuade an audience in debate with a human?

As this research develops, it’s also a reminder of the urgent need for guidelines, if not regulations, on transparency in AI — at the very least, so that people know whether they are interacting with a human or a machine.

Project Debater is a machine-learning system, meaning that it is trained on existing data.

Among such systems is the language model called Generative Pretrained Transformer (GPT), devised by OpenAI, a company based in San Francisco, California.

As AI systems become better at framing persuasive arguments, should it always be made clear whether one is engaging in discourse with a human or a machine?

It is equally important to make sure that the person or organization behind the machine can be traced and held responsible if people are harmed.

Project Debater’s principal investigator, Noam Slonim, says that IBM implements a policy of transparency for its AI research, for example making the training data and algorithms openly available.

In addition, in public debates, Project Debater’s creators refrained from making its voice synthesizer sound too human-like, so that the audience would not confuse it with a person.

Right now, it’s hard to imagine systems such as Project Debater having a big impact on people’s judgements and decisions, but the possibility looms as AI systems begin to incorporate features based on those of the human mind.
