Still, systems like the board-game champion AlphaZero and the increasingly convincing fake-text generator GPT-3 have stoked the flames of debate regarding when humans will create an artificial general intelligence—machines that can multitask, think, and reason for themselves.
“Most of the value that’s being generated by AI today is returning back to the billion-dollar companies that already have a fantastical number of resources at their disposal,” says Karen Hao, MIT Technology Review’s senior AI reporter.
In reality, today's AI systems are just copying machines: algorithms that have learned to do specific things by being trained on thousands or millions of correct examples.
Can we create actual intelligence: machines that can independently think for themselves, and think and do things the way humans can? The way a transformer algorithm deals with language is by looking at millions or even billions of examples of sentences, of paragraph structure, maybe even of code structure.
So, what OpenAI did with GPT-3 is train it not just on more examples of words from corpora like Wikipedia, articles from the New York Times, Reddit forums, and so on; they also trained it on sentence patterns and paragraph patterns, looking at what makes sense as an intro paragraph versus a conclusion paragraph.
So, transformers are a kind of self-supervised technique, where the algorithm is not told exactly what to look for in the language.
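To make "self-supervised" concrete, here is a minimal toy sketch in Python. It is not a transformer and is nothing like GPT-3's scale; it just shows the core idea that the training signal comes from the text itself, with each word's "label" being simply the word that follows it, and no human-written annotations at all. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus; real systems train on billions of words.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Self-supervision: build (current word -> counts of next word)
# pairs from the raw text alone. No one labels anything by hand;
# the text provides its own targets.
bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen during training."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # "on" — the only word that ever followed "sat"
```

A transformer replaces these simple bigram counts with a neural network that attends to long-range context, but the objective is the same flavor: predict the next token from the tokens before it.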