
AI researchers at Google, the company that set the bar for search engines in the first place, are sketching out a blueprint for what might be coming up next.

They say large language models—machine learning algorithms like OpenAI’s GPT-3—could wholly replace today’s system of index, retrieve, then rank.

Today's search engines surface sources that contain at least pieces of an answer, but the burden is on the searcher to scan, filter, and read through the results to piece that answer together as best they can.
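That index-retrieve-then-rank pipeline can be sketched in a few lines of Python. Everything below is illustrative: the toy corpus, the `search` function, and the crude term-overlap ranking are stand-ins with none of the scale or sophistication of a real engine, but the shape of the pipeline is the same.

```python
from collections import defaultdict

# A toy corpus standing in for the web; the documents are invented for illustration.
DOCS = {
    "doc1": "large language models generate natural language answers",
    "doc2": "search engines index documents and rank them by relevance",
    "doc3": "gpt-3 is a large language model trained by openai",
}

# Index: map each term to the documents that contain it (an inverted index).
index = defaultdict(set)
for doc_id, text in DOCS.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    """Retrieve candidate documents for the query, then rank them."""
    terms = query.lower().split()
    # Retrieve: any document that contains at least one query term.
    candidates = set().union(*(index[t] for t in terms if t in index))
    # Rank: here, crudely, by how many query terms each document contains.
    return sorted(
        candidates,
        key=lambda d: sum(t in DOCS[d].split() for t in terms),
        reverse=True,
    )

# The result is a ranked list of documents, not an answer.
print(search("large language models"))
```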

Though they have their own shortcomings, large language models like GPT-3 are much more flexible and can construct novel replies in natural language to any query or prompt.

The Google team suggests the next generation of search engines might synthesize the best of all worlds, folding today’s top information retrieval systems into large-scale AI.

It’s worth noting machine learning is already at work in classical index-retrieve-then-rank search engines.

The team asks: “What would happen if we got rid of the notion of the index altogether and replaced it with a large pre-trained model that efficiently and effectively encodes all of the information contained in the corpus?”

Seekers of information pose questions, and the system answers conversationally, with the kind of natural language reply you’d expect from an expert, backed by authoritative citations.
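A rough sketch of what such a hybrid could look like is below. The `retrieve` and `generate` parameters are placeholders for a real retriever and a real large language model, and the prompt format and citation scheme are assumptions made for illustration, not the Google team's actual design.

```python
def answer_with_citations(query, retrieve, generate):
    """Sketch of a hybrid system: retrieve supporting passages, then ask a
    language model to compose a natural-language answer that cites them.
    `retrieve` and `generate` stand in for a real retriever and model."""
    passages = retrieve(query)  # e.g., top passages from a classic index

    # Label each passage so the model can refer back to it in its answer.
    context = "\n".join(f"[{i + 1}] {text}" for i, (source, text) in enumerate(passages))
    prompt = (
        "Answer the question using the sources below, citing them as [1], [2], ...\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )
    answer = generate(prompt)

    # Return the answer alongside the sources it may cite.
    citations = {f"[{i + 1}]": source for i, (source, _) in enumerate(passages)}
    return answer, citations
```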

The authors note that another benefit of large language models is their ability to learn many tasks with only a little tweaking (this is known as one-shot or few-shot learning).
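Few-shot learning of this kind is often done entirely through the prompt: a handful of worked examples steer the model toward a new task without updating any weights. A minimal sketch, with an invented sentiment task and `generate` again standing in for a call to a model like GPT-3:

```python
# A few-shot prompt: the in-line examples teach the task; no retraining happens.
few_shot_prompt = (
    "Classify the sentiment of each review as positive or negative.\n\n"
    "Review: The battery lasts all day.\nSentiment: positive\n\n"
    "Review: It broke after a week.\nSentiment: negative\n\n"
    "Review: Setup was quick and painless.\nSentiment:"
)
# completion = generate(few_shot_prompt)  # expected to continue with "positive"
```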
