“We have a three-pronged approach that broadly looks at the origins of a given piece of content, the content itself, and also the associated metadata that provides us additional information and clues about that content.”
From a technical standpoint, the company makes extensive use of machine learning, NLP, network theory, knowledge graphs, and traditional rules-based decisioning to automatically identify and classify large amounts of suspicious content.
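To make the three-pronged idea concrete, here is a minimal sketch of how scores from the source, content, and metadata prongs might be blended with simple rules-based decisioning. This is not Logically's actual code; the thresholds, weights, and field names are all illustrative assumptions.

```python
# Hypothetical sketch: combining three signal prongs with rules + a weighted blend.
from dataclasses import dataclass

@dataclass
class ContentSignals:
    source_score: float    # 0-1, credibility of the publishing source
    content_score: float   # 0-1, ML model's "likely misinformation" score
    metadata_score: float  # 0-1, suspicion derived from metadata (e.g., spread pattern)

def classify(signals: ContentSignals) -> str:
    # Hard rule: a known disinformation source overrides the blended score.
    if signals.source_score < 0.1:
        return "flag: untrusted source"
    # Otherwise blend the prongs; these weights are invented for illustration.
    combined = (0.3 * (1 - signals.source_score)
                + 0.5 * signals.content_score
                + 0.2 * signals.metadata_score)
    if combined > 0.7:
        return "flag: likely misinformation"
    if combined > 0.4:
        return "route to human fact-checker"
    return "pass"

print(classify(ContentSignals(source_score=0.9, content_score=0.85, metadata_score=0.6)))
# -> route to human fact-checker
```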
Natural language generation is also used to explain why the system determined a given piece of content is fake or harmful.
Specifically, it uses a custom-trained, multi-billion-parameter language model based on BERT, a transformer-based NLP model.
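Logically's model is proprietary, but the general shape of BERT-style text classification can be sketched with the open-source Hugging Face transformers library. The checkpoint name below is a placeholder; a production system would load a model fine-tuned on labeled misinformation data.

```python
# Minimal sketch of transformer-based text classification (not Logically's model).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bert-base-uncased",  # placeholder; a real system would use a fine-tuned checkpoint
)

result = classifier("Scientists confirm the moon landing was staged in a studio.")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': 0.73}]; labels depend on fine-tuning
```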
“We built technology to understand the context in which misinformation is embedded, and how it spreads across a network, the Internet, or social media,” Bandhakavi says.
“[We also built technology] to understand the interaction patterns of communities and users associated with fake news and misinformation.”
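The network-theory prong can be illustrated with a short sketch using the open-source networkx library: build a share graph, detect communities, and measure how concentrated flagged accounts are within each. The edges and flags below are invented for illustration.

```python
# Hypothetical sketch of the network prong: community detection over a share graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Edges: (user_who_shared, user_shared_from) -- invented sample data
shares = [("a", "b"), ("c", "b"), ("d", "b"), ("c", "a"),
          ("e", "f"), ("f", "g"), ("e", "g")]
flagged_users = {"a", "b", "c"}  # users who posted content flagged as misinformation

G = nx.Graph(shares)  # an undirected view suffices for community detection
for community in greedy_modularity_communities(G):
    overlap = len(community & flagged_users) / len(community)
    print(sorted(community), f"flagged share: {overlap:.0%}")
# A community where most members are flagged suggests a coordinated amplification cluster.
```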
Having a multi-pronged approach to detecting fake news and misinformation is critical to success.
The in-house experts are a critical element of Logically’s approach, as they provide a check against the AI models for late-breaking news and fast-changing information types.
“AI alone can get very stale when it comes to understanding the evolution of content, so it’s always important and useful to have expert intelligence input,” Bandhakavi says.