Taking advantage of NLP’s ability to process data at scale, Stanford University researchers examined recordings of conversations between police officers and people stopped for traffic violations.
Using computational linguistics, the researchers were able to demonstrate that officers spoke less respectfully to Black drivers than to White drivers during traffic stops.
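To give a flavor of how respect in transcripts can be quantified at scale, here is a minimal sketch. This is a hypothetical illustration only, not the Stanford researchers' actual method, which relied on trained statistical models over many linguistic features; the marker list and scoring function below are invented for demonstration.

```python
# Hypothetical sketch: score each utterance by the fraction of tokens
# that are simple politeness markers, then compare averages across groups.
# This is NOT the Stanford model; it only illustrates the general idea
# of turning transcript text into a comparable numeric measure.

POLITENESS_MARKERS = {"please", "thank", "thanks", "sir", "ma'am", "sorry"}

def respect_score(utterance: str) -> float:
    """Fraction of tokens that are simple politeness markers."""
    tokens = utterance.lower().replace(",", " ").replace(".", " ").split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in POLITENESS_MARKERS)
    return hits / len(tokens)

# Averaging scores over all of an officer's utterances yields a crude
# per-conversation respect measure that can be compared across groups.
transcripts = [
    "Sorry to stop you, sir. License and registration, please.",
    "Hands on the wheel. License. Now.",
]
scores = [respect_score(t) for t in transcripts]
```

In practice, a real analysis would use far richer features (apologies, formal titles, hedges, first names) and a model validated against human judgments, but the pipeline shape — transcribe, featurize, aggregate, compare — is the same.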
In Los Angeles County, White individuals experiencing homelessness exit homelessness at 1.4 times the rate of people of color, a disparity that could be related to housing policy or discrimination.
To redress this injustice, the University of Southern California Center for AI in Society will explore ways artificial intelligence can help ensure housing is fairly distributed.
As a study from the UC Berkeley Center for Long-Term Cybersecurity found earlier this year, it’s also essential that governments establish ethical guidelines for their own use of the technology.
The researchers urge the federal government to hire more in-house AI talent to vet AI systems. They also warn that algorithmic governance could widen the public-private technology gap and, if poorly implemented, erode public trust or give major corporations an unfair advantage over small businesses.