Algorithms are increasingly shaping children’s lives, but new guardrails could keep kids from getting hurt.
A policy specialist from UNICEF led the drafting of a new set of guidelines designed to help governments and companies develop AI policies that consider children’s needs.
Released on September 16, the nine new guidelines are the culmination of several consultations held with policymakers, child development researchers, AI practitioners, and kids around the world.
Well-designed AI-based learning tools, for example, have been shown to improve children’s critical-thinking and problem-solving skills, and they can be useful for kids with learning disabilities.
The day before those draft guidelines came out, the Beijing Academy of Artificial Intelligence (BAAI) released a set of AI principles for children too.
The new principles outlined specifically for children are meant to be “a concrete implementation” of the more general ones, says Yi Zeng, who directs AI ethics research at BAAI and led their drafting.
A guideline to improve children’s physical health, for example, includes using AI to help tackle environmental pollution.
“So if over time we see more examples of children being included in the AI or policy development cycle, more care around how their data is collected and analyzed—if we see AI made more explainable to children or to their caregivers—that would be a win for us,” says the UNICEF policy specialist.