Digital Child’s Play: Protecting Children from the Impacts of AI


Artificial intelligence has been used in products targeting children for several years, but legislation protecting them from the potential impacts of technology is still in its infancy. Ahead of a global forum on AI for children, UN News spoke to two experts from the United Nations Children’s Fund (UNICEF) about the need to improve policy protection.

Children already interact with AI technologies in many ways: AI is embedded in toys, virtual assistants, video games and adaptive learning software. Its impact on children’s lives runs deep, but UNICEF has found that when it comes to AI policies and practices, children’s rights are at best an afterthought.

In response, the United Nations children’s agency developed draft AI guidelines for children, to promote children’s rights and raise awareness of how AI systems can uphold or undermine these rights.

Conor Lennon of UN News interviewed Jasmina Byrne, policy manager for the UNICEF Global Insights team, and Steven Vosloo, data, research and policy specialist at UNICEF, about the importance of putting children at the center of AI-related policies.
AI technology will fundamentally change society.

Steven Vosloo, UNICEF Data, Research and Policy Specialist (photo: UNICEF)

Steven Vosloo: At UNICEF, we saw that AI was a very hot topic and something that was going to fundamentally change society and the economy, especially for generations to come. But when we looked at national AI strategies and company policies and guidelines, we found that not enough attention was being paid to children and the impact of AI on them.

We therefore embarked on a broad consultation process, reaching out to experts from around the world and nearly 250 children in five countries. This process led to our draft guidance document and, after its publication, we invited governments, organizations and businesses to pilot it. We are developing case studies around the guidance so that we can share the lessons learned.

Jasmina Byrne: AI has been in development for many decades. It is neither harmful nor beneficial in itself. It is the application of these technologies that makes them beneficial or harmful.

There are many positive applications of AI: it can be used in education for personalized learning, in healthcare, in simulation and language processing, and to support children with disabilities.

And we use it at UNICEF. For example, it helps us predict the spread of disease and improve poverty estimates. But there are also many risks associated with the use of AI technologies.

Children interact with digital technologies all the time, but they are unaware, as are many adults, that many of the toys or platforms they use are powered by artificial intelligence. That is why we thought that, because of their particular vulnerabilities, special attention should be paid to children.

Children using computers (photo: UNICEF/Diefaga)

Privacy and the profit motive

Steven Vosloo: An AI-powered toy can use natural language processing to understand words and instructions, so it collects a lot of data from the child, including intimate conversations, and that data is stored in the cloud, often on commercial servers. So there are privacy issues.

We also know of cases where these types of toys were hacked, and they were banned in Germany because they were not considered secure enough.

About a third of all online users are children. We often find that young children use social media platforms or video sharing platforms that were not designed for them.

They are often designed for maximum engagement and rely on some level of profiling based on datasets that may not represent children.

Jasmina Byrne, Policy Officer, UNICEF Global Insights Team (photo: UNICEF)

Predictive analytics and profiling are especially relevant when it comes to kids: AI can profile kids in a way that puts them in a certain bucket, which can determine the kind of educational opportunities they will have in the future, or the benefits that parents can access for their children. So AI is not only affecting them today, but it could point their entire life journey in a different direction.

Jasmina Byrne: There was big news in the UK last year. The government used an algorithm to predict the final marks of high school students. And because the data fed into the algorithm was biased in favor of children in private schools, the results were truly appalling and discriminated against a lot of children from minority communities. So the system had to be abandoned.

This is just one example of how, if algorithms are based on biased data, they can have very negative consequences for children.

“It’s a digital life now”

Steven Vosloo: We really hope that our recommendations will filter down to the people who actually write the code. The policy guidance is aimed at a broad audience: governments and policymakers, who are increasingly developing strategies and starting to think about regulating AI, and the private sector, which often develops these AI systems.

We see competing interests: decisions about AI systems often have to balance the profit incentive against ethics. What we are advocating is a commitment to responsible AI that comes from the top: not just at the data scientist or software developer level, but from senior management and senior government ministers.

Jasmina Byrne: The data footprint that children leave when using digital technology is commercialized and used by others for their own gain. Children are often targeted by advertisements that are not really appropriate for them. It is something that we are following and monitoring very closely.

However, I would say that there is now more political appetite to address these issues, and we are working to put them on the agenda of policy makers.

Governments must think about children and put them at the center of all their policies regarding advanced digital technologies. If we don’t think about them and their needs, then we really miss out on some great opportunities.

Steven Vosloo: The Scottish government released its AI strategy in March and formally adopted UNICEF’s policy guidance on AI for children. And this is partly because the government as a whole has adopted the Convention on the Rights of the Child. Children’s lives are no longer really “online” or “offline”; it’s a digital life now.

This conversation has been edited for length and clarity. You can listen to the interview here.

UNICEF has developed policy guidance to protect children from the potential impacts of AI (photo: UNICEF/Schverdfinger)

The Global Forum on AI for Children

  • From November 30 to December 1, UNICEF and the Finnish government are hosting the Global Forum on AI for Children.
  • This event brings together the world’s leading experts in children’s rights and technology, policy makers, practitioners and researchers, as well as children active in the AI space, to connect and share knowledge about pressing issues at the intersection of children’s rights, digital technology policies and AI systems.
  • The forum aims to recap the achievements and impacts of the project, share knowledge on what worked and what did not work for more child-centered AI, enable networking on how this work can continue, and inspire participants to take action.

Visit UN News to learn more.
