Cosmas Zavazava, Director of the Telecommunication Development Bureau at the International Telecommunication Union (ITU) – one of the key agencies that drafted the statement, which includes guidelines and recommendations – catalogues a dizzying array of ways that children are targeted.
These range from grooming and deepfakes to the embedding of harmful features, cyberbullying and exposure to inappropriate content: “We saw that, during the COVID-19 pandemic, many children, particularly girls and young women, were abused online and, in many cases, that translated to physical harm,” he says.
Organisations that advocate for children report that predators can use AI to analyse a child’s online behaviour, emotional state, and interests to tailor their grooming strategy.
AI is also enabling offenders to generate explicit fake images of real children, driving a new form of sexual extortion.
The Childlight Global Child Safety Institute – an independent body established to gather the most reliable data available on child sexual exploitation and abuse – found in a 2025 report that technology-facilitated child abuse cases in the US rose from 4,700 in 2023 to more than 67,000 in 2024.
Young adults check social media in North Macedonia.
Australia leads the way
UN Member States are taking stronger measures, as they learn about the scale and severity of the problem.
At the end of 2025, Australia became the first nation in the world to ban social media accounts for children under 16, on the basis that the risks from the content they share far outweigh the potential benefits.
The Government there cited a report it had commissioned, which showed that almost two-thirds of children aged between 10 and 15 had viewed hateful, violent or distressing content and more than half had been cyberbullied. Most of this content was seen on social media platforms.
Several other countries, including Malaysia, the UK, France and Canada, look set to follow Australia’s lead, preparing regulations and laws for similar bans or restrictions.
AI illiteracy
And, at the beginning of 2026, a wide variety of UN bodies with a stake in child safety put their names to a Joint Statement on Artificial Intelligence and the Rights of the Child, published on 19 January, which pulls no punches in its description of the risks – and society’s collective inability to cope with them.
The statement identifies a lack of AI literacy among children, teachers, parents and caregivers, as well as a dearth of technical training for policymakers and governments on AI frameworks, data protection methods and child rights impact assessments.
Responsibility of the tech giants
Tech companies are also in the frame: the statement says that most of the AI-supported tools they make – along with their underlying models, techniques and systems – are currently not designed with children and their well-being in mind.
“We are really concerned and we would like the private sector to be involved, to engage, to be part of the story that we are writing together with the other UN agencies and other players who believe that technology can be an enabler, but it can also destroy,” says Mr. Zavazava.
The senior UN official is confident, however, that these businesses are committed to making their tools safer.
“Initially, we got the feeling that they were concerned about stifling innovation, but our message is very clear: with responsible deployment of AI, you can still make a profit, you can still do business, you can still get market share.
“The private sector is a partner, but we have to raise a red flag when we see something that is going to lead to unwanted outcomes. We have regular meetings where we talk about their responsibilities, and some of them already have statements on how they should protect populations and children. It is our duty together to be fighting the ills that come with the technology.”
A children’s rights issue
While the UN bodies named in the document (full list below) stress the need for these companies to make sure their products are designed to respect children’s rights, they are also calling on all parts of society to take responsibility for the way they are used.
This is far from the first time that concerns have been raised from a rights perspective: in 2021, new language was attached to the Convention on the Rights of the Child – a cornerstone of international child rights law and the most ratified human rights treaty in history – to reflect the dangers of the digital age.
However, the UN bodies feel more guidance is needed to help countries regulate more effectively and have produced a comprehensive list of recommendations.
“Children are getting online at a younger age, and they should be protected,” says Mr. Zavazava. “That’s why we set out these child online protection guidelines. The first part of the guidelines addresses parents, the second is for teachers, the third is for regulators, and the fourth is relevant to industry and the private sector.”
Source of original article: United Nations (news.un.org). Photo credit: UN. The content of this article does not necessarily reflect the views or opinion of Global Diaspora News (www.globaldiasporanews.com).