This is Part 1 of a four-part series that reviews the prevalence and patterns of online child sexual exploitation (OCSE) in Asia Pacific and discusses the role of AI implementation in the region and its potential unintended consequences.
Introduction
In the last few years, technologies broadly labelled as artificial intelligence (AI) have sparked numerous debates and, according to some, created a new ‘economic bubble’. It’s no surprise that AI has quickly entered the realm of content moderation, presenting both a potential solution and a new challenge to overcome.
As global tech companies reduced their workforces throughout 2023 and into 2024 in multiple rounds of layoffs, policy and content moderation teams were among those affected. At the same time, social media companies, and indeed any company that hosts consumer-generated content, have come under increasing pressure from regulators and scrutiny from mass media for an apparent lack of investment in combating OCSE (e.g. X and Google in Australia, Meta in the US). The introduction of the Online Safety Act in the UK and the recent U.S. Senate Judiciary Committee hearing are just some of the most recent and prominent examples of this pressure. It coincides with an apparent race among companies to build and deploy AI-driven solutions for content moderation, including Meta’s Few-Shot Learner, Microsoft’s Azure-based moderation tools, and a number of solutions offered by private entities like ActiveFence or WebPurify.
Therefore, it can only be assumed that algorithms are heavily relied upon to keep violations of Terms of Service and Community Guidelines in check, lest they bring legislative repercussions or business decline due to consumer and advertiser attrition.
While such algorithms, including those driven by AI, can undoubtedly make certain aspects of detection and enforcement work more efficient and less gruelling, and potentially put us on a more equal footing with bad actors, they do not provide a universal or systemic solution. If implemented with an appropriate level of oversight from subject-matter experts, algorithms can help to reduce the availability and reach of harmful content online and minimise the re-victimisation of OCSE victims. However, they address only a sliver of a wider issue. The apparent reluctance of platforms to invest time and effort into tackling it at a more systemic level is a disheartening trend which greatly contributes to the scope of the problem.
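To make the detection mechanism concrete, the sketch below shows its simplest form: matching an uploaded file against a list of hashes of already-identified material. Everything here is a toy illustration, with a made-up hash value standing in for the shared industry hash databases; production systems rely on perceptual hashing (such as Microsoft’s PhotoDNA), which tolerates resizing and re-encoding, rather than exact cryptographic matches.

```python
import hashlib

# Placeholder hash set standing in for an industry database of known
# material (e.g. shared hash lists coordinated by NCMEC). The value
# below is made up for illustration.
KNOWN_CONTENT_HASHES = {
    "9e107d9d372bb6826bd81d3542a419d6e6f0a5c83d2d6cdd978f9f1c2a0fd4b1",
}

def matches_known_content(file_bytes: bytes) -> bool:
    """Return True if an uploaded file exactly matches a known-content hash.

    A cryptographic hash only catches byte-identical copies; real systems
    use perceptual hashes that survive resizing and re-encoding.
    """
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_CONTENT_HASHES
```

The key limitation is visible even in this sketch: hash matching can only catch material that has already been identified and catalogued, which is precisely why such tools address only a sliver of the wider issue.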
What’s more, even if detection and removal mechanisms are prioritised over everything else, in regions like South and Southeast Asia they are likely to yield mixed results, produce unintended consequences and further overshadow the need for more systemic and costly measures. In the haste to deploy one-size-fits-all solutions, the unique aspects and abuse patterns that exist in the region are easy to overlook.
Definitions and focus
Online child sexual exploitation
For the forthcoming discussion, the definition of OCSE provided by the Luxembourg Guidelines is particularly relevant, as it highlights not only the close connection between online and offline abuse but also the fact that OCSE can take multiple forms, including those where the illegal content itself isn’t present online.
“The reference to “online child sexual exploitation” includes all acts of a sexually exploitative nature carried out against a child that have, at some stage, a connection to the online environment. It includes any use of ICT that results in sexual exploitation or causes a child to be sexually exploited or that results in or causes images or other material documenting such sexual exploitation to be produced, bought, sold, possessed, distributed, or transmitted.” (p.27)
This definition also highlights that OCSE is not a purely media-based crime. In fact, in cases that involve grooming, extortion or coerced self-produced material, which, according to some estimates, accounts for up to 78% of detected images, numerous interactions may take place before any illicit media is produced or shared.
Artificial intelligence, machine learning & large language models
To establish common ground, the terms related to various types of technology will be used in this series as follows:
“A large language model (LLM) is a type of artificial intelligence (AI) program that can recognize and generate text, among other tasks. LLMs are trained on huge sets of data — hence the name "large." LLMs are built on machine learning: specifically, a type of neural network called a transformer model.” (Cloudflare)
With AI being the broadest of these terms, it will be used throughout the series to refer to software that can mimic aspects of human intelligence and cognitive functions such as problem-solving and learning. This comes with the understanding that AI-labelled tools vary in their capacity and underlying technology, and include image recognition and classification, natural language processing and predictive algorithms.
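For illustration, the sketch below shows a minimal supervised text classifier of the kind that often sits behind ‘AI-driven’ moderation labels, assuming scikit-learn is available. The training phrases and labels are placeholders invented for this example; real moderation models are trained on large, curated datasets and typically route flagged content to human reviewers rather than acting autonomously.

```python
# A minimal sketch of a supervised text classifier; the two training
# examples are placeholders, not real moderation data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "hi, see you at practice tomorrow",   # benign placeholder
    "placeholder policy-violating text",  # violating placeholder
]
labels = [0, 1]  # 0 = allow, 1 = flag for human review

# TF-IDF converts text into numeric features; logistic regression then
# learns a decision boundary over those features.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["a new message to score"]))  # e.g. [0]
```

Even this toy example hints at a limitation that recurs throughout the series: the model only recognises patterns present in its training data, so regional languages and abuse patterns absent from that data are easy to miss.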
Asia Pacific
When talking about Asia Pacific, a large and diverse region that encompasses multiple languages, nationalities, and cultures, the following countries are particularly important to include in the discussion: the Philippines, Thailand, Viet Nam and Indonesia in Southeast Asia, and Bangladesh, India and Pakistan in South Asia.
The child population, meaning individuals under the age of 18, of these seven countries accounts for 30% of the world’s total child population. Children’s share of each country’s population ranges from 19% in Thailand to 43% in Pakistan, with the rest of the countries falling somewhere in between.
While there are no precise estimates of how many of these children have an online presence, the International Telecommunication Union (ITU) suggests that anywhere from 4.5% of children below 14 years old in Pakistan to 99% of young people between 15 and 24 years old in Thailand use the internet. Data from UNICEF paints a similar picture, showing that the share of schoolchildren with an internet connection at home ranges from 9% in India and Pakistan to 71% in Thailand.
Scale of the issue
The apparent prevalence of OCSE in these countries has also grown in the past five years. For instance, they consistently rank among the top 15 geographic locations from which content reported to the National Center for Missing & Exploited Children (NCMEC) originates, amounting to over 15 million reports, or over 50% of all reports, in 2022. NCMEC also notes that the proportion of reports coming from the region has increased significantly since 2008.
Some available country-specific data also points to an increase in the number of detected cases of child sexual exploitation (CSE) overall, and of OCSE more specifically. According to the International Justice Mission, almost half a million Filipino children were trafficked to produce new CSE material in 2022. In India, the total number of recorded cases under the Protection of Children from Sexual Offences Act increased by 62% between 2017 and 2020.
The Royal Thai Police reported 46 cases of child sexual abuse and 84 cases of possession of child pornography in 2022. Since these numbers only include cases that resulted in an arrest, they may appear low. However, it’s important to note that they represent increases of 600% and 860% respectively since 2016. Further, Safe Child Thailand estimates that of the 800,000 sex workers in Thailand, up to a third may be children.
Finding robust data on the prevalence of child sexual exploitation, especially online, is challenging. Estimates range from dozens to hundreds of thousands of cases per year, and the numbers are persistently affected by the use of anonymisers and proxies, each country’s legislation, societal awareness of the issue, the willingness of victims to report crimes and the willingness of law enforcement agencies to accept such reports. Nevertheless, it is clear that the issue exists, that it is significant, and that it is on the rise.
Estimating the scale of, and finding a solution to, the problem of child sexual exploitation is a major challenge wrapped in layers of cultural nuance. Both human resources and technological advances are therefore required to tackle it. Yet there has been disproportionate investment in the latter, which puts us at a disadvantage, exacerbating the problem and making it harder to solve in the future.
What to expect from the series?
The upcoming chapters in the series will discuss the nuances of OCSE perpetration in Southeast and South Asia, focusing specifically on examples from the Philippines, Thailand, Viet Nam, Indonesia, Bangladesh, India and Pakistan.
The second chapter will discuss the social context of OCSE in the region, focusing on the broader prevalence of gender-based violence (GBV) in real-life and digital domains. It will also cover some of the known regional abuse patterns and will argue that the complex nature of the crime, both in terms of its origins and manifestations, makes it hard to detect consistently without significant oversight from subject-matter specialists.
The third chapter will draw a connection between the global rate of AI adoption, climate change and the prevalence of natural disasters in South and Southeast Asia. It will argue that the indiscriminate implementation of AI, which requires a major investment of energy resources, may put further pressure on the environment and trigger adverse climate events. This would in turn put vulnerable populations at risk and thereby create or exacerbate conditions associated with an increased likelihood of sexual violence against women and children.
The concluding chapter will examine the apparent state of investment in OCSE prevention efforts by tech companies. It will also call for the resources required to produce and localise materials and meaningful product interventions intended to boost digital skills, raise awareness around OCSE and shift the focus from crime detection to crime prevention.