Artificial intelligence can have long-term effects on the future prospects and living conditions of children and young people. To mark World Children’s Day on November 20, AlgorithmWatch CH, humanrights.ch, Pro Juventute and UNICEF Switzerland and Liechtenstein highlight how algorithms and AI can deepen inequality and affect children.
Digitalization can be an important factor in increasing equity in education. At the same time, children are encountering algorithmic and AI systems ever more often – in AI-assisted toys, personalized learning programs and social media feeds. These systems can also cause harm by reinforcing inequality and reproducing biases, with long-term effects on children’s future opportunities and living conditions.
Example 1: Learning with AI
AI and algorithmic systems are being used in many classrooms, including in Switzerland, to help with performance assessment, personalized learning, proctoring or classroom allocation. For example, many schools use adaptive learning systems – computer programs designed to automatically adapt exercises to a child’s performance. These programs can offer targeted support and help increase equity, but they can also reinforce existing prejudices. A study by the Netherlands Institute for Human Rights confirmed this: although AI learning systems are supposed to offer students personalized learning content, they can put certain children at a disadvantage. Students from low-income families, for example, and those whose parents or grandparents immigrated to the Netherlands were given easier assignments regardless of their performance. The study also revealed that personalized learning systems do not always assess a child’s level correctly – for instance, when a child learns differently (because of ADHD, dyslexia or autism) or uses different words than the students on whose data the system was trained. If AI and algorithms are developed and tested without duly considering children’s differing needs, they systematize and spread existing disadvantages. This can lead to discrimination and unequal opportunities.
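To make the mechanism concrete, here is a deliberately simplified Python sketch – not the logic of the Dutch systems, and with all names, weights and thresholds invented for illustration – of how a socioeconomic proxy that leaked into training data can route a student to easier material despite identical performance:

```python
# Simplified illustration of how a demographic proxy can bias an
# adaptive learning system. All names and weights are invented; real
# systems are far more complex, but the failure mode is the same.

def estimated_ability(correct_rate: float, low_income: bool) -> float:
    """Estimate a student's ability from performance -- but also, wrongly,
    from a socioeconomic proxy that leaked into the training data."""
    ability = correct_rate          # legitimate signal (0..1)
    if low_income:
        ability -= 0.15             # learned bias, unrelated to performance
    return ability

def next_exercise_difficulty(ability: float) -> str:
    if ability < 0.5:
        return "easy"
    elif ability < 0.75:
        return "medium"
    return "hard"

# Two students with *identical* performance:
for low_income in (False, True):
    level = next_exercise_difficulty(estimated_ability(0.6, low_income))
    print(f"low_income={low_income}: assigned '{level}' exercises")
# -> the low-income student is routed to easier material regardless of results.
```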
Example 2: Evaluated and monitored by algorithms
Access to formal higher education often depends on assessments of students’ performance, and AI systems are increasingly being used to assist in – or even conduct – such assessments. But the use of AI can perpetuate socioeconomic inequalities. In the United Kingdom, for example, the school-leaving examinations that determine who may enroll in universities were canceled during the COVID-19 pandemic. Instead, the British government had the results calculated by an algorithm that drew on schools’ historical grade distributions. As a result, many students with good grades from disadvantaged schools were given lower scores, while students with average grades from privileged schools (especially private schools) tended to receive better ones. The algorithm thus put students from low-income households at a disadvantage.
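A rough sketch of that mechanism, assuming a much-simplified version of the approach (the real 2020 UK model was considerably more elaborate): each student’s rank within their cohort is mapped onto the grades their school achieved in previous years, so no one can score above the school’s past ceiling.

```python
# Much-simplified sketch of grading by a school's historical results.
# The core mechanism: a student's grade is bounded by what their
# school achieved in previous years.

def grades_from_history(students, historical_grades):
    """Map each student's rank in their cohort onto the grades the
    school achieved in the past (best student gets best past grade)."""
    ranked = sorted(students, key=lambda s: s["teacher_assessment"],
                    reverse=True)        # best student first
    history = sorted(historical_grades)  # 'A' < 'B' < ... so best grade first
    return {s["name"]: grade for s, grade in zip(ranked, history)}

cohort = [
    {"name": "Amira", "teacher_assessment": 95},  # top of her class
    {"name": "Ben",   "teacher_assessment": 70},
    {"name": "Cara",  "teacher_assessment": 60},
]
school_history = ["C", "D", "E"]  # this school never produced an A

print(grades_from_history(cohort, school_history))
# -> {'Amira': 'C', 'Ben': 'D', 'Cara': 'E'}
# Amira's individual excellence is capped by her school's past results.
```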
More and more educational institutions are monitoring online examinations with AI-assisted proctoring software. Such tools can have negative impacts on students because of their skin color or a disability. The Netherlands Institute for Human Rights found that students at the Free University of Amsterdam were discriminated against by anti-cheating software based on facial recognition: the system flagged a disproportionately high number of non-white students as potential cheaters. For students with disabilities, the disability alone can be enough to make them suspects. Because a disability can influence how someone moves their body, their eyes or their cursor, as well as how they look, virtual exam-monitoring software risks categorizing students with disabilities as suspicious. Such errors can compound existing fears and traumas.
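How a seemingly neutral rule produces unequal outcomes can be shown in a few lines. The following is a hypothetical simulation, not the Amsterdam system’s code: the detector is assumed to be systematically less confident for one group of students – an accuracy gap that audits of commercial face-recognition systems have repeatedly documented – and a fixed confidence threshold then flags that group far more often.

```python
# Sketch of how a fixed face-detection threshold produces unequal flag
# rates. The confidence numbers are invented for illustration.

import random

FLAG_THRESHOLD = 0.8   # below this, the proctoring tool reports "no face"

def detection_confidence(group_mean: float) -> float:
    """Simulate the detector's confidence for one webcam frame."""
    return random.gauss(group_mean, 0.1)

def flag_rate(group_mean: float, frames: int = 10_000) -> float:
    flags = sum(detection_confidence(group_mean) < FLAG_THRESHOLD
                for _ in range(frames))
    return flags / frames

random.seed(0)
# If the model is simply *less confident* on one group of students...
print(f"group A (mean confidence 0.95): flagged {flag_rate(0.95):.1%}")
print(f"group B (mean confidence 0.85): flagged {flag_rate(0.85):.1%}")
# ...the same neutral-looking threshold flags group B far more often.
```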
The lack of transparency laws makes it difficult to find out where such systems are being used in Switzerland, why, and to what effect. But the risk that algorithmic and AI systems will negatively affect the lives of children and young people in Switzerland cannot be ruled out. AlgorithmWatch CH’s Atlas of Automation records examples of algorithmic and AI systems used in Switzerland, including in the country’s schools.
Algorithmic discrimination in Switzerland: Existing anti-discrimination law in Switzerland offers no effective protection against discrimination by algorithms and AI and must be strengthened. AlgorithmWatch CH and 45 other organizations – UNICEF Switzerland and Liechtenstein among the co-signatories – have therefore appealed to the Federal Council to strengthen protection against discrimination by algorithms and AI.
Example 3: AI for more equity?
Many countries – Switzerland included – use algorithmic systems with the intention of improving equity. Since 2023, the city of Uster has used an algorithm to optimize the social mix of school classes so that all students benefit. Such approaches can be promising, because algorithms are especially good at processing data and recognizing patterns – including discriminatory ones.
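The details of Uster’s algorithm are not reproduced here; the generic sketch below only illustrates what “optimizing the social mix” can mean: sort students by a (hypothetical) socioeconomic index and deal them out in snake order so every class ends up with a similar composition.

```python
# Generic balancing heuristic for mixing classes -- not Uster's actual
# algorithm. Snake-order assignment evens out the average index.

def snake_assign(students, n_classes):
    """students: list of (name, socioeconomic_index); returns n_classes lists."""
    classes = [[] for _ in range(n_classes)]
    order = sorted(students, key=lambda s: s[1])
    for i, student in enumerate(order):
        round_, pos = divmod(i, n_classes)
        # reverse direction every round so no class hoards high indices
        idx = pos if round_ % 2 == 0 else n_classes - 1 - pos
        classes[idx].append(student)
    return classes

students = [("S%d" % i, idx) for i, idx in
            enumerate([1, 2, 2, 3, 5, 5, 6, 7, 8, 9, 9, 10])]
for c in snake_assign(students, 3):
    print([name for name, _ in c],
          "mean index:", sum(x for _, x in c) / len(c))
# -> three classes with nearly identical average indices (5.5, 5.75, 5.5).
```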
For algorithmic systems to actually contribute to reducing inequalities, they must first be designed and used specifically for this purpose, and second, be subjected to rigorous testing. In the US state of Nevada, an AI-based system was used to predict which students would struggle to graduate. The authorities hoped the new system would be a step towards better support for disadvantaged children. Instead, the AI system calculated that the state had previously overestimated the number of children struggling in school: the algorithm reduced the number of affected students to less than half the previous figure and removed the at-risk designation from homeless students. Nevada had previously classified all students from low-income families as at risk of academic and social difficulties. The algorithm set the bar much higher.
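In schematic form, the shift looks like this – the students, probabilities and the 0.5 cutoff are invented for illustration, since Nevada’s model is not documented in detail here: a blanket rule (“low income means at risk”) is replaced by a model score with a stricter threshold, and the flagged group shrinks.

```python
# Toy illustration of how replacing a blanket rule with a model score
# and a stricter threshold shrinks the "at risk" group.

students = [
    # (name, low_income, homeless, predicted_graduation_probability)
    ("A", True,  False, 0.40),
    ("B", True,  False, 0.55),
    ("C", True,  True,  0.60),
    ("D", False, False, 0.45),
    ("E", True,  False, 0.80),
]

# Old rule: every student from a low-income family counts as at risk.
old_at_risk = {name for name, low_income, _, _ in students if low_income}

# New rule: only students whose predicted chance of graduating falls
# below the threshold count -- income and housing no longer matter.
THRESHOLD = 0.5
new_at_risk = {name for name, _, _, prob in students if prob < THRESHOLD}

print("old:", sorted(old_at_risk))  # -> ['A', 'B', 'C', 'E']
print("new:", sorted(new_at_risk))  # -> ['A', 'D']
# Homeless student C loses the at-risk designation; the group shrinks by half.
```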
Example 4: Learning through social media
According to the most recent HBSC national study on young people’s online activities, over 80 percent of 15-year-olds in Switzerland visit social media platforms every day. But social media can also expose them to harmful content. One study has shown that social media algorithms amplify extremist content, such as misogynistic posts, normalizing discriminatory ideologies for young people. Social media algorithms can also affect the mental and physical health of children and young people. Parents, schools and authorities in the USA have sued Instagram, Snapchat and other platforms over their role in the youth mental health crisis. SRF reported on Alexis Spence, a young student whose Instagram feed initially showed her content about fitness and models, but over time surfaced more and more posts about eating disorders. The posts made Alexis feel increasingly worthless and ugly; eventually, she was treated for anorexia and suicidal thoughts in a psychiatric clinic. In France, seven families recently filed a lawsuit against TikTok after two teenagers died by suicide. There is no conclusive scientific evidence on how and to what extent social media affects the mental health of children and young people. More research is needed, and safeguards – from awareness-raising and prevention work with children, young people and parents to platform regulation – must be put in place to prevent such outcomes.
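The amplification the studies describe follows from the core ranking mechanism, sketched below in deliberately reduced form. The topics and scores are invented; real feed rankers combine many signals, but a system optimized for engagement alone behaves the same way: whatever provokes reactions rises to the top.

```python
# Deliberately reduced sketch of engagement-based feed ranking.
# Topics and scores are invented for illustration.

posts = [
    {"topic": "homework tips",        "engagement_per_view": 0.02},
    {"topic": "extreme diet content", "engagement_per_view": 0.09},
    {"topic": "school sports",        "engagement_per_view": 0.03},
]

def rank_feed(posts):
    """Order the feed purely by predicted engagement."""
    return sorted(posts, key=lambda p: p["engagement_per_view"], reverse=True)

for post in rank_feed(posts):
    print(post["topic"])
# -> 'extreme diet content' is shown first; more views generate more
#    engagement, which pushes such posts even higher -- a feedback loop.
```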
Children and adolescents are especially susceptible to harmful content like deepfakes (photos and videos that look real but are AI-generated) and are less able to defend themselves against it. A case in a small Spanish town recently made headlines when several students used generative models to create “nude” images of their classmates. The negative impacts of deepfakes grow considerably when social media algorithms amplify the spread of such content.
These examples illustrate that the use of algorithms and AI does not always increase equity and can also harm young people and their rights. Such systems can lead to discrimination against children, young people and many other segments of the population. In the series “Diskriminierung 2.0: Wie Algorithmen Menschen diskriminieren” (in English, “Discrimination 2.0: How algorithms discriminate against people”), AlgorithmWatch CH and humanrights.ch, together with other organizations, document various cases of discrimination by algorithms. We need your support to raise awareness of how algorithms and AI can systematically harm children and young people!