According to a 2020 report by the National Center for Missing and Exploited Children (NCMEC), over 21.7 million reports of child sexual abuse material (CSAM) were submitted to its CyberTipline in that year alone.
A joint study by INTERPOL and ECPAT International in 2022 estimated that at any given time, 750,000 predators are actively seeking to exploit children online.
Having grown up in the digital age, young people often understand the online world better than anyone else. However, despite that familiarity, they still need stronger protections because, as it stands, the internet remains an unsafe space.
For real change to happen, we must recognize a critical issue: discussions about online safety for young people often occur without their direct input.
We must press for more education, stricter regulations, and the inclusion of young voices in online safety discussions.
Young people have grown up surrounded by digital technology, and their understanding of the online world often surpasses that of older generations.
However, with this familiarity come significant risks, especially regarding online safety. It is essential that young people are included in conversations about internet safety and that their voices are heard. More importantly, there is a growing need to implement better protections, because the internet remains far from a safe space for them.
Harmful content is prevalent, and algorithms play a significant role in exposing young users to it. Content recommendation systems on social media platforms and other online spaces are designed to keep users engaged, but in doing so they often lead young people down a rabbit hole of endless videos, posts, and articles.
Unfortunately, many of these algorithms fail to filter harmful or inappropriate material effectively. As a result, young people are bombarded with content that can damage their mental health, sense of self-worth, and overall well-being, and that can ultimately leave them vulnerable to predators.
The internet holds immense potential as a positive, even sacred, space for young people, and education plays a vital role in realizing that potential for young audiences and everyone else online. Yet there have been ample discussions about what needs to be done to keep young people safe online without including any of them. If we’re serious about improving the online experience for young people, they need to be involved in these conversations, and more young voices need to be present where decisions about their future are made.
Technology plays a significant role in both the spread and detection of child sexual abuse material. Unfortunately, many platforms lack the necessary safeguards to protect children from online predators.
In response to increasing incidents, some companies have started implementing AI and machine learning to detect and block abusive content. For instance, Facebook’s child safety detection systems flagged 20.3 million pieces of content related to child sexual abuse in 2020 alone.
However, much more can be done, especially around age verification: current systems are far too easy to bypass. A 2021 Common Sense Media survey found that 30% of children aged 8-12 use social media despite the minimum age requirement of 13 on most platforms, which highlights the need for stricter enforcement of age-related policies. Until more robust regulations are implemented, young people will continue to be exposed to risks that could be mitigated.
Some might argue that young people shouldn’t be on these platforms if they’re at risk, but they’re already there, and removing them from these spaces is impossible. So, instead of hoping young people will stay off specific platforms, we should focus on making the online environment safer.
The internet is an incredible resource for learning, socializing, and entertainment, but it must be made safer for the younger generation. Stronger regulations, improved age verification, and the inclusion of young voices in discussions about online safety are all essential steps toward protecting young people in the digital world.
It is unrealistic to expect to restrict young people’s access to these algorithms altogether. These digital tools are deeply embedded in young people’s daily lives, on the same platforms that adults use. Simply keeping young users off these apps or telling them not to engage with certain types of content isn’t practical or effective.
More robust measures are needed, particularly when it comes to age verification. A startling one-third of young people admit to lying about their age online, often claiming to be over 18. This allows them to access material intended for adults, leaving them vulnerable to inappropriate content that they are not emotionally or mentally equipped to handle.
Ultimately, protecting young people online requires collective action. It means building better systems to verify age, designing safer algorithms, and ensuring that young voices are part of the solution. The internet has the potential to be a powerful and positive tool for learning and connection, but it must become a safer space for everyone—especially the younger generation.
The internet can be a valuable resource for children, but it also presents serious risks when it comes to online sexual abuse. By increasing awareness, implementing better safety measures, and encouraging cooperation between tech companies, law enforcement, and parents, we can help make the online world safer for children.
The recently concluded Global Ministerial Conference on Ending Violence Against Children presented a pivotal opportunity to confront this challenge directly. Governments must now follow through on their pledges and ensure that the right people are at the table, ready to take bold action to keep children safe online.
Sign up for the Brave Movement monthly newsletter to stay up to date on our efforts and learn more about how YOU can take action.
Thank you for being brave!