Instagram's algorithms actively promote communities of pedophiles – WSJ

Instagram, the social media platform owned by Meta, uses recommendation algorithms that promote networks of pedophiles.
That is the conclusion of an investigation by The Wall Street Journal, conducted together with researchers from Stanford University and the University of Massachusetts Amherst.
The investigation revealed that Instagram's recommendation systems connect pedophiles with one another and steer them toward sellers of illegal content. The researchers discovered Instagram accounts, promoted through explicit hashtags, whose owners offered child pornography for sale. As part of the investigation, the researchers created a test account and viewed the material these accounts offered; Instagram's algorithms then began recommending other accounts containing depictions of sexual violence against children.
The investigation also found that the platform's moderators often ignored user complaints about illegal material. The article quotes Alex Stamos, Meta's former Chief Security Officer, who expressed alarm that a small team of outside researchers was able to uncover such a vast network. Stamos stressed that Meta has far more effective tools for mapping these pedophile networks than outsiders do, and called on the company to reinvest in investigative personnel.
Meta acknowledged the problems after the scandal over advertisements of child sexual content on Instagram came to light. The company said it is actively fighting pedophilia, having blocked 27 pedophile networks and thousands of hashtags associated with sexualized child content over the past two years. Meta has also adjusted its algorithms so that the system no longer recommends search terms associated with sexual abuse.
Following the publication of the investigation's findings, Meta's shares fell 2.77%.
It is worth noting that in 2022, the U.S. presidential administration identified the lack of accountability of technology platforms, in particular "algorithmic discrimination" (the opacity of recommendation algorithms), as a problem. Under current U.S. law, however, platforms are not legally liable for user-generated content.
Since the beginning of Russia's full-scale invasion, Ukrainian users have repeatedly run into content moderation on social media platforms: pages and posts about the war and its consequences are blocked and removed on Facebook, Instagram, Twitter, TikTok, and LinkedIn. The companies themselves, however, do not disclose exactly how their moderation algorithms work.
At the start of the Russian invasion of Ukraine, Meta (formerly Facebook) set up a Special Operations Center. The company describes it as an additional line of defense against misinformation that allows it to remove content violating its community standards and rules more quickly.
Meta says it has increased funding for its Ukrainian fact-checking partners, though it does not specify the number of partners or the amounts involved.