Majority of Irish People Want Stricter Social Media Controls
Research commissioned by the Irish Council for Civil Liberties (ICCL) and Uplift, a campaigning community, has found that almost three-quarters (74%) of the Irish population believes that social media algorithms, which select content and insert it into users’ online feeds, should be regulated more strictly.
The poll also shows that more than four-fifths (82%) of people across Ireland are in favour of social media companies being forced to stop building up specific data about users’ sexual desires, political and religious views, health conditions, and ethnicity, and using that data to pick which videos are shown to people.
The research was conducted by Ireland Thinks using a representative sample of 1,270 people across Ireland, balanced by age, income, education, and region.
The findings come in the wake of a major step taken by Coimisiún na Meán, Ireland’s new online regulator. Its draft rules would require that recommender systems based on intimately profiling people be turned off by default on social media video platforms such as YouTube, Facebook, and TikTok.
In a statement, the ICCL says that these “recommender system” algorithms promote suicide and self-loathing among teens, drive children into online addictions, and feed users personalised diets of hate and disinformation for profit.
It says they put children at risk, citing research from Amnesty International: just one hour after Amnesty’s researchers started a TikTok account posing as a 13-year-old child who views mental health content, TikTok’s algorithm began showing the account videos glamourising suicide.
The statement also cites a European Commission report under the Digital Services Act, “Application of the Risk Management Framework to Russian disinformation campaigns”, which found that Big Tech’s recommender systems aided Russia’s disinformation campaign about its invasion of Ukraine.
Siobhan O’Donoghue of Uplift said, ‘Big Tech’s toxic recommender systems and algorithms are amplifying hate speech, weaponising every fault line within our communities – driven by relentless surveillance to maximise “engagement” and ultimately profits. It is time social media corporations be made to give users real control over what they see, and be held to account for failing to do so.’
Big Tech’s algorithmic “recommender systems” select emotive and extreme content and show it to the people the system estimates are most likely to be outraged by it. Those outraged users then spend longer on the platform, which allows the company to make more money showing them ads.