Britain unveils new rules to keep children safe on the Internet
25 April 19:39
The British regulator Ofcom has presented a set of rules for social networks, search engines and gaming apps aimed at protecting children from harmful content, "Komersant Ukrainian" reports, citing Ofcom.
More than 40 “practical measures” were presented as part of the UK’s Online Safety Act. In preparing them, Ofcom consulted more than 27,000 children and 13,000 parents, as well as representatives of social networks and child safety experts.
One of the first measures concerns personalized recommendations for children: every developer whose product carries a risk of harmful content must adapt its algorithms to filter such content out of children’s feeds.
Products with the highest risk of harmful content must also introduce age verification systems for users and restrict access to parts of the application or website, or to the service as a whole.
At the same time, developers may choose not to implement age verification, but they are still obliged to ensure that their content is not harmful to children.
All developers are also required to implement a system for detecting, assessing and blocking harmful content. A designated employee will be responsible for content safety, and content moderation will be reviewed annually.
Children should also be given more control over their use of social media and other websites, including the ability to flag content they do not like, manage group chat invitations and friend requests, block accounts, and disable comments on their own posts.
Developers that fail to comply with the rules will face fines, and for more serious violations Ofcom may take them to court and seek to have their product blocked in the UK.
Many countries around the world are strengthening measures to protect children on the Internet. The European Union, through its Digital Services Act, requires platforms like TikTok to minimize risks to minors.
France has banned the creation of accounts for children under 15 without parental consent, while the United States and Australia are tightening control over data collection and requiring the removal of harmful content. Such initiatives are a response to the growing threat of online violence and cyberbullying.