Parts of the web can be dangerous: health misinformation, child sexual abuse and exploitation, and violent extremist and terrorist content all proliferate online.
Delivering safe and secure online experiences is essential for global businesses, civil society groups and individuals alike. Stakeholders from multiple industries and geographies agree there is an urgent need for deliberate global coordination to improve digital safety.
The Forum’s Global Coalition for Digital Safety is accelerating public-private cooperation to tackle harmful content online by bringing together a diverse group of leaders who are well placed to exchange best practices for new online safety regulation and help millions of connected citizens improve digital media literacy.
Engaged partners, including Google, Microsoft, AWS, Meta, the Oxford Internet Institute, UNICEF, INTERPOL and several government ministers, are developing principles and practices that address all aspects of digital safety in a proportionate manner.
Since the coalition launched in June 2021, companies and public institutions have collaborated on clarifying the actions businesses need to take to improve the safety of their platforms, while regulators across jurisdictions have shared key learnings from forthcoming legislation, such as the UK’s Online Safety Bill. Coalition members are also looking ahead and considering a range of mechanisms to embed safety across the metaverse.
More than 4.7 billion people use the internet, and McKinsey projects that roughly $2.5 trillion will be transacted online by 2025. Global estimates suggest that one in three internet users is a child under 18 years of age, and that one in three children is exposed to sexual content online.
Many digital platforms are faced with a complex challenge when it comes to regulating the internet. How do they keep users safe from harmful content without stifling free expression?
Private companies have the right to moderate content on their platforms according to their own policies, but there is an ongoing tension between too little and too much content being actioned by platforms that operate globally. Striking the right balance requires significant collaboration and continual monitoring of safeguards.
Our approach to keeping connected citizens safe
The Forum is supporting collaboration on regulation which addresses the safety of many vulnerable groups, especially children.
The Global Coalition for Digital Safety is considering both preventive and proactive tactics, such as safety by design and content moderation, as well as regulation that addresses online safety risks according to a company’s size, its role in the digital ecosystem, the users it serves and many other factors.
While there is no one-size-fits-all solution, the Forum is forging a way forward by creating action-oriented reports and a follow-through agenda for stakeholders who are working on digital safety from different perspectives. The Coalition has launched an expert advisory group that will be consulted to ensure the best multistakeholder insights on digital safety are brought to the forefront for implementation by members, as they drive public-private cooperation on three questions:
How do international human rights principles translate in a digital context?
What tech, policy, process and design interventions can improve online safety?
How should platforms assess digital safety risks and measure impact of interventions?
Members contributing to the coalition are also evaluating the range of existing efforts to improve online safety, and many have already implemented the following principles and practices to tackle harmful and illegal activities, behaviours, and content on the web:
Five Eyes Voluntary Principles to Counter Online Sexual Exploitation and Abuse
The Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online
The Santa Clara Principles on Transparency and Accountability in Content Moderation
Safety by Design
Republished from the World Economic Forum