from the absolutely-nothing dept
In August 2012, YouTube briefly took down a
video that had been uploaded by NASA. The video, which depicted a
landing on Mars, was flagged by YouTube’s Content ID system as a
potential case of copyright infringement; but, like everything else NASA
creates, it was in the public domain. Then, in 2016, YouTube’s
automated algorithms removed
another video, this time a lecture by a Harvard Law professor, which
included snippets of various songs ranging from 15 to roughly 40
seconds. Of course, use of copyrighted material for educational
purposes is perfectly legal. Examples of unwarranted content
takedowns are not limited to these two: automated algorithms have
been responsible for taking down perfectly legitimate content,
including material that documents or merely relates to war.
But the over-blocking of content through automated filters is only one
part of the problem. A few years ago, automated filters were used by
only a handful of companies; over the years, however, they have become
the go-to technical tool for policy makers wanting to address any
content issue, whether it involves copyrighted material or any other
form of objectionable content. In particular, in the last few years,
Europe has been embracing
upload filters as a solution for the management of content. Although
never explicitly mentioned, upload filters started appearing as early
as 2018 in various Commission documents but became a tangible policy
tool in 2019 with the promulgation of the Copyright Directive. Broadly
speaking, upload filters are technology tools that platforms, such as
Facebook and YouTube, use to check whether content published by their
users falls within any of the categories of objectionable content.
They are not new – YouTube’s Content ID system dates back to
2007; they are also not cheap – YouTube’s Content ID has cost a
reported $100 million to build. Finally, they are ineffective –
machine learning tools will always over-block or under-block content.
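To see why over- and under-blocking are unavoidable, consider a toy sketch (not any platform’s actual system) of a similarity-threshold filter: uploads are fingerprinted, compared against a list of protected works, and blocked when the overlap crosses a threshold. Any fixed threshold misfires in both directions.

```python
# Toy upload filter for illustration only: fingerprints are sets of
# 3-word shingles, similarity is Jaccard overlap, and anything over a
# threshold is blocked. No real platform works exactly this way.

def fingerprint(text: str) -> set[str]:
    """Crude content fingerprint: the set of 3-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + 3]) for i in range(len(words) - 2)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity between two fingerprints."""
    fa, fb = fingerprint(a), fingerprint(b)
    if not fa or not fb:
        return 0.0
    return len(fa & fb) / len(fa | fb)

def upload_filter(upload: str, protected_works: list[str], threshold: float) -> bool:
    """Return True if the upload is blocked."""
    return any(similarity(upload, w) >= threshold for w in protected_works)

protected = ["imagine all the people living life in peace "
             "you may say i am a dreamer"]

# A lecture quoting a short snippet for teaching: lawful, but matched anyway.
lecture = ("in this class we analyze the line living life in peace "
           "you may say i am a dreamer as an example of fair use")

# A reworded infringing copy: unlawful, but it evades the match entirely.
evasion = "imagine every person alive existing peacefully perhaps you think i dream"

print(upload_filter(lecture, protected, threshold=0.15))  # True: over-blocked
print(upload_filter(evasion, protected, threshold=0.15))  # False: slips through
```

The point is structural, not about the specific numbers: the lawful quotation overlaps the protected work far more than the infringing rewrite does, so no threshold classifies both cases correctly.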
But, even with these limitations, upload filters continue to be the
preferred option for content policy making. Partly, this is due to
the fact that policy makers depend on online platforms to offer
technology solutions that can scale and can moderate content en
masse. Another reason is that content elimination and takedowns are
perceived to be easier and to have an instant effect. In a world where
more than 500 hours of content are uploaded
to YouTube every minute and 350 million photos are posted to
Facebook every day, technology solutions such as upload filters appear more
desirable than the alternative of leaving the content up. A third
reason is the computer-engineering bias of the industry. What this
means is that, typically, when you build programmed systems, you follow
a pretty much predetermined route: you identify a gap, build
something to fill that gap (and, hopefully, make money in the
process) and then iteratively fix bugs in the program as they are
uncovered. Notice that, in this process, the question of whether the
problem is best solved by building software is never asked.
This has been the case with the ‘upload filters’ approach. As
online platforms become key infrastructure for users, however, the
moderation practices they adopt are not only about content removal.
Through such techniques, online platforms undertake a
governance function, which must ensure the productive, pro-social and
lawful interaction of their users. Governments have depended on
platforms carrying out this function for quite some time but, over
the past few years, they have become increasingly interested in
setting the rules for social network governance. To this end, there
is a growing trend of new regional
policies that mandate upload filters for content moderation.
What is at stake?
The use of upload filters, and the legislative efforts to promote them
and make them compulsory, is having a major effect on Internet
infrastructure. One of the core properties of the Internet is that it
is based on an open architecture of interoperable and reusable
building blocks. Within this open architecture, technology building
blocks work together to provide services to end users, while each
building block delivers a specific function. All this allows for fast
and permissionless innovation. Online platforms, however, are now
inserting automated filtering mechanisms deep in their networks to
deliver services to their users. Platforms
with significant market power have convened a forum called the Global
Internet Forum to Counter Terrorism (GIFCT),
through which approved participants (but not everyone) collaborate to
create shared upload filters. The idea is that these filters are
interoperable amongst platforms, which, prima facie,
is good for openness and inclusiveness. But allowing the design
choices of filters to be made by a handful of companies turns those
companies into de facto standards bodies. This provides neither
inclusivity nor openness. To
this end, it is worrisome that some governments appear keen to
embrace, and perhaps anoint, this industry consortium as a permanent
institution for anyone who accepts content from users and republishes
it. In effect, this makes an industry consortium, with its design
assumptions, a legally-required and permanent feature of Internet
architecture. These closed consortiums, like the GIFCT, combined with governments’
urge to make upload filters mandatory can violate some of the most
important Internet architecture principles: ultimately, upload
filters are not based on collaborative, open, voluntary
standards but on closed, proprietary ones, owned by specific
companies. Therefore, unlike traditional building blocks, these
upload filters end up not being interoperable. Smaller online
platforms will need to license them. New entrants may find the
barriers to entry too high. This tilts the scales in favor of large,
incumbent market players and disadvantages innovators with new
approaches to these problems.
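To make the interoperability concern concrete, here is a minimal, hypothetical sketch of consortium-style hash sharing (the class and its methods are invented for illustration and do not depict GIFCT’s actual system): member platforms contribute digests of banned content to a common list, and every member blocks exact matches. Whatever design choice is baked into the shared list (here, exact SHA-256 matching, which a one-byte change defeats) is inherited by every member at once.

```python
import hashlib

# Hypothetical sketch only: the class and method names are invented for
# illustration and are not GIFCT's real API.
class SharedHashList:
    """A common blocklist of content digests shared across member platforms."""

    def __init__(self) -> None:
        self._digests: set[str] = set()

    def contribute(self, content: bytes) -> None:
        """A member platform adds the digest of content it has banned."""
        self._digests.add(hashlib.sha256(content).hexdigest())

    def is_blocked(self, content: bytes) -> bool:
        """Every member applies the same exact-match check on upload."""
        return hashlib.sha256(content).hexdigest() in self._digests

shared = SharedHashList()
shared.contribute(b"bytes of a banned video")

# Every member blocks the identical file...
print(shared.is_blocked(b"bytes of a banned video"))    # True
# ...and every member shares the same blind spot: a trivially
# altered copy passes on all platforms simultaneously.
print(shared.is_blocked(b"bytes of a banned video."))   # False
```

A smaller platform that licenses such a system, rather than helping design it, inherits both the shared block decisions and the shared blind spots.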
Additionally, mandating GIFCT tools, or any other specific technology,
determines the design assumptions underpinning the upload filter
framework. Upload filters function as a sort of panopticon operated by
social media companies. But, if the idea is to design a social media
system that is inherently resistant to this sort of surveillance (for
example, one that encrypts communications end-to-end), then upload
filters are not going to work, because the communications are
protected from everyone except the users themselves. In effect, that
means that mandating GIFCT tools further determines what sort of
system design is acceptable. This makes the regulation invasive,
because it undermines the “general purpose” nature of the
Internet: some purposes simply get ruled out under this approach.
The current policy objective of upload filters is twofold: regulating
content and taming the dominance of certain players. These are
legitimate objectives. But, as technology tools, upload filters fail
on both counts: not only do they have limitations
in moderating content effectively, but they also cement
the dominant position of big technology companies. Given the costs of
creating such a tool and the requirement for online platforms to have
systems that ensure the fast, rigorous and efficient takedown of
content, there is a trend emerging where smaller players depend on
the systems of bigger ones.
Ultimately, upload filters are imperfect and not an effective solution to
our Internet and social media governance problems. They do not
reduce the risk of recidivism: they eliminate individual instances of
a problem, not its recurrence. Aside from the fact that upload filters cannot
solve societal problems, mandated upload filters can adversely affect
Internet architecture. Generally, the Internet’s architecture
can be impacted
by unnecessary technology tools, like deep packet inspection, DNS
blocking or upload filters. These tools produce consequences that run
counter to the benefits we expect from the Internet: they compromise
its flexibility and prevent it from continuously serving a
diverse and constantly evolving community of users and applications.
Instead, they require significant changes to the networks in order to
support their use.
There is a real risk that upload filters will become a permanent feature
of the Internet architecture and of online dialogue. This is not a
society that any of us should want to live in – a society where
speech is determined by software that will never be able to grasp the
subtlety of human communication.
Konstantinos Komaitis is the Senior Director, Policy Strategy at the Internet
Society. Farzaneh Badiei is the Director of the Social Media Governance
Initiative at Yale Law School.