As background, I’m a professor of communication and journalism, which means I study how people make meaning through media. This is broad, but this focus on meaning and media poses an intellectual challenge that is tightly tied to practices like journalism, technology design, and policy making. These are the professions that often create the conditions under which people make meaning through media, so they matter to public life.
By “making meaning through media,” I mean this: how people who will never meet face-to-face discover, argue about, and manage collective life, and the stakes involved in our interconnections — how communication makes publics.
Publics are not natural. They don’t exist in the wild. We make public life through:
- what we watch or read or post online, and what’s available to us;
- what stories journalists choose to tell, and are incentivized to tell;
- which speech and ideas advertising markets reward and make more likely;
- how regulations govern speech, media monopolies, and political communication.
I could go on. The point is this: How well we govern ourselves — learn about each other, discover shared concerns, encourage or sanction behavior — all of this governance depends on how well our communication systems work.
Today, these systems of communication — these systems of self-governance that make publics — increasingly live within privately controlled infrastructures. These infrastructures create the conditions under which people make meaning. They make some publics more likely than others. These infrastructures are often called platforms.
Platform makers often say that they don’t create information, that they’re neutral. But we know that they make important decisions about how information is gathered, circulated, analyzed, and sold. They make images of the world with our information. Following José van Dijck, Thomas Poell, and Martijn de Waal, we can distinguish between two kinds of platforms. The first is “sectoral platforms” — think Airbnb, Spotify, Netflix, Uber. They connect people who have something with people who want something — and they typically focus on particular domains, like housing, entertainment, transportation, or news.
But there is a second, more powerful kind of platform I want to focus on: “infrastructural platforms.” These platforms make the often invisible web through which almost all data today are captured, processed, stored, circulated, and sold. They are typically created by the “Big Five” technology companies: Microsoft, Apple, Google, Facebook, Amazon. They are most obviously search engines, browsers, email clients, advertising markets, social networking sites, geolocation and navigation systems.
But they also make and apply rules about what content is allowed to exist and circulate online. They direct vast global workforces of contractors and proprietary algorithms that moderate speech. Facebook is creating its own “Supreme Court” to judge appeals. Google has tried to create its own artificial intelligence ethics board. Platforms sometimes talk about themselves as governments and, indeed, the government of Denmark has an official “ambassador of technology.”
They are building a complete stack of experience through custom hardware, software, server farms, data warehouses, private internet networks, undersea cables, and even entire city neighborhoods like Google’s Sidewalk project in Toronto. Instead of thinking about platform companies as the next generation of newspapers, radio stations, or TV channels, we should see them as entirely new entities that shapeshift constantly. Sometimes they are like cities, newsrooms, post offices, libraries, or utilities — but they are always like advertising firms. Do not forget this: They earn the vast majority of their revenue through advertising. They are primarily driven by advertising priorities.
The scope and scale of these platforms are unprecedented, moving far faster than governments and civil society, often outpacing the very idea of governance. We are usually left anticipating and reacting — imagining what these companies might do and coping with what they have done. We and they are now trying to figure out whether we should simply apply existing rules or invent entirely new ones.
In trying to understand public life in these platform societies, I think there are at least five ways to see platform power. (There are likely many more, but these seem the most pressing right now.)
- Image of the public. Platforms are often motivated by their own vision of public life. They often use words like “community,” “connection,” and “public,” but without much precision. Governments can question platforms’ assumptions. What does “community” mean? Why are “connections” almost always good? Do not accept platforms’ starting points — know your vision of public life and demand precision from platforms.
- Scale. Platforms want large-scale networks. They need big data, rich connections, constant surveillance. This is so their advertising profiles can be targeted, their algorithms trained, their predictive models improved. This type of scale is not necessarily the scale that works for good public life. Question platforms’ desires for scale.
- Categories and terms. Platforms have rapidly changed the popular meanings of words like “friend,” “share,” “like,” “private,” “speech,” “trusted,” and “popular.” They have co-opted many of these words. Civil society and government can make and defend their own definitions of these and other words that matter for public life. Deep understandings of these and other words are one of the many reasons that the humanities and the arts matter. Don’t uncritically accept platforms’ definitions of the words we need to describe ourselves.
- Transparency, accountability, and explainability are not the same thing. Seeing inside a system is not the same as knowing its power or how it works. Transparency does not automatically create understanding or accountability. Governments and civil society should question what platforms mean by transparency and demand better knowledge of platforms, even when they say that such knowledge is proprietary. Many platforms cannot explain exactly how their artificial intelligence systems work. If companies cannot explain their systems, then those systems should be seen as uncontrolled and harmful by default.
- Use controversies as opportunities. Whenever a controversy arises, governments and civil societies can use it to clarify types of power and images of the public. Crises are never just crises; they are always fights over something — harm, trust, difference, identity, freedom — and they teach us how platforms understand those values. Rapidly frame controversies in terms of the public values at stake and the image of the public in question.
Note that I haven’t asked: “What’s the impact of technology on society?” That’s the wrong question. Platforms are societies of intertwined people and machines. There is no such thing as “online life” versus “real life.” We give massive ground if we pretend that these companies are simply having an “effect” or “impact” on some separate society.
I think self-regulation is proving insufficient, and even platforms’ own requests for regulation need to be viewed skeptically. It would certainly be easier for them to apply global speech standards rather than fuss with different geographies and cultures, but their desires for simplicity and large-scale standards cannot be allowed to collapse human differences. We should lead with public principles grounded in democratic legitimacy and accountability, not let platforms define for themselves the terms of their own regulation.
Flawed as they are (and they often are), we have courts, we have parliaments, we have elections, we have civil societies — we have traditions of democratic legitimacy. And let’s not forget: Platforms need us — our content, our labor, our attention, our money. They are ours to control — if we can figure out how to do it.