“Net neutrality” sounds like a good idea. It isn’t.
As political slogans go, the phrase net neutrality has been enormously effective, riling up the chattering classes and forcing a sea change in the government’s decades-old hands-off approach to regulating the Internet. But as an organizing principle for the Internet, the concept is dangerously misguided. That is especially true of the particular form of net neutrality regulation proposed in February by Federal Communications Commission (FCC) Chairman Tom Wheeler.
Net neutrality backers traffic in fear. Pushing a suite of suggested interventions, they warn of rapacious cable operators who seek to control online media and other content by “picking winners and losers” on the Internet. They proclaim that regulation is the only way to stave off “fast lanes” that would render your favorite website “invisible” unless it’s one of the corporate-favored. They declare that it will shelter startups, guarantee free expression, and preserve the great, egalitarian “openness” of the Internet.
No decent person, in other words, could be against net neutrality.
In truth, this latest campaign to regulate the Internet is an apt illustration of F.A. Hayek’s famous observation that “the curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” Egged on by a bootleggers-and-Baptists coalition of rent-seeking industry groups and corporation-hating progressives (and bolstered by a highly unusual proclamation from the White House), Chairman Wheeler and his staff are attempting to design something they know very little about: not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.
Origins of a Regulatory Meme
“Network neutrality” was coined in a 2003 paper by the law professor Tim Wu. A “neutral” Internet, Wu postulated, “is an Internet that does not favor one application (say, the world wide web) over others (say, email).” For Wu, “email, the web, and streaming applications are in a battle for the attention and interest of end-users. It is therefore important that the platform be neutral to ensure the competition remains meritocratic.”
Over time, Wu’s notion has morphed from vague abstraction to regulatory imperative and even article of faith. Net neutrality has come to represent a set of edicts aimed at constraining Internet Service Providers (ISPs) to a specific, static vision of the Internet in which they treat all data equally: not charging differentially (or “discriminating,” in activists’ parlance) by user, content, site, platform, application, type of equipment used, or mode of communication.
Along the way, the movement acquired some radical political baggage: to “get rid of the media capitalists in the phone and cable companies and to divest them from control,” in the 2009 words of media activist Robert McChesney, founder of the anti-media-consolidation group Free Press. Not coincidentally, Free Press, which has been at the vanguard of net neutrality activism, was long chaired by Tim Wu.
But the net neutrality movement has had less to do with class struggle than with the familiar delusion of technocrats everywhere: that government can “design” a better future if only it pulls the right levers. The goal, in theory, is to “save the Internet” from big corporations, ensuring (in Free Press’ words) that “it will remain a medium for free expression, economic opportunity and innovation.” According to a group of pro-net-neutrality startup investors, this can be accomplished only by locking in yesterday’s business model. They want new Internet applications, like their favorite Internet companies of the past, to “be able to afford to [make] their service freely available and then build a business over time as they better understand the value consumers find in their service.” In the name of innovation, net neutrality proponents want the Internet to remain just as it is.
But even without government’s guiding hand, neutrality has long been an organizing principle of the Net. The engineers who first started connecting computers to one another decades ago embraced, as a first-cut rule for directing Internet traffic, the “end-to-end principle”: a tenet of network architecture holding that the network itself should interfere as little as possible with traffic flowing from one end-user to another. Yet the idea that network “intelligence” should reside only at the endpoints has never been, and could never be, an absolute. Effective network management has always required prominent exceptions to the end-to-end principle.
Not all bits are created equal, as the designers of those first Internet software protocols recognized. Some bits are more time-sensitive than others. Some bits need to arrive at their destination in sequence, while others can turn up in any order. For instance, live streaming video, interactive gaming, and VoIP calls won’t work if the data arrive out of order or with too much delay between data packets. But email, software updates, and even downloaded videos don’t require such preferential treatment; they work as long as all the bits eventually end up where they’re supposed to go.
Anticipating the needs of future real-time applications, early Internet engineers developed differentiated services (“DiffServ”) and integrated services (“IntServ”) protocols, which have discriminated among types of Internet traffic for decades. The effect on less time-sensitive applications has gone virtually unnoticed. Does anyone really care if their email shows up a few milliseconds “late”?
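The kind of prioritization these protocols perform can be sketched in a few lines. The snippet below is a simplified illustration, not the actual DiffServ mechanism: the class names and priority values are hypothetical stand-ins for traffic-class marks, and the scheduler is a bare strict-priority queue that sends time-sensitive packets ahead of bulk traffic regardless of arrival order.

```python
import heapq

# Hypothetical traffic classes, loosely inspired by DiffServ's idea of
# marked packets; these names and numbers are NOT real DSCP codepoints.
PRIORITY = {"voip": 0, "video": 1, "email": 2, "update": 3}  # lower = served first

def schedule(packets):
    """Drain a congested queue in strict priority order.

    Each packet is a (kind, payload) pair; time-sensitive kinds are
    transmitted first, while bulk traffic waits its turn.
    """
    heap = []
    for seq, (kind, payload) in enumerate(packets):
        # seq breaks ties, so packets of the same class keep arrival order
        heapq.heappush(heap, (PRIORITY[kind], seq, payload))
    while heap:
        _, _, payload = heapq.heappop(heap)
        yield payload

arrivals = [("email", "e1"), ("voip", "v1"), ("update", "u1"), ("voip", "v2")]
print(list(schedule(arrivals)))  # voice packets jump the queue: ['v1', 'v2', 'e1', 'u1']
```

Note that the email packet still gets delivered, just a few positions later, which is exactly why this discrimination goes unnoticed by users of less time-sensitive applications.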
But these are engineering prioritizations, and they come without an associated price mechanism. As a result, there’s little incentive for anyone to mark these packets accurately: In the face of network congestion, everyone wants the highest priority as long as it’s free.
Here, as throughout the economy, prices would make everyone reveal the value they place on a transaction, thereby allocating scarce resources efficiently. An Internet characterized by business prioritization, offering fast and slow lanes for purchase by end-users or content providers, could make all applications work better, significantly increasing consumer satisfaction while also promoting broadband adoption and deployment.
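The role a price mechanism would play can likewise be sketched. In the hypothetical below (the application names and dollar figures are invented for illustration), a scarce number of fast-lane slots is allocated to whoever bids the most, so the uses that value low latency most highly, rather than whoever marks their own packets “urgent,” get priority.

```python
def allocate(slots, bids):
    """Allocate a scarce number of fast-lane slots by willingness to pay.

    `bids` maps an application name to its bid; the highest bidders,
    i.e. the uses that value priority most, win the slots.
    """
    ranked = sorted(bids, key=bids.get, reverse=True)
    return set(ranked[:slots])

# Illustrative numbers only: a latency-sensitive call values priority
# highly, while a background OS update barely values it at all.
bids = {"video_call": 9.0, "game": 6.0, "email": 0.5, "os_update": 0.1}
print(allocate(2, bids))  # the two highest-value uses get the fast lane
```

Contrast this with self-reported priority marks at a price of zero, where every application claims the top class and the marks carry no information at all.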
Thus far the demand for these types of business models has been fairly limited for the simple reason that congestion (scarcity of bandwidth) is, for now, an infrequent problem. To be sure, ISPs offer consumers varying tiers of service, and mobile broadband providers (facing far more frequent congestion) are increasingly experimenting with prioritization schemes, such as AT&T’s Sponsored Data program and T-Mobile’s Music Freedom service. But the current lack of uptake doesn’t mean that a market for prioritization wouldn’t develop without rules preventing it. And it will be the unknown applications of tomorrow (say, holographic video streaming) that will most likely lead to, and benefit from, this type of prioritization.
Generally speaking, neutrality advocates don’t spend much time in the weeds of boring traffic-flow engineering and network prioritization. What has animated everyone from HBO comedian/anchor John Oliver to millions of irate FCC commenters has been an angry suspicion that somewhere, some rich corporations are on the verge of hijacking the Internet’s architecture to enrich themselves while excluding others. In Oliver’s pointed words, net neutrality rules are code for “preventing cable company fuckery.”