Let’s say you’re a colonist on the Galactica, trying to outrun the Cylons, and you need to get a message off quickly. You’ve got the benefit of wireless communication streaming through a vacuum. Your turnaround time is limited only by the speed of light. The network, such as it is, adds no delay of its own; latency is simply the distance traveled divided by lightspeed.
Here on Earth, however, we’re not so lucky. Sending and receiving information over the internet has gotten dramatically faster since the network’s emergence in the 1980s. As long as you’ve got a good connection, it often feels like you get what you ask for pretty much instantaneously, but the internet’s latency reveals itself if you look closely.
You’ve likely spent a considerable amount of time on Zoom or other video call apps over the last several years, and it’s not uncommon for people to talk over one another because of the delay between when one person speaks and when that audio reaches the other end. Online multiplayer gaming likewise hits a snag when there’s too great a delay between when you enter a command and when the server receives it and relays it to other players. At the speed light travels, a signal should be able to reach anywhere in the world in less than a tenth of a second, yet on average it takes 30 to 40 times that long, and sometimes longer still. What gives?
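That tenth-of-a-second figure is easy to check with back-of-envelope arithmetic. A minimal sketch, assuming the longest possible surface trip is half of Earth’s circumference:

```python
# Back-of-envelope check: at lightspeed, a signal should reach anywhere
# on Earth in well under a tenth of a second. The distance is an
# illustrative assumption: half of Earth's ~40,075 km circumference,
# the farthest apart two points on the surface can be.

C_KM_PER_S = 299_792     # speed of light in a vacuum, km/s
ANTIPODE_KM = 20_037     # half of Earth's circumference

ideal_s = ANTIPODE_KM / C_KM_PER_S
print(f"Ideal one-way time to the far side of the world: {ideal_s * 1000:.0f} ms")

# With the article's 30-40x real-world slowdown, those tens of
# milliseconds balloon into whole seconds of waiting.
```

The ideal comes out to tens of milliseconds, which is why the 30-to-40-fold slowdown the researchers measured is so striking.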
Bruce Maggs, Brighten Godfrey, and Gregory Laughlin, from Duke University, the University of Illinois Urbana-Champaign, and Yale University respectively, were the principal investigators on a recent paper outlining a microwave-based network that would bring internet latency as close to the speed-of-light limit as possible. The paper was presented earlier this month at the 19th USENIX Symposium on Networked Systems Design and Implementation.
“When you’re web browsing, you’re clicking on buttons and the browser is going back and forth to the cloud asking for bits of data. Each one of those is not necessarily one back and forth. There might be several trips to find out where the server is, establish a connection, and do the security handshake. Even if it’s not a lot of data, the delay comes from having to wait to go back and forth multiple times,” Godfrey told SYFY WIRE.
Those trips through the internet’s famed series of tubes can add up pretty quickly for a couple of reasons. First, the speed of light you’re probably familiar with, 300,000 kilometers per second, is only the speed of light in a vacuum. Traveling through glass, which is what fiber optic cables are made of, slows it down by about a third. That’s just a limitation of the information superhighway, and it gets worse when you realize the routes information travels aren’t straight shots.
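The two penalties compound. Here is a rough sketch, assuming the ~1.47 refractive index typical of silica fiber and an illustrative 1.5x route stretch (neither figure is from the paper):

```python
# How glass and indirect routes compound, using illustrative numbers.

C_KM_PER_S = 299_792          # speed of light in a vacuum, km/s
FIBER_INDEX = 1.47            # typical refractive index of silica fiber (assumption)

fiber_speed = C_KM_PER_S / FIBER_INDEX   # light's speed inside the glass
print(f"Speed in fiber: {fiber_speed:,.0f} km/s, "
      f"{1 - fiber_speed / C_KM_PER_S:.0%} slower than vacuum")

# Now add route stretch: a 1,000 km straight shot served by 1,500 km of cable.
straight_km, cable_km = 1_000, 1_500
ideal_ms = straight_km / C_KM_PER_S * 1000
fiber_ms = cable_km / fiber_speed * 1000
print(f"Lightspeed ideal: {ideal_ms:.1f} ms, real fiber path: {fiber_ms:.1f} ms, "
      f"{fiber_ms / ideal_ms:.1f}x the ideal")
```

Even with these modest assumptions, the glass and the detour together more than double the one-way delay before any protocol overhead is counted.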
Photo: Dr. Bruce Maggs et al
“If you’re trenching fiber, someone’s house might be in the way or you have to go around a factory, there’s all sorts of obstacles you have to go around,” Laughlin said.
Even taking that into account, data doesn’t always travel along the shortest available route. Sometimes, your information might be going thousands of miles out of the way before getting to its destination.
“You can be communicating with someone a few hundred miles away and your packets could be traveling to another country or across an ocean, just because of the way ISPs might have decided to route data to save themselves cost,” Godfrey said.
Getting around those limitations will require not only more direct routes of communication but also a change in the medium through which information flows. To pull it off, the team looked toward communication innovations implemented by financial traders more than a decade ago.
“In financial trading, it’s winner take all. The first people to get their order in make a profit, and no one else does,” Maggs said.
In 2010, a trading company built a fiber optic cable from Chicago to New York on a straight route in order to give themselves an edge, but they were attacking the wrong part of the latency problem. Almost immediately, other people realized they could beat the new cable by sending their information over the air using microwave radio towers. A lot of effort was put into improving existing towers and installing new equipment in order to give the financial sector the fastest possible routes of communication. The new network would take that same philosophy and expand it across a wider area.
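The numbers show why the microwave towers won that race. A minimal sketch of the Chicago-to-New York comparison, where both the great-circle distance and the cable length are illustrative assumptions rather than route surveys:

```python
# Why microwaves beat even a near-straight cable between Chicago and
# New York. Distances here are illustrative assumptions.

C_KM_PER_S = 299_792
fiber_speed = C_KM_PER_S / 1.47    # light slowed by the glass
microwave_speed = C_KM_PER_S       # microwaves through air move at essentially c

geodesic_km = 1_150                # rough Chicago-New York great-circle distance
cable_km = 1_330                   # assumed: even a "straight" cable meanders a bit

fiber_ms = cable_km / fiber_speed * 1000
microwave_ms = geodesic_km / microwave_speed * 1000
print(f"Fiber one-way: {fiber_ms:.1f} ms, microwave one-way: {microwave_ms:.1f} ms")
```

Milliseconds of advantage per trip is trivial for a web user but decisive for a trading firm, which is why the towers went up so quickly.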
“Downloading a webpage takes about 37 times as long as it would at the speed of light. We realized we’re really far off, and yet there’s this technology that has been proven, in a niche application, to be able to achieve very close to the speed of light,” Maggs said.
By placing a network of microwave radio towers around the country, or around the world, at close enough intervals, we could reduce latency to about a third of what it is today. That improvement is a result of faster transit speeds and more direct routes, and it doesn’t even account for internet protocols that require multiple trips between the user and the cloud. Addressing those, a software problem rather than an engineering one, could reduce latency even further.
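A rough sketch of where a factor of about three could come from, with route-stretch figures that are illustrative assumptions rather than measurements from the paper:

```python
# Today's path is both stretched and slowed by glass; a tower network
# would be nearly straight and move at essentially c. The stretch
# factors below are illustrative assumptions.

C_KM_PER_S = 299_792

def one_way_ms(geodesic_km, route_stretch, speed_km_s):
    """Latency over a geodesic distance, inflated by a route-stretch factor."""
    return geodesic_km * route_stretch / speed_km_s * 1000

d = 2_000                                           # example distance; it cancels out
today_ms = one_way_ms(d, 2.0, C_KM_PER_S / 1.47)    # stretched route at fiber speed
towers_ms = one_way_ms(d, 1.05, C_KM_PER_S)         # near-straight towers at ~c

print(f"Today: {today_ms:.1f} ms, towers: {towers_ms:.1f} ms, "
      f"improvement: {today_ms / towers_ms:.1f}x")
```

Note that the distance drops out of the ratio: under these assumptions the improvement is the same whether the endpoints are 200 or 20,000 kilometers apart.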
The technology to build such a network exists today and could be implemented right now. What’s less clear is the business model for making it happen.
“The present internet, we’re not going to throw that away. You can use the two in parallel, sending the traffic that’s latency-sensitive on one network and the other traffic on the present network. There’s no reason we know of that a network like this couldn’t be built, technologically. The next question is how you begin to incrementally build it,” Godfrey said.
A microwave-based internet would allow for communication anywhere on Earth, and even to low-Earth orbit quickly enough to make it feel like real time. It could spell the end of latency, at least on a planetary scale. As our species spreads across space, however, communications delays will return with a vengeance. There’s not much we can do about the speed of light, unless you’re a Cylon.
If you’re suddenly in the mood, head over to Peacock for all your Battlestar Galactica needs.