Far be it from me to dispute Scientific American, but their recent bit about the internet reaching its "limit" is nonsense.

The number of smartphones, tablets and other network-connected gadgets will outnumber humans by the end of the year. Perhaps more significantly, the faster and more powerful mobile devices hitting the market annually are producing and consuming content at unprecedented levels. Global mobile data grew 70 percent in 2012, according to a recent report from Cisco, which makes a lot of the gear that runs the Internet. Yet the capacity of the world's networking infrastructure is finite, leaving many to wonder when we will hit the upper limit, and what to do when that happens.

There are ways to boost capacity of course, such as adding cables, packing those cables with more data-carrying optical fibers and off-loading traffic onto smaller satellite networks, but these steps simply delay the inevitable. The solution is to make the infrastructure smarter. Two main components would be needed: computers and other devices that can filter their content before tossing it onto the network, along with a network that better understands what to do with this content, rather than numbly perceiving it as an endless, undifferentiated stream of bits and bytes.

Now I'm all for "smarter" networks, for some definition of "smart", but there are significant downsides to the approach described later in the article by Markus Hofmann, head of Bell Labs Research in New Jersey. As the excerpt above concedes, there is a very simple and proven way to expand the capacity of the internet: more and fatter cables.

Says Hofmann:

We know there are certain limits that Mother Nature gives us--only so much information you can transmit over certain communications channels. That phenomenon is called the nonlinear Shannon limit [named after former Bell Telephone Laboratories mathematician Claude Shannon], and it tells us how far we can push with today's technologies. We are already very, very close to this limit, within a factor of two roughly. Put another way, based on our experiments in the lab, when we double the amount of network traffic we have today--something that could happen within the next four or five years--we will exceed the Shannon limit. That tells us there's a fundamental roadblock here. There is no way we can stretch this limit, just as we cannot increase the speed of light. So we need to work with these limits and still find ways to continue the needed growth.

The Shannon-Hartley theorem is real, but all it does is define the maximum rate at which information can be pushed through a single pipe of a given bandwidth and signal-to-noise ratio. It doesn't prevent us from laying new pipes.
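To put rough numbers on that distinction, here's a minimal sketch of the basic Shannon-Hartley calculation, with made-up bandwidth and SNR figures. The point is that the limit applies per channel; total capacity still scales with how many channels you light up:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers only, not real cable specs.
channel_bandwidth_hz = 50e9           # assume a 50 GHz optical channel
snr_db = 20                           # assume a 20 dB signal-to-noise ratio
snr_linear = 10 ** (snr_db / 10)

per_channel = shannon_capacity_bps(channel_bandwidth_hz, snr_linear)
print(f"Per-channel limit: {per_channel / 1e9:.0f} Gbit/s")

# The limit is per channel. Aggregate capacity still grows linearly
# with the number of fibers and cables you put in the ground.
for fibers in (1, 2, 10):
    print(f"{fibers} fiber(s): {fibers * per_channel / 1e12:.1f} Tbit/s")
```

No physics is violated by the last loop: the theorem caps each pipe, not the number of pipes.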

How do you keep the Internet from reaching "the limit"?

The most obvious way is to increase bandwidth by laying more fiber. Instead of having just one transatlantic fiber-optic cable, for example, you have two or five or 10. That's the brute-force approach, but it's very expensive--you need to dig up the ground and lay the fiber, you need multiple optical amplifiers, integrated transmitters and receivers, and so on.

Yes, it's expensive to lay more pipes, but the return on investment is massive. Cars are expensive too, and yet few people ride Segways. The DoD, Microsoft, Google, Apple, IBM, the financial industry, the telecoms: they all want more bandwidth. The demand is huge, the profits are huge, and more fiber cables are being laid down as fast as humanly possible.

Hofmann continues:

What's needed is a network that no longer looks at raw data as only bits and bytes but rather as pieces of information relevant to a person using a computer or smartphone. On a given day do you want to know the temperature, wind speed and air pressure or do you simply want to know how you should dress? This is referred to as information networking. ...

Today, if you want to know more about the data crossing a network--for example to intercept computer viruses--then you use software to peek into the data packet, something called deep-packet inspection. Think of a physical letter you send through the normal postal service wrapped in an envelope with an address on it. The postal service doesn't care what the letter says, it's only interested in the address. This is how the Internet functions today with regard to data. With deep-packet inspection, software tells the network to open the data envelope and read at least part of what's inside. [If the data contains a virus, the inspection tool may route that data to a quarantine area to keep it from infecting computers connecting to that network.] However, you can get only a limited amount of information about the data this way, and it requires a lot of processing power. Plus, if the data inside the packet is encrypted, deep-packet inspection won't work.

A better option would be to tag data and give the network instructions for handling different types of data. There might be a policy that states a video stream should get priority over an e-mail, although you don't have to reveal exactly what's in that video stream or e-mail. The network simply takes these data tags into account when making routing decisions.
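Mechanically, what Hofmann describes resembles the traffic-class tagging networks already do internally (DiffServ is the closest real-world analogue). Here's a toy sketch, with made-up tags and priorities, of a router that schedules by tag without ever reading the payload:

```python
import heapq
import itertools

# Hypothetical traffic-class tags and relative priorities
# (lower number = forwarded sooner). Purely illustrative.
PRIORITY = {"video": 0, "voice": 0, "web": 1, "email": 2, "bulk": 3}

class TaggedRouter:
    """Forwards packets by tag priority without inspecting payloads."""

    def __init__(self):
        self._queue = []
        self._seq = itertools.count()  # tie-breaker keeps FIFO order within a class

    def enqueue(self, tag, payload):
        prio = PRIORITY.get(tag, max(PRIORITY.values()))  # unknown tags go last
        heapq.heappush(self._queue, (prio, next(self._seq), payload))

    def forward_next(self):
        _, _, payload = heapq.heappop(self._queue)
        return payload  # the router never looks inside this; it could be encrypted

router = TaggedRouter()
router.enqueue("email", "email-message-1")
router.enqueue("video", "video-frame-1")
print(router.forward_next())  # prints "video-frame-1": the video tag wins
```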

There is a whole host of downsides. The idea is not new, and false tagging and labeling would completely undermine such a system, which is one of the reasons it doesn't already exist at internet scale. "Information networking" is basically impossible in an untrusted environment because people want their data kept safe and encrypted, not inspected by every computer that routes it to its destination. This kind of trust-based approach might work on specialized, contained networks, but it won't work on the internet.

What's more, opening up the content of data on the network would give enormous power to the governments and corporations that control the internet's infrastructure. Who is in favor of that other than bureaucrats and tyrants?

Even if a smarter Net can move data around more intelligently, content is growing exponentially. How do you reduce the amount of traffic a network needs to handle?

Our smartphones, computers and other gadgets generate a lot of raw data that we then send to data centers for processing and storage. This will not scale in the future. Rather, we might move to a model where decisions are made about data before it is placed on the network. For example, if you have a security camera at an airport, you would program the camera or a small computer server controlling multiple cameras to perform facial recognition locally, based on a database stored in a camera or server. [Instead of bottlenecking the network with a stream of images, the camera would communicate with the network only when it finds a suspect. That way it sends an alert message or maybe a single digital image when needed.]
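The pattern Hofmann describes (filter at the edge, transmit only the events that matter) is easy to sketch. Everything below is hypothetical stand-in logic, not a real camera API:

```python
import random

def looks_suspicious(frame):
    """Stand-in for local analysis (e.g. on-device face matching).
    Pure placeholder: flags roughly one frame in a thousand."""
    return random.random() < 0.001

def monitor(frames):
    """Yield only the frames worth reporting; everything else stays local."""
    for frame in frames:
        if looks_suspicious(frame):
            yield {"event": "match", "frame": frame}  # a small alert, not a video stream

# Simulate one day of footage from a single camera at 10 frames per second.
frames = range(10 * 60 * 60 * 24)
alerts = list(monitor(frames))
print(f"{len(frames):,} frames processed locally, {len(alerts)} alerts sent upstream")
```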

Increasing the amount of processing done on data before transmitting it is a viable approach, but there are trade-offs. In particular, the trend has been toward "post before process" rules that intentionally put raw data onto networks so that end users with purposes unknown to the data provider can process that raw data according to their own needs. If the data collector processes data before posting it, there is a strong likelihood that something will be discarded that would have been valuable to someone. The data collector (and the human who designed it) shouldn't be responsible for anticipating the needs of everyone who might ever use the data. Hence, post before process. It's an extremely valuable method, and network capacity will expand to preserve it.
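To make that trade-off concrete, here's a tiny made-up example of what pre-processing can throw away; the readings and the consumers are hypothetical:

```python
# Made-up hourly sensor readings from some data collector.
raw_readings = [21.4, 21.9, 35.2, 22.0, 21.7]

# "Process before post": the collector decides in advance that the average
# is all anyone will ever need, and posts only that.
posted_summary = sum(raw_readings) / len(raw_readings)   # 24.44

# A later consumer wants the peak value to hunt for anomalies.
peak_from_raw = max(raw_readings)   # 35.2, recoverable only from the raw data
print(round(posted_summary, 2), peak_from_raw)  # the spike is invisible in the summary
```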

So my prediction is that networks will get more efficient, but they'll also get fatter. Bandwidth will keep going up forever; the only hard limit to wired expansion is the physical space required to lay cable.

(HT: MG.)
