
On the two-tier internet

Of late there's been talk of ISPs somehow "charging" media-over-IP providers (such as Google video) for access to "their" pipes. This is hard to make sense of, since when I download a video from a site, I am doing it over my pipe, which I have bought from my ISP, subject to the contract that I have with it. Google is sending the data over their pipe, which they bought to connect to the central peering points and to my ISP. However, companies like BellSouth, afraid that voice and video will be delivered to their customers in competition with their own offerings, want to do something to stop it.

To get around the content-neutrality rules that ILEC-based ISPs are subject to, they now propose this as a QoS issue: there will be two tiers, one fast enough for premium video, and one not.

Today I've seen comments from Jeff Pulver and Ed Felten on possible consequences of such efforts. However, I think both directions miss something.

It is interesting to imagine how an ISP might try to make this two-tier system happen. It exists already to some degree, in that companies like Akamai, in order to provide higher-performance service to users, install servers as close as possible to the ISPs, in some cases paying those ISPs to do so. ISPs can certainly arrange that their connection to the broad internet is slow and not good enough for widespread real-time streaming video, and truthfully say that to get such video you need to connect directly to them, at a price. Indeed, this may already be true of some connections out there. Customers, where they have a choice, may decide to go to an ISP with a better connection to the broad internet rather than one with a good connection only to a few providers.

It's hard otherwise for an ISP to say to Google, "pay up!" because it's a question of "Pay up or what?" If Google doesn't pay up, will the ISP try to throttle Google video service to a lower volume than it would normally get? That seems like something that would cause customer revolt, along with technical tricks to bypass the filters. They would not block Google search (nor is Google likely to withhold search) from customers; that would cause a complete revolt and violate network neutrality rules.

They might try to alter the terms of their contracts with customers, to put on bandwidth caps, except where the customer pays more or the source company pays. To do this in a neutral way, however, would slow down all large file downloads.

The most workable system for them would be to try to limit sustained bandwidth: i.e., allow the user the full number of megabits they are paying for in bursts long enough to download most large files, but not long enough to stream videos over, say, 100 MB. After that, they could cut the bandwidth down to a smaller number, high enough to finish a large file download but not high enough to continue streaming a video. They could even make it deliberately choppy: i.e., start you at full speed for the first 50 MB, then cut you down for the next 20 MB to a level where a live streaming video would fail, and then pop you back up again to allow you to finish a file download.
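A minimal sketch of that burst-then-chop pattern, with made-up numbers (the 50 MB and 20 MB thresholds and both rates are illustrative assumptions, not any ISP's actual policy):

```python
# Sketch of the hypothetical burst-then-chop shaping described above.
# All thresholds and rates are illustrative assumptions.

FULL_RATE_MBPS = 10.0   # the rate the customer nominally pays for
CHOP_RATE_MBPS = 0.5    # too slow to sustain a live TV-quality stream
BURST_MB = 50           # full speed for the first 50 MB of a transfer
CHOP_MB = 20            # then throttled for the next 20 MB

def allowed_rate_mbps(transferred_mb):
    """Rate granted at a given point within a single large transfer."""
    if transferred_mb < BURST_MB:
        return FULL_RATE_MBPS              # burst window: files start fast
    if transferred_mb < BURST_MB + CHOP_MB:
        return CHOP_RATE_MBPS              # chop window: live streams stall
    return FULL_RATE_MBPS                  # restored: downloads still finish

# A ~3 Mbps live stream survives the burst but dies in the chop window,
# while a file download merely slows for 20 MB and then completes.
for mb in (10, 60, 90):
    print(mb, "MB in:", allowed_rate_mbps(mb), "Mbps")
```

The point of the chop window is exactly what the paragraph describes: a one-shot download tolerates a temporary slowdown, but a real-time stream cannot.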

Of course, if file downloads work, then so do non-streaming video downloads, those meant to be watched later. This is TVoIP as opposed to IPTV. It's what people do with BitTorrent, or with Akimbo or a number of other offerings.

The bandwidth of voice calls and even video calls is quite a bit smaller than that needed for live streaming video at TV quality. As such, it's hard to see such techniques stopping VoIP. In addition, such bandwidth limiting is not trivial to do; it requires fairly advanced software in the routers. However, if the ISPs want it and see lots of revenue from it, providers would build it.
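Rough numbers make the point. These rates are order-of-magnitude assumptions, not measurements, and the 500 kbps throttle level is hypothetical:

```python
# Approximate bandwidth needs (assumed, order-of-magnitude figures).
rates_kbps = {
    "VoIP call (G.711 voice plus packet overhead)": 90,
    "modest video call": 400,
    "live TV-quality video stream": 3000,
}

THROTTLE_KBPS = 500  # a hypothetical post-burst throttle level

for service, need in rates_kbps.items():
    verdict = "fits" if need <= THROTTLE_KBPS else "fails"
    print(f"{service}: {need} kbps -> {verdict} under a {THROTTLE_KBPS} kbps cap")
```

Even a fairly aggressive throttle that kills TV-quality streaming leaves voice, and possibly low-end video calls, untouched.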

In the end, this will be odd because it will mean deliberately downgrading service in order to sell what used to be there to somebody. They might be able to avoid that charge by placing the limits only on use of new capacity they build in the future. For example, an ISP with a 100 Mbit connection to the internet might decide to upgrade to a 200 Mbit connection, and build complex throttles on any use that would have hit a physical wall on the old 100 Mbit connection, unless that use is to or from somebody who paid them a toll.

Still, it all sounds like a bizarre system. I want my ISP to provide service to me, and be responsible to me. They fear that just being in the packet-selling business is a dead end and don't want to do that. This would be fine if there were a large amount of ISP choice in most locations. In some locations there is, and in those places we can let the market decide which ISP offerings will fail and which will win. In other areas, there is not really any choice in high-speed ISP, or at most two choices.

(A cable industry rep once declared that cable companies were not a monopoly. I agreed, saying that you can choose any cable company you want just by moving your home into their territory.)


Like many observers, I think Google should fight this now. I wrote an editorial about it: I think there are many young users who would never, for their entire lifetimes, subscribe to an ISP that cut off or throttled Google.


It is the large Tier 1 carriers like AT&T and BellSouth who are pushing this. Chances are, your ISP leases a data pipe from one of the large telcos. They use this large data pipe (a DS3, OC3, or OC12 circuit) to connect your market to one of the large public data exchange points located in places like New York City; Washington, DC; San Jose; Atlanta; and Miami.

The telco wants to charge extra to bring Google's and other content providers' data to the public data exchange point.

It is the large telcos' belief that since they own the lines, they have a right to prioritize the data flowing down those lines.

This, of course, ruins the current business model of places like Google, Blizzard, and Yahoo, and will most definitely drive up the cost of using the internet for individual users, resulting in pay-per-use subscription policies for things that have traditionally been ad-based revenue generators.

Quite literally, this is the very last bastion of the free and open internet, and it's the large telcos that are doing this, not the ISPs.

Every ISP buys a pipe to one or more major peering points. Either they buy their own pipe, or they buy carriage from another ISP, including the large telcos. Those pipe contracts, to the best of my knowledge, are priced on bandwidth, not on application. If a telco tried to change the pricing, the ISPs would switch to any available competition, and there is plenty of competition in this part of the field, and plenty of unlit dark fiber to compete with.

Google probably has its own pipes into the peering points, I would be very surprised if they don't have pipes into almost all of them. It's one of the reasons Google is always one of the fastest and most reliable sites. If I can't ping google, the cause is almost always my own link, not any intermediate link on the net.

The only pipes the carriers own on which there is little competition are the first mile pipes to our homes. It is only on these that they can say, "I am altering the deal. Pray I don't alter it any further."

Basically, it looks like they plan on throttling the peer-to-peer traffic, leaving the regular broadband speeds about the same, and letting the streaming video get guaranteed service. I doubt it will have any impact on our current broadband experience -- unless you already have 10 or 20 Mbps.

Isn't an OC3 connection very expensive? Bandwidth at that speed is for large corporations with deep pockets.
