Internet

Internet economics, technology and issues

Towards a more secure web, and better TLS

Today an interesting paper (written with the assistance of the EFF) was released. The authors have found evidence that governments are compromising trusted "certificate authorities" by issuing warrants to them, compelling them to create a false certificate for a site whose encrypted traffic they want to snoop on.

Will search engines focus on the negative?

I'm at DLD in Munich, and going to Davos tomorrow. During a panel at DLD on identity and tracking, I made brief mention of my concept of the privacy dangers of the AIs of the future, which will be able to extract things from recorded data (like faces) that we can't extract today.

I mentioned a new idea, however: a search engine that focuses on the negative, because through advanced algorithms it can tell the difference between positive and negative content.

The net needs a free way to combine video and slides for showing talks

These days it is getting very common to make videos of presentations, and even to do live streams of them. And most of these presentations have slides in Powerpoint or Keynote or whatever. But this always sucks, because the camera operator -- if there is one -- never moves between the speaker and the slide the way I want. You can't please everybody of course.

In the future, everyone famous will get service 15 minutes faster

There's a phenomenon we're seeing more and more often. A company screws over a customer, but this customer now has a means to reach a large audience through the internet, and as a result it becomes a PR disaster for the company. The most famous case recently was United Breaks Guitars where Nova Scotia musician David Carroll had his luggage mistreated and didn't get good service, so he wrote a funny song and music video about it.

Twitter clients, only shorten URLs as much as you truly need to and make them readable

I think URL shorteners are a curse, but thanks to Twitter they are growing vastly in use. If you don't know, URL shorteners are sites that will generate a compact encoded URL for you, turning a very long link into a short one that's easier to cut and paste, and in particular these days, one that fits in the 140 character constraint on Twitter.

A super-fast web transaction (and Google SPDY)

(Update: I had a formatting error in the original posting, this has been fixed.)

A few weeks ago when I wrote about the non deployment of SSL I touched on an old idea I had to make web transactions vastly more efficient. I recently read about Google's proposed SPDY protocol which goes in a completely opposite direction, attempting to solve the problem of large numbers of parallel requests to a web server by multiplexing them all in a single streaming protocol that works inside a TCP session.

While calling attention to that, let me outline what I think would be the fastest way to do very simple web transactions. It may be that such simple transactions are no longer common, but it's worth considering.

Consider a protocol where you want to fetch the contents of a URL like "www.example.com/page.html" and you have not been to that server recently (or ever). You want only the plain page; you are not yet planning to fetch lots of images and stylesheets and javascript.

Today the way this works is pretty complex:

  1. You do a DNS request for www.example.com via a UDP request to your DNS server. In the pure case this also means first asking the root servers where ".com" is, but your DNS server almost surely knows that already, so a UDP request is sent to the ".com" master server.
  2. The ".com" master server returns the address of the DNS server for example.com.
  3. You send a DNS request to the example.com DNS server, asking where "www.example.com" is.
  4. The example.com DNS server sends a UDP response back with the IP address of www.example.com.
  5. You open a TCP session to that address. First, you send a "SYN" packet.
  6. The site responds with a SYN/ACK packet.
  7. You respond to the SYN/ACK with an ACK packet. You also send the packet with your HTTP "GET" request for "/page.html." This is a distinct packet, but there is no roundtrip, so this can be viewed as one step. You may also close off your sending with a FIN packet.
  8. The site sends back data with the contents of the page. If the page is short it may come in one packet. If it is long, there may be several packets.
  9. There will also be acknowledgement packets as the multiple data packets arrive in each direction. You will send at least one ACK. The other server will ACK your FIN.
  10. The remote server will close the session with a FIN packet.
  11. You will ACK the FIN packet.
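
To make those steps concrete, here is a minimal Python sketch of the conventional fetch described above. It is a sketch only: the DNS queries, the three-way handshake, the ACKs and the FIN teardown all happen inside the library calls, which is exactly why it is easy to forget how many round trips are involved.

```python
import socket

host, path = "www.example.com", "/page.html"

# Steps 1-4: DNS resolution -- one or more UDP round trips, often answered
# from your resolver's cache.
ip = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)[0][4][0]

# Steps 5-7: TCP three-way handshake (SYN, SYN/ACK, ACK), then the GET.
request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
with socket.create_connection((ip, 80)) as s:
    s.sendall(request.encode())

    # Steps 8-11: the data packets, their ACKs and the closing FIN exchange
    # all take place inside recv() and the socket teardown.
    chunks = []
    while True:
        data = s.recv(4096)
        if not data:
            break
        chunks.append(data)

print(b"".join(chunks)[:300])
```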

You may not be familiar with all this, but the main thing to understand is that there are a lot of roundtrips going on. If the servers are far away and the time to transmit is long, it can take a long time for all these round trips.

It gets worse when you want to set up a secure, encrypted connection using TLS/SSL. On top of all the TCP, there are additional handshakes for the encryption. For full security, you must complete the encryption setup before you send the GET, because the path in the URL should itself be kept encrypted.
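
As a rough illustration of the layering, here is a minimal Python sketch using the standard ssl module (www.example.com stands in for any site): the TLS handshake adds its own round trips on top of the TCP handshake, and only then does the GET go out.

```python
import socket
import ssl

host = "www.example.com"
context = ssl.create_default_context()

with socket.create_connection((host, 443)) as raw:               # TCP handshake first
    with context.wrap_socket(raw, server_hostname=host) as tls:  # then the TLS handshake
        # Only now is it safe to send the GET, since the path travels encrypted.
        tls.sendall(b"GET /page.html HTTP/1.1\r\nHost: www.example.com\r\n"
                    b"Connection: close\r\n\r\n")
        print(tls.recv(300))
```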

A simple alternative

Consider a protocol for simple transactions where the DNS server plays a role, and short transactions use UDP. I am going to call this the "Web Transaction Protocol" or WTP. (There is a WAP variant called that but WAP is fading.)

  1. You send, via a UDP packet, not just a DNS request but your full GET request to the DNS server you know about, either for .com or for example.com. You also include an IP and port to which responses to the request can be sent.
  2. The DNS server, which knows where the target machine (or the next-level DNS server) is, forwards the full GET request to that server for you. It also sends back the normal DNS answer to you via UDP, including a flag to say it forwarded the request for you (or that it refused to, which is the default for servers that don't even know about this). It is important to note that quite commonly, the DNS server for example.com and the www.example.com web server will be on the same LAN, or even be the same machine, so there is no hop time involved.
  3. The web server, receiving your request, considers the size and complexity of the response. If the response is short and simple, it sends it to your specified address in a single UDP packet, or possibly a few. If no ACK is received in a reasonable time, it resends a few times until one arrives.
  4. When you receive the response, you send an ACK back via UDP. You're done.

The above transaction would take place incredibly fast compared to the standard approach. If you know the DNS server for example.com, it will usually mean a single packet to that server, and a single packet coming back -- one round trip -- to get your answer. If you only know the server for .com, it would mean a single packet to the .com server which is forwarded to the example.com server for you. Since the master servers tend to be in the "center" of the network and are multiplied out so there is one near you, this is not much more than a single round trip.
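
Since WTP is only a thought experiment, here is a purely hypothetical client-side sketch in Python. The plain-text datagram format, the "REPLY-PORT" field and the placeholder server address are all invented for illustration; the point is simply that one datagram out and one back would carry the whole transaction.

```python
import socket

# Placeholder address for the .com (or example.com) DNS server -- an assumption.
DNS_OR_WEB_SERVER = ("192.0.2.1", 5353)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 0))                      # the port replies should come back to
reply_port = sock.getsockname()[1]

# Step 1: one datagram carries the DNS question *and* the full GET, plus
# where to send the answer. (Invented wire format, for illustration only.)
request = f"WTP GET www.example.com /page.html REPLY-PORT {reply_port}\n".encode()
sock.sendto(request, DNS_OR_WEB_SERVER)

# Steps 2-4: the DNS server forwards the GET; the web server answers us
# directly over UDP and we ACK it. Since no such server exists, this
# recvfrom() will simply time out; resends are omitted from the sketch.
sock.settimeout(2.0)
data, server = sock.recvfrom(65535)
sock.sendto(b"WTP ACK\n", server)
print(data[:300])
```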

Do you get Twitter? Is a "sampled" medium good or bad?

I just returned from Jeff Pulver's "140 Characters" conference in L.A. which was about Twitter. I asked many people if they get Twitter -- not if they understand how it's useful, but why it is such a hot item, and whether it deserves to be, with billion dollar valuations and many talking about it as the most important platform.

Some suggested Twitter is not as big as it appears, with a larger churn than expected and some plateau appearing in new users. Others think it is still shooting for the moon.

No, I don't want to participate in a customer satisfaction survey every time

It seems that with more and more of the online transactions I engage in -- and sometimes even when I don't buy anything -- I will get a request to participate in a customer satisfaction survey. Not just some of the time in some cases, but with every purchase. I'm also seeing it on web sites -- sometimes just for visiting a web site I will get a request to do a survey, either while reading, or upon clicking on a link away from the site.

ClariNet history and the 20th anniversary of the dot-com

Twenty years ago (Monday) on June 8th, 1989, I did the public launch of ClariNet.com, my electronic newspaper business, which would be delivered using USENET protocols (there was no HTTP yet) over the internet.

ClariNet was the first company created to use the internet as its platform for business, and as such this event has a claim to being the birth of the "dot-com" concept which so affected the world in the two intervening decades. There are other definitions and other contenders, which I discuss in the article below.

Towards better pseudonym posting on message boards - casual commenting.

As you may know, I allow anonymous comments on this blog. Generally, when a blog is small, you don't want to do too much to discourage participation. Making people sign up for an account (particularly with email verification) is too much of a barrier when your comment volume is small. You can't allow raw posting these days because of spammers -- you need some sort of captcha or other proof-of-humanity -- but in most cases moderate readership sites can allow fairly easy participation.

Simple script to count how many read your blog

Ok, admit it, who likes blogging into a vacuum? You want to know how many people are actually reading your blog.

I have created a simple Perl script that scans your blog's log file and attempts to calculate how many people read the blog and the RSS feeds.

You can download the feed reader script. I release it under GPL2.
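
For a rough idea of the approach (this is not the released Perl script, just an illustrative Python sketch; the log file name and feed path are assumptions), counting distinct client addresses for pages versus the RSS feed looks something like this:

```python
import re

LOG_FILE = "access.log"     # assumed: your web server's combined/common log
FEED_PATH = "/rss.xml"      # assumed: the URL of your RSS feed

page_readers, feed_readers = set(), set()

with open(LOG_FILE) as log:
    for line in log:
        # Common log format: client ident user [date] "GET /path HTTP/x.y" ...
        m = re.match(r'(\S+) \S+ \S+ \[[^\]]+\] "GET ([^" ]+)', line)
        if not m:
            continue
        ip, req_path = m.groups()
        if req_path.startswith(FEED_PATH):
            feed_readers.add(ip)
        elif req_path.endswith(".html") or req_path.endswith("/"):
            page_readers.add(ip)

print(f"~{len(page_readers)} page readers, ~{len(feed_readers)} feed readers")
```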

The Glass Roots movement

Recently, while keynoting the Freedom 2 Connect conference in Washington, I spoke about some of my ideas for fiber networks being built from the ground up. For example, I hope for the day when cheap kits can be bought at local stores to fiber up your block by running fiber through the back yards, in some cases literally burying the fiber in the "grass roots."

An instant temporary internet kit

Over the weekend I was at the BIL conference (http://www.bilconference.com), a barcamp/unconference-style juxtaposition to the very expensive TED conference. I gave a few talks, including one on self-driving cars, privacy and AI issues.

The conference, being free, was at a small community center. This location did not have internet. Various methods were possible to provide internet. The easiest are routers which can take cellular network EVDO cards and offer an 802.11 access point. That works most places, but is not able to handle many people, and may or may not violate some terms of service. However, in just about all these locations there are locations very nearby with broadband internet which can be used, including hotels, businesses and even some private homes. But how to get the access in quickly?

What would be useful would be an "instant internet kit" with all you need to take an internet connection (or two) a modest distance over wireless. This kit would be packed up and available via courier to events that want internet access on just a couple of days notice.

What would you put in the kit?

A universal Web-USB plugin for all browsers

As our devices get more and more complex, configuring them gets harder and harder. And for members of the non-tech-savvy public, close to impossible.

Here's an answer: Develop a simple browser plug-in for all platforms that can connect a USB peripheral to a TCP socket back to the server where the plug-in page came from. (This is how Flash and Java applets work; in fact, this could be added to Flash or Java.)
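
To show the shape of the data flow (not a real plug-in API -- browsers expose nothing like this today), here is a hypothetical Python sketch that bridges a USB peripheral, assumed here to appear as a serial port, to a TCP socket on the configuring server. The device path, server address and the pyserial dependency are all assumptions.

```python
import socket
import serial  # pyserial -- assumed; many simple USB peripherals show up as serial ports

DEVICE = "/dev/ttyUSB0"                 # assumed device node
SERVER = ("config.example.com", 9000)   # assumed: the server the page came from

dev = serial.Serial(DEVICE, 115200, timeout=0.1)
with socket.create_connection(SERVER) as sock:
    sock.settimeout(0.1)
    while True:
        # Relay device -> server
        data = dev.read(4096)
        if data:
            sock.sendall(data)
        # Relay server -> device
        try:
            back = sock.recv(4096)
            if not back:
                break            # server closed the connection
            dev.write(back)
        except socket.timeout:
            pass                 # nothing from the server this cycle
```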

The impact of Peer to Peer on ISPs

I'm a director of BitTorrent Inc. (though not speaking for it) and so the recent debate about P2P applications and ISPs has been interesting to me. Comcast has tried to block off BitTorrent traffic by detecting it and severing certain P2P connections by forging TCP reset packets. Some want net neutrality legislation to stop such nasty activity, others want to embrace it. Brett Glass, who runs a wireless ISP, has become a vocal public opponent of P2P.

Some base their opposition on the fact that since BitTorrent is the best software for publishing large files, it does get used by copyright infringers a fair bit. But some just don't like the concept at all. Let's examine the issues.

A broadband connection consists of an upstream and downstream section. In the beginning, this was always symmetric: you had the same capacity up as down. Even today, big customers like universities and companies buy things like T-1 lines that give 1.5 megabits in each direction. ISPs almost always buy equal sized pipes to and from their peers.

With aDSL, the single phone wire is multiplexed so that you get much less upstream than downstream. A common circuit will give 1.5 Mbps down and, say, 256 kbps up -- a 6 to 1 ratio. Because cable systems weren't designed for two-way data, they have it worse. They can give a lot down, but they share the upstream over a large block of customers under the existing DOCSIS system. They will also offer upstream at near the 6 to 1 ratio, but unlike the DSL companies, there isn't a dedicated line per customer.

Securing home computer networks

Bruce Schneier has made a fuss by writing about how he leaves his wireless internet open. As a well-regarded security expert, how can he do this? You'll see many arguments for and against in his posting. I'll expand on one of mine.

Part of Bruce's argument is one I express differently. I sometimes say "Firewalls are a hoax." They are the wrong choice for security, but we sell them as a good choice. Oddly, however, this very fact does make them a valid choice. I will explain the contradiction.
