Subsidize customers, not phones

As you may know, if you buy a cell phone today, you have to sign up for a one- or two-year contract, and you get a serious discount on the phone, often as much as $200. The stores that sell the phones are paid this subsidy when they sell to you; if you buy directly from a carrier, you simply get a discount. Subsidized phones are locked so you can’t take them to another carrier, though typically you can get them unlocked for a modest fee, either by the carrier or by unlock shops.

The market is locked in a different way, too, in that this subsidy pretty much makes everybody buy their phone through a carrier. Since you are going to sign up with a carrier for a year or two anyway, you would be foolish not to. And except for prepaid, signing up even without a subsidized phone still requires a contract; you just don’t get anything for it.

Because of this, it is carriers that shop for phones, not consumers. The carriers tell the handset makers what to provide, and quite often, what not to provide. Subsidized phones tend to come with features disabled, such as Bluetooth access for your laptop to sync the address book or connect to the internet. A number of PDA phones are sold with 802.11 access in Europe, but this feature is removed for the U.S. market. The carriers don’t want you using 802.11 to bypass their per-minute fees, or they want to regulate your data use.

This method of selling phones is the biggest crippler of the cell phone industry. If consumers bought phones directly, there would be more competition and more features. But less control by the carriers.

That’s the only reason I can think of why they don’t do what seems obvious to me. If you walk up to a carrier and say you will sign the 2 year contract, but want to bring your own phone, they should be very happy to hear that and give you the subsidy. They could give it to you as a $10 discount for 20 months instead of $200 all at once, and it would actually be cheaper for them. This would allow a much better resale market in used phones, and allow new and innovative phones — even open source homebuilt phones. Competition and free markets mean innovation.
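The arithmetic behind "actually cheaper for them" is just the time value of money: $10 a month for 20 months is worth less today than $200 up front. A minimal sketch, where the 8% annual discount rate is my own assumption for illustration:

```python
# Compare the cost to the carrier of a $200 up-front subsidy versus
# a $10/month bill credit for 20 months, discounted to present value.
# The 8% annual rate is an illustrative assumption, not a real figure.

def present_value_of_credits(monthly_credit, months, annual_rate):
    """Present value of a stream of equal monthly credits."""
    r = annual_rate / 12  # monthly discount rate
    return sum(monthly_credit / (1 + r) ** m for m in range(1, months + 1))

upfront = 200.0
spread = present_value_of_credits(10.0, 20, 0.08)
print(f"up-front subsidy:        ${upfront:.2f}")
print(f"PV of $10 x 20 months:   ${spread:.2f}")
```

At any positive discount rate the spread-out credit costs the carrier less, and it also stops automatically if the customer leaves early.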

They could even exercise some control if they truly needed to. They need not let you just bring in any phone, they could still specify which ones are approved. I think that would be stupid, but they could do it. However, this would still not let them so easily control what applications you could get on the phone. For example, one reason they disabled bluetooth features (other than headset) on many phones is they wanted you to pay their fees to download your apps and photos over the network, not just sync them up to your computer for free. An open phone market would deprive them of that revenue.

So frankly, if they are so worried about just these revenue issues, then give me less subsidy. Figure out what you’re losing by letting me have my choice of phone, and take it out of the subsidy. I can still bring in my choice of phone today if I am willing to pay the extra $200, but of course few want to do that, so there’s no market for such phones. A partial subsidy would improve that.

There must be some number which makes this work, and the innovation generated would benefit the carriers in the long run. In Asia, subsidies have largely gone away, and there is word this trend may be moving to Europe, where at least carriers are happy to have 802.11 in their phones. Let’s hope.

It's OK, the internet will scale fine

I’ve been seeing a lot of press lately worrying that the internet won’t be able to handle the coming video revolution, that as more and more people try to get their TV via the internet, it will soon reach a traffic volume we don’t have capacity to handle. (Some of this came from a Google TV exec’s European talk, though Google has backtracked a bit on that.)

I don’t actually believe that, even given the premise behind that statement, which is traditional centralized download from sites like Youtube or MovieLink. I think we have the dark fiber and other technology already in place, with terabits over fiber in the lab, to make this happen.

However, the real thing that they’re missing is that we don’t have to have that much capacity. I’m on the board of Bittorrent Inc., which was created to commercialize the P2P file transfer technology developed by its founder, and Monday we’re launching a video store based on that technology. But in spite of the commercial interest I may have in this question, my answer remains the same.

The internet was meant to be a P2P network. Today, however, most people download more than they upload, and have a connection which reflects this. But even with the reduced upload capacity of home broadband, there is still plenty of otherwise unused upstream sitting there ready. That’s what Bittorrent and some other P2P technologies do — they take this upstream bandwidth, which was not being used before, and use it to feed a desired file to other people wishing to download it. It’s a trade: you do it for others and they do it for you. It allows a user with an ordinary connection to publish a giant file where this would otherwise be impossible.

Yes, as the best technology for publishing large files on the cheap, it does get used by people wanting to infringe copyrights, but that’s because it’s the best, not because it inherently infringes. It also has a long history of working well for legitimate purposes: it is one of the primary means of publishing new Linux distros today, and will be distributing major Hollywood studio movies on Feb 26.

Right now the clients connect with whomever they can, but they favour other clients that send them lots of data. That creates a bias towards clients with a good connection to them. While I don’t set the tech roadmap for the company, I expect that over time the protocol will become aware of network topology, so that it does an even better job of mostly peering with network neighbours — customers of the same ISP, or students at the same school, for example. There is tons of bandwidth available on the internal networks of ISPs, and it’s cheap to provide there — more than enough for everybody to have a few megabits for a few hours a day to get their HDTV. In the future, an ideal network cloud would send each file just once over any external backbone link, or at most once every few days — becoming almost as efficient as multicasting.
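This is not BitTorrent's actual peer-selection algorithm (which rewards peers that upload fast); it is just a toy sketch of the topology-aware idea, with invented network labels, showing how a client might fill its peer slots from its own ISP first:

```python
def pick_peers(candidates, my_network, want=4):
    """Choose peers, preferring those on the same network (same ISP,
    same campus, etc.), falling back to distant peers only to fill
    the remaining slots. `candidates` is a list of
    (peer_id, network_id) pairs; `my_network` is our network_id."""
    local = [peer for peer, net in candidates if net == my_network]
    remote = [peer for peer, net in candidates if net != my_network]
    return (local + remote)[:want]

peers = [("a", "isp1"), ("b", "isp2"), ("c", "isp1"),
         ("d", "isp3"), ("e", "isp1")]
print(pick_peers(peers, "isp1"))  # local isp1 peers come first
```

A real implementation would infer "same network" from address prefixes or ISP-published hints rather than labels, but the effect is the same: most traffic stays inside the ISP, where bandwidth is cheap.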

(Indeed, we could also make great strides if we were to finally get multicasting deployed, as it does a great job of distributing the popular material that still makes up most of the traffic.)

So no, we’re not going to run out. Yes, a central site trying to broadcast the Academy Awards to 50 million homes won’t work. And in fact, for cases like that, radio broadcasting and cable (or multicasting) continue to make the most sense. But if we turn up the upstream, there is more than enough bandwidth to go around within every local ISP network. Right now most people buy ADSL, but it’s not out of the question that devices in this area will move to being soft-switchable as to how much bandwidth they do up and how much down, so that if upstream is needed, it can be had on demand. It doesn’t really matter to the ISP — in fact, since most users don’t do much upstream normally, the ISP has wasted capacity out to the network unless it also does hosting to make up for it.

There are some exceptions to this. In wireless ISP networks, there is no separate up- and downstream, and that’s also true on some ethernets. For wireless users, it’s better to have a central cache just send the data, or to use multicasting. But for the wired users it’s all 2-way, and if the upstream isn’t used, it just sits there when it could be sending data to another customer on the same DSLAM.

So let’s not get too scared. And check out the early version of Bittorrent’s new entertainment store and do a rental download (sadly only with Windows XP based DRM, sigh — I hope for the day we can convince the studios not to insist on this) of multiple Oscar winner “Little Miss Sunshine” and many others.

A solar economics spreadsheet

In light of my recent threads on CitizenRe I built a spreadsheet to do solar energy economic calculations. If you click on that, you can download the spreadsheet to try for yourself. If you don’t have a spreadsheet program (I recommend the free Gnumeric or OpenOffice), it’s also up as a Google Solar Spreadsheet, but you may need a Google account to plug in your own numbers.

Do taxi monopolies make sense in the high-tech world?

Many cities (and airports) have official taxi monopolies. They limit the number of cabs in the city, and regulate them, typically by issuing “medallions” to cabs or drivers or licences to companies. The most famous systems are in London and New York, but they exist in many other places. In New York, the medallions were created early in the 20th century, and their number has stayed fixed for decades after declining from its post-creation peak. The medallion is a goldmine for its “owner.” Because NY medallions can be bought and sold, they have recently changed hands at auction for around $300,000. That $300,000 medallion allows a cab to be painted yellow and to pick up people hailing cabs in the street; it’s illegal for ordinary cars to do this. Medallion owners lease the combination of cab and medallion for $60 to $80 for a 7-9 hour shift, I believe.

Here in San Francisco, the medallions are not transferable, and in theory are only issued (after a wait of a decade or more) to working cab drivers, who must put in about 160 4-hour shifts per year. After that, they can and do rent out their medallion to other drivers, for a more modest rental income of about $2,000 per month.
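Using the post's rough figures, the implied gross return on a transferable NY medallion is easy to sketch (this ignores insurance, maintenance and idle shifts, so it overstates the net yield):

```python
# Rough implied yield on a transferable NY medallion, using the
# figures above: ~$300,000 price, $60-$80 per shift in lease income
# (midpoint $70), two shifts a day. All inputs are approximations
# from the post, not audited data.

price = 300_000
lease_per_shift = 70       # midpoint of the $60-$80 range
shifts_per_day = 2
annual_income = lease_per_shift * shifts_per_day * 365

print(f"gross annual lease income: ${annual_income:,}")
print(f"gross yield on medallion:  {annual_income / price:.1%}")
```

Even a crude calculation like this makes clear why a fixed supply of transferable medallions becomes a goldmine for incumbents.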

On the surface, this seems ridiculous. Why do we even need a government monopoly on taxis, and why should this monopoly just be a state-granted goldmine for those who get their hands on it? This is a complex issue, and if you search for essays on taxi medallions and monopoly systems you will find various arguments pro and con. What I want to get into here is whether some of those arguments might be ripe for change, in our new high-tech world of computer networks, GPSs and cell phones.

In most cities, there are more competitive markets for “car services” which you call for an appointment. They are not allowed to pick up hailing passengers, though a study in Manhattan found that they do — 2 of every 5 cars responding to a hail were licenced car services doing so unlawfully.

CitizenRe, real or imagined -- a challenge

Recently I opened up a surprising can of worms with a blog post about CitizenRe wondering if they had finally solved the problem of making solar power compete with the electrical grid. At that post you will see a substantial comment thread, including contributions by executives of the firm, which I welcome. Before that, I had known little about CitizenRe and the reputation it was building, so I thought I should summarize some of the issues I have been considering and other things I have learned.

CitizenRe’s offer is very appealing. They claim they will build a plant that can make vastly cheaper solar. Once they do, they will install it on your roof and “rent” it to you. You buy all the power it produces from them at a rate that beats your current grid power cost. Your risks are few — you put down a deposit of $500 to $1500 depending on system size, you must cover any damage to the panels, and they offer removal and replacement for a very modest fee if you need to reroof or even move. You lock in your rate, which is good if grid rates go up and bad if grid rates go down or other solar becomes cheaper, but on the whole it’s a balanced offer.

In fact, it seems too good to be true. It’s way, way cheaper than any offering available today. Because it sounds so good, many people are saying “show me.” I want to see just how they are going to pull that off. Many in the existing solar industry are saying that much louder. They are worried that if CitizenRe fails to deliver, all their customers will have been diverted to a pipedream while they suffer financial ruin. Of course, they are also worried that if CitizenRe does deliver, they will be competed out of business, so they do have a conflict of interest.

Here are some of the things that make me skeptical.

When should a password be strong

If you’re like me, you select special unique passwords for the sites that count, such as banks, and you use a fairly simple password for things like accounts on blogs and message boards where you’re not particularly scared if somebody learns the password. (You had better not be scared, since most of these sites store your password in the clear so they can mail it to you, which means they learn your standard account/password pair and could pretend to be you on all the sites where you duplicate it.) There are tools that will generate a different password for every site you visit, and of course most browsers will remember a complete suite of passwords for you, but neither of these works well when roaming to an internet cafe or a friend’s house.

However, every so often you’ll get a site that demands a “strong” password, requiring it to be a certain length, to have digits or punctuation, spaces and mixed case, or some subset of rules like these. This of course screws you up if the site is unimportant and you want to use your easy-to-remember password: you must generate a variant of it that meets their rules and then remember that variant. These are usually sites where you can’t imagine why you want to create an account in the first place, such as stores you will shop at once, or blogs you will comment on once, and so on.

Strong passwords make a lot of sense in certain situations, but it seems some people don’t understand why. You need a strong password in case it is possible or desirable for an attacker to do a “dictionary” attack on your account. This means they try thousands, or even millions, of passwords until they hit the one that works. If you use a dictionary word, they can try the most common words in the dictionary and learn your password.
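A toy illustration of why this works, using an unsalted SHA-256 hash as the stored password (a simplification; what real sites store varies):

```python
import hashlib

# Toy dictionary attack: hash each candidate word and compare to the
# stored (unsalted) hash. Real crackers work the same way, just with
# far bigger wordlists and hardware-speed hashing.

def crack(stored_hash, wordlist):
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == stored_hash:
            return word
    return None

common_words = ["password", "letmein", "dragon", "monkey"]

target = hashlib.sha256(b"monkey").hexdigest()
print(crack(target, common_words))  # a dictionary word falls instantly

strong = hashlib.sha256(b"c7#kP!20x").hexdigest()
print(crack(strong, common_words))  # a strong password survives
```

The defence is simply to make the candidate space too large to enumerate, which is exactly what length, mixed case and punctuation requirements try to ensure.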

Anti-gerrymandering formulae

A well known curse of many representative democracies is gerrymandering. People in power draw the districts to assure they will stay in power. There are some particularly ridiculous cases in the USA.

I was recently pointed to a paper on a simple, linear system which tries to divide up a state into districts using the shortest straight line that properly divides the population. I have been doing some thinking of my own in this area so I thought I would share it. The short-line algorithm has the important attribute that it’s fixed and fairly deterministic. It chooses one solution, regardless of politics. It can’t be gamed. That is good, but it has flaws. Its district boundaries pay no attention to any geopolitical features except state borders. Lakes, rivers, mountains, highways, cities are all irrelevant to it. That’s not a bad feature in my book, though it does mean, as they recognize, that sometimes people may have a slightly unusual trek to their polling station.
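For flavour, here is a toy relative of that idea. It is not the paper's shortest-splitline algorithm itself, just a recursive equal-population split along the wider axis at the median, but it shows the deterministic, politics-blind character of such methods:

```python
# Toy districting: recursively split a set of (x, y) "resident"
# points into 2**n_splits equal-population districts by cutting the
# wider axis at the population median. The real shortest-splitline
# algorithm instead searches for the shortest line that splits the
# population in the right ratio; this is only a simplified cousin.

def districts(points, n_splits):
    if n_splits == 0:
        return [points]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # Cut across whichever direction the population is more spread out.
    axis = 0 if (max(xs) - min(xs)) >= (max(ys) - min(ys)) else 1
    ordered = sorted(points, key=lambda p: p[axis])
    mid = len(ordered) // 2
    return (districts(ordered[:mid], n_splits - 1) +
            districts(ordered[mid:], n_splits - 1))

pts = [(x, y) for x in range(4) for y in range(4)]  # 16 "residents"
print([len(d) for d in districts(pts, 2)])  # four districts of four
```

Given the same points it always produces the same districts; there is nothing for a politician to tune.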

Now that virtualizers are here, let's default to letting you run your old system

Virtualizer technology, which lets you create a virtual machine in which to run another “guest” operating system on top of your own, seems to have arrived. It’s common for servers (for security) and for testing, as well as for things like running Windows on Linux or a Mac. There are several good free ones. One, KVM, is built into the latest Linux kernel (2.6.20). Microsoft offers their own.

I propose that when an OS distribution does a major upgrade, it encapsulate your old environment as much as possible in a compressed virtualizer disk image. Then it should allow you to bring up your old environment on demand in a virtual machine. This way you can be confident that you can always get back to programs and files from your old machine — in effect, you are keeping it around, virtualized. If things break, you can see how they broke. In an emergency, you can go back and do things within your old machine. It can also allow you to migrate functions from your old machine to your new one more gradually. Virtual machines can have their own IP address (or even keep the original one). While they can’t access all the hardware, they can do quite a bit.

Of course this takes lots of disk space, but disk space is cheap, and the core of an OS (i.e. not including personal user files like photo archives and videos) is usually only a few gigabytes — peanuts by today’s standards. There is a risk here: if you run the old system and give it access to those personal files (for example, run your photo organizer) you could do some damage. OSs don’t do a great job of dividing “your” files for OS and program config from “your” large data repositories. One could imagine an overlay filesystem which can only read the real files, and puts any writes into an overlay only seen by the virtual mount.
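The overlay idea can be sketched in a few lines. This is a per-file toy in Python (a real system would do it in the filesystem layer, as union/overlay mounts do): reads fall through to the protected files, and the first write copies the file up into the overlay.

```python
import os
import shutil

# Sketch of copy-on-write overlay semantics: the guest sees files from
# `base` read-only; any write goes to a copy in `overlay`, leaving the
# real personal files untouched.

class Overlay:
    def __init__(self, base, overlay):
        self.base, self.overlay = base, overlay
        os.makedirs(overlay, exist_ok=True)

    def path_for_read(self, name):
        """Reads see the overlay copy if one exists, else the base."""
        over = os.path.join(self.overlay, name)
        return over if os.path.exists(over) else os.path.join(self.base, name)

    def open_for_write(self, name):
        """First write copies the file up into the overlay."""
        src = os.path.join(self.base, name)
        dst = os.path.join(self.overlay, name)
        if os.path.exists(src) and not os.path.exists(dst):
            shutil.copy(src, dst)  # copy-up before first write
        return open(dst, "a")
```

The guest can scribble freely on its copies while the originals stay pristine underneath.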

One can also do it the other way — run the new OS in the virtual machine until you have it tested and working, and then “flip the switch” to make the new OS be native and the old OS be virtual at the next boot. However, that means the new OS won’t get native hardware access, which you usually want when installing and configuring an OS upgrade or update.

All this would be particularly handy when doing an “upgrade” that moves from, say, Fedora to Ubuntu, or more extreme, from Windows to Linux. In such cases it is common to just leave the old hard disk partition alone and make a new one, but then one must dual boot. Having the automatic ability to virtualize the old OS would be very handy for the transition. Microsoft could do the same trick for upgrades from old versions to Vista.

Of course, one must be careful the two machines don’t look too alike. They must not use the same MAC address or IP if they run internet services. They must, temporarily at least, have a different hostname. And they must not make incompatible changes, as I noted, to the same files if they’re going to share any.

Since hard disks keep getting bigger with every upgrade, it’s not out of the question that you might keep your entire machine history as a series of virtual machine images. You could imagine going back to the computer environment you had 20 years ago, on demand, just for fun, or to recover old data formats — you name it. With disks growing as they are, we should not throw anything away, even entire computer environments.

Social networking sites -- accept you won't be the only one, and start interoperating.

So many social networking sites (LinkedIn, Orkut, Friendster, Tribe, Myspace etc.) seem bent on being islands. But there can’t be just one player in this space, not even one player in each niche. Yet when you join a new one, it’s like starting all over again. I routinely get invitations to join new social applications, and I just ignore them. It’s not worth the effort.

At some point, 2 or more of the medium sized ones should realize that the way to beat #1 is to find a way to join forces. To make it possible on service A to tie to a friend on service B, and to get almost all the benefits you would have if both people were on the same service. Then you can pick a home service, and link to people on their home services.

This is a tall order, especially while protecting highly private information. It is not enough to simply define a file format, like the FOAF format, for transporting data from one service to another. At best that’s likely only to get you the intersection of features of all the services using the format, and an aging intersection at that.

How to do this while preserving the business models and uniqueness of the services is challenging. For example, some services want to charge you for distant contacts or certain types of searches of your social network. And what do you do when a FoF involves the first friend being on service B and the FoF being on service C?

Truth is, we all belong to many social networks. They won’t all be in one system, ever.

You can’t just have routine sharing. This is private information; we don’t want spammers or marketers harvesting it.

The interchange format will have to be very dynamic. That means that as soon as one service supports a new feature, it should be possible for the format to start supporting it right away, without a committee having to bless a new standard. That means different people will do the same thing in different ways, and that has to be reconciled nicely in the future, not before we start using it.
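One way to get that dynamism is a record with a small core vocabulary plus namespaced extension fields that services pass through even when they don't understand them. All the field and service names below are invented for illustration:

```python
import json

# Sketch of a "dynamic" interchange record: a few core fields every
# service understands, plus namespaced extension fields any service
# can add without waiting for a standards committee. A receiving
# service uses what it knows and re-emits everything intact.

record = {
    "id": "alice@service-a.example",
    "name": "Alice",
    "friends": ["bob@service-b.example"],
    "ext": {
        "service-a.example/trust": {"level": 3},
        "service-b.example/badges": ["early-adopter"],
    },
}

KNOWN_FIELDS = {"id", "name", "friends"}

def receive(text):
    """Return the fields this service understands, plus a
    re-serialized copy that preserves unknown extensions."""
    rec = json.loads(text)
    understood = {k: v for k, v in rec.items() if k in KNOWN_FIELDS}
    return understood, json.dumps(rec, sort_keys=True)

understood, passed_through = receive(json.dumps(record))
print(sorted(understood))                    # core fields only
print(json.loads(passed_through) == record)  # extensions survive
```

The key property is that two services can invent the same feature in different namespaces today and reconcile the names later, without any record ever being stripped in transit.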

Of course, at the same time I remain curious about just what they hope for us to do with these social networks. So far I have mostly seen them as a source of entertainment. Real life-altering experiences are rare. Some are using them for business networking and job hunting. Mailing FoFs didn’t really work out; it quickly became more spam than anything. Searching a network (the ideal app for Google’s Orkut) has not yet been done well.

Perhaps the right answer is to keep the networks simple and then let the applications build on top of them, independent of how the networks themselves are implemented. This means, however, a way to give an individual application access to your social network and — this is tricky — the social networks of your friends. Perhaps what we need is a platform, implemented by many, upon which social applications can then be built by many. However, each one will need to ask for access, which might encourage applications to group together to ask as a group. The platform providers should provide few applications. In effect, even browsing your network is not an application the provider should offer, as that has to travel over many providers.

Once some smaller networks figure this out, the larger ones will have to join or fall. Because I don’t want to have to keep joining different networks, but I will join new applications based on my network.

Farewell, Studio 60 on the Sunset Strip

I’ve decided to stop watching Studio 60. (You probably didn’t even know I was watching it, but I thought it was worthwhile outlining the reasons for not watching it.)

Studio 60 was hailed as the most likely great show of this season, with good reason, since it’s from Aaron Sorkin, creator of one truly great show (the West Wing) and one near-great (Sportsnight.) Sorkin is deservedly hailed for producing TV that’s smart and either amusing or meaningful, and that’s what I seek. But I’m not caring about the characters on Studio 60.

I think Sorkin’s error was a fundamental conceit — that the workings of TV production will be as interesting to the audience as they are to the creators. Now I’m actually more interested than most in this, having come from a TV producing family, and with a particular interest in the world of comedy and Saturday Night Live. It’s not simply that this was a “Mary Sue” where Sorkin tries to tell us how he would do SNL if he were in charge, since I’m not sure that’s what it is.

I fear that he went into the network and said, “Hey! The heroine is the principled network president! The heroes are the show’s executive producers!” and the network drank their own kool-aid. How could they resist?

The West Wing tried to really deal with DC issues we actually care about. We went from seeing Bradley Whitford battle to save the education system to battling to avoid ticking off sponsors. How can that not be a letdown? The only way would be if it were a pure comedy.

It’s possible to do an entertaining show about TV. Sorkin’s own Sportsnight was one, after all. However, you didn’t have to care a whit about sports, or sports TV, or TV production to enjoy that show. Those things were the background, not the foreground of Sportsnight. There have been many great comedies about TV and Radio — Dick Van Dyke, Mary Tyler Moore, SCTV, Home Improvement, Murphy Brown, WKRP etc. However, dramas about TV have rarely worked. The only good one I can think of was Max Headroom, and it was more about a future vision of media than about the TV industry.

Studio 60 is sometimes amusing (though not even as amusing as the West Wing) but surprisingly unfunny. Indeed, the show-within-the-show is also surprisingly unfunny. You would think they could write and present one truly funny sketch a week. SNL has to write over an hour’s worth, and while it often does not succeed, there’s usually one good sketch. Had he wanted this to be a Mary-Sue story, he would have written that one truly funny sketch each week.

So let that be a lesson. TV should stick to making fun of itself, not trying to make itself appear heroic. We’re not buying it.

Digital cameras should have built-in tagging

So many people today are using tags to organize photos and to upload them to sites like flickr for people to search. Most types of tagging are easiest to do on a computer, but certain types of tagging would make sense to add to photos right in the camera, as the photos are taken.

For example, if you take a camera to an event, you will probably tag all the photos at the event with a tag for the event. A menu item to turn on such a tag would be handy. If you are always taking pictures of your family or close friends, you could have tags for them preprogrammed to make it easy to add right on the camera, or afterwards during picture review. (Of course the use of facial recognition and GPS and other information is even better.)

Tags from a limited vocabulary can also be set with limited-vocabulary speech recognition, which cameras have the CPU and memory to do. Thus, when taking a picture of a group of friends, you could say their names right as you take the picture and have it tagged.

Of course, entering text on a camera is painful. You don’t want to try to compose a tag with arrow buttons over an on-screen keyboard or alphabet. Some tags would be defined when the camera is connected to the computer (or written to the flash card in a magic file from the computer). You would get menus of those tags. For a new tag, one would just select something like “New tag 5” from the menu, and later have an interface to rename the tag to something meaningful.

As a cute interface, tag names could also be assigned with pictures. Print the tag name clearly on paper and take a picture of it in “new tag” mode. One could imagine OCR here, but since it doesn’t matter if the text is recognized immediately, you don’t actually need it. Just display the cropped handwritten text box in the menus of tags, and convert it to text (via OCR or human typing) when you get to a computer. You can also record sound associations for such tags, or for generic tags.

Cameras have had the ability to record audio with pictures for a while, but listening to all that to transcribe it takes effort. Trained speech recognition would be great here but in fact all we really have to identify is when the same word or phrase is found in several photos as a tag, and then have the person type what they said just once to automatically tag all the photos the word was said on. If the speech interface is done right, menu use would be minimal and might not even be needed.
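The "type it once" step might look like this sketch, where the recognizer (assumed, not shown) has already grouped photos by an opaque token for the matched word:

```python
from collections import defaultdict

# Sketch of "type it once" tagging: each photo carries an opaque token
# for the spoken word the camera matched; when the user names one
# token, every photo carrying it gets the tag. The speech matching
# itself is assumed, and the data below is invented for illustration.

photos = [
    {"file": "img1.jpg", "spoken": "tok7"},
    {"file": "img2.jpg", "spoken": "tok7"},
    {"file": "img3.jpg", "spoken": "tok9"},
]

by_token = defaultdict(list)
for p in photos:
    by_token[p["spoken"]].append(p)

def name_token(token, text):
    """User types the tag text once; it fans out to all matches."""
    for p in by_token[token]:
        p.setdefault("tags", []).append(text)

name_token("tok7", "grand canyon")  # typed once, applied everywhere
print([p.get("tags") for p in photos])
```

With this flow the camera never needs reliable transcription, only the much easier judgment that two utterances were the same word.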

Updating the Turing Test

Alan Turing proposed a simple test for machine intelligence. Based on a parlour game where players try to tell if a hidden person is a man or a woman just by passing notes, he suggested we define a computer as intelligent if people can’t tell it from a human being through conversations with both over a teletype.

While this seemed like a great test (for those who accept that external equivalence is sufficient), in fact, to the surprise of many people, computers passed it long ago with ordinary, untrained examiners. Today there has been an implicit extension of the test: the computer must be able to fool a trained examiner, typically an AI researcher or an expert in brain sciences, or both.

I am going to propose updating it further, in two steps. Turing proposed his test perhaps because at the time, computer speech synthesis did not exist, and video was in the distant future. He probably didn’t imagine that we would solve the problems of speech well before we got a handle on actual thought. Today a computer can, with a bit of care in programming inflections and such into the speech, sound very much like a human, and we’re much closer to making that perfect than we are to achieving a Turing-level intelligence. Speech recognition is a bit behind, but also getting closer.

So my first updated proposal is to cast aside the teletype, and make it be a phone conversation. It must be impossible to tell the computer from another human over the phone or an even higher fidelity audio channel.

The second update is to add video. We’re not as far along here, but again we see more progress, both in the generation of digital images of people, and in video processing for object recognition, face-reading and the like. The next stage requires the computer to be impossible to tell from a human in a high-fidelity video call. Perhaps with 3-D goggles it might even be a 3-D virtual reality experience.

A third potential update is further away, requiring a fully realistic android body. In this case, however, we don’t wish to constrain the designers too much, so the tester would probably not get to touch the body, or weigh it, or test if it can eat, or stay away from a charging station for days etc. What we’re testing here is the being’s “presence” — fluidity of motion, body language and so on. I’m not sure we need this test as we can do these things in the high fidelity video call too.

Why these updates, which may appear to divert from the “purity” of the text conversation? For one, things like body language, nuance of voice and facial patterns are a large part of human communication and intelligence, so to truly accept that we have a being of human level intelligence we would want to include them.

Secondly, however, passing this test is far more convincing to the general public. While the public is not very sophisticated and thus can even be fooled by an instant messaging chatbot, the feeling of equivalence will be much stronger when more senses are involved. I believe, for example, that it takes a much more sophisticated AI to trick even an unskilled human if presented through video, and not simply because of the problems of rendering realistic video. It’s because these communications channels are important, and in some cases felt more than they are examined. The public will understand this form of Turing test better, and more will accept the consequences of declaring a being as having passed it — which might include giving it rights, for example.

Though yes, the final test should still require a skilled tester.

The giant security hole in auto-updating software

It’s more and more common today to see software that is capable of easily or automatically updating itself to a new version. Sometimes the user must confirm the update; in some cases it is fully automatic, or manual but non-optional (i.e. the old version won’t work any more). This seems like a valuable feature for fixing security problems as well as bugs.

But rarely do we talk about what a giant hole this is in general computer security. On most computers, programs you run have access to a great deal of the machine, and in the case of Windows, often all of it. Many of these applications are used by millions and in some cases even hundreds of millions of users.

When you install software on almost any machine, you’re trusting the software, the company that made it, and the channel by which you got it — at the time you install. When you have auto-updating software, you’re trusting them on an ongoing basis. It’s really like leaving a copy of the keys to your office with the software vendor, and hoping they won’t do anything bad with them, and hoping that nobody untrusted will get at those keys and do something bad with them.
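At a minimum, an updater should verify what it downloaded against something obtained over a separate trusted channel before installing it. A sketch of the digest check (real updaters should go further and verify a public-key signature on the package, so that a compromised download server alone can't push malicious code):

```python
import hashlib

# Minimal integrity check before installing an update: compare the
# downloaded package's digest against one obtained over a separate
# trusted channel. This defends against a tampered download, though
# not against a compromised publisher, which is why real systems add
# public-key signatures on top.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def safe_to_install(package: bytes, trusted_digest: str) -> bool:
    return digest(package) == trusted_digest

good = b"update-v2.0-payload"       # stand-in for a real package
trusted = digest(good)              # published via a trusted channel

print(safe_to_install(good, trusted))         # True
print(safe_to_install(b"tampered", trusted))  # False
```

The "keys to your office" problem remains, of course: verification only proves the update came from whoever holds the publishing keys, not that they deserve your trust.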

Internet-oriented supper club

At various times I have been part of dinner groups that meet once a month or once a week at either the same restaurant or a different restaurant every time. There’s usually no special arrangement, but it’s usually good for the restaurant since they get a big crowd on a slow night.

I think there could be ways to make it better for the restaurant as well as the diners — and the rest of the web to boot. I’m imagining an application that coordinates these dinners with diners and the restaurants. The restaurants (especially newer ones) would offer some incentives to the diners, plus some kickback to the web site for organizing it. As part of the deal, the diners would agree to fairly review the restaurant — at first on public restaurant review sites and/or their own blogs, but with time at a special site just for this purpose. Diners would need to review at least 80% of the time to stay in.

Here’s what could be offered to the diners:

  • Private rooms or private waiter, with special attention
  • Special menus with special items at reduced prices
  • Special billing, either separate bills or even pay online — no worrying about settling.
  • Advance online ordering and planning for shared meals, possibly just before heading out.

For the restaurant there’s a lot:

  • A bunch of predictable diners on a slow night
  • If they order from a special menu, it can be easier and cheaper to prepare multiple orders of the same dish.
  • Billing assistance from the web site with online payment
  • A way to get trustable online reviews to bring in business — if the reviews are good.

Now normally a serious restaurant critic would not feel it appropriate to let the restaurant know it is being reviewed; otherwise they will not get typical service, and cannot properly review it. However, this can be mitigated a lot if all the restaurants are aware of what’s going on, and if the reviews are comparative. In this case the restaurants are being compared by how they do at their best, rather than by what a random diner gets. The latter is better, but the former is also meaningful. And of course it would be clear to readers that this is what went on.

In particular, I believe the reviewers should not simply give stars or numerical ratings to restaurants. They can do that, but mainly they should just place the restaurants in a ranking with the other restaurants they have scored, once they have done a certain minimal number. This fixes “star inflation.” With most online review sites, you don’t know if a 5-star rating is from somebody who gives everything 4 or 5 stars, or if it’s the only 5-star rating the reviewer ever gave. All these are averaged together.
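The fix for star inflation amounts to throwing away the absolute scores. A minimal sketch of that idea (the function name is mine): each reviewer’s ratings are reduced to an ordering, so it no longer matters whether they are a harsh or a generous grader.

```python
def reviewer_ranking(ratings):
    """Turn one reviewer's raw scores into a ranking.

    ratings: dict mapping restaurant name -> that reviewer's score.
    Only the relative order survives, which sidesteps star inflation:
    a reviewer who gives everything 4 or 5 stars and one who almost
    never gives 5 produce directly comparable output.
    """
    return sorted(ratings, key=ratings.get, reverse=True)
```

For example, `reviewer_ranking({"Chez A": 5, "B Bistro": 3, "Cafe C": 4})` yields `["Chez A", "Cafe C", "B Bistro"]`, the same ranking you would get from a grader who scored them 3, 1, 2.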

In addition, the existing online review sites have self-selected reviewers, which is to say people who rate a restaurant only because they feel motivated to do so. Such results can be wildly inaccurate.

Finally, it is widely suspected that some fraction of the reviews on online sites are biased, placed there by the restaurant or friends of the restaurant. There are certainly few mechanisms to stop this at the sites I have seen. Certainly if you see a restaurant with just a few high ratings you don’t know what to think.

This dining system, with the requirement that everybody review, eliminates a good chunk of the self-selection. Members would need to review whether they felt in the mood or not. (You could not stop them from not going to a restaurant that does not appeal to them, of course, so there is still some self-selection.) It is possible a restaurant might send its friends to dine at “enemy” restaurants via the club to rate them down, but I think the risk of this is much less than the holes in the other systems.

Restaurants with any confidence in their quality should be motivated to invite such an online dining club, especially new restaurants. Indeed, it’s not uncommon for new restaurants to offer the general public things like 2nd entree free or other discounts to get the public in, with no review bonus. If the site becomes popular, in fact, it might become the case that a new restaurant that doesn’t invite the amateur critics could be suspect, unwilling to risk a bad place in their rankings.

Understand the importance of a key in crypto design

I’ve written before about ZUI (Zero user interface) in crypto, and the need for opportunistic encryption based upon it. Today I want to reinforce the concept by pointing to mistakes we’ve seen in the past.

Many people don’t know it, but our good friends at Microsoft put opportunistic encryption into Outlook Express and other mailers many years ago. And their mailers were and still are the most widely used. Just two checkboxes in MSOE allowed you to ask that it sign all your outgoing mail, and further to encrypt all mail you sent to people whose keys you knew. If they signed their outgoing mail, you automatically learned their keys, and from then on your replies to them were encrypted.

However, it wasn’t just two checkboxes — you also had to get an E-mail certificate. Those are available free from THAWTE, but the process is cumbersome, and was a further barrier to adoption.

But the real barrier? Microsoft’s code imagined you had one primary private key and certificate. As such, any access to that private key was a highly important security act. Use of that private key had to be highly protected; after all, you might be signing important documents, even cheques, with it.

As a result, every time you sent a mail with the “automatic sign” checkbox on, it put up a prompt telling you a program wanted to use your private key, and asked if you would approve that. Every time you received a mail that was encrypted because somebody else knew your key, it likewise prompted you to confirm access should be given to the private key. That’s the right approach on the private key that can spend the money in my bank account (in fact it’s not strong enough even for that) but it’s a disaster if it happens every time you try to read an E-mail!

We see the same with SSL/TLS certificates for web sites. Web sites can pay good money to the blessed CAs for a site certificate, which verifies that a site is the site you entered the domain name of. While these are overpriced, that’s a good purpose. Many people however want a TLS certificate simply to make sure the traffic is encrypted and can’t be spied upon or modified. So many sites use a free self-signed certificate. If you use one, however, the browser pops up a window, warning you about the use of this self-signed certificate, and you must approve its use, and say for how long you will tolerate it.

That’s OK for my own certification of my E-mail server, since only a few people use it, and we can confirm that once without trouble. However, if every time you visit a new web site you have to confirm use of its self-signed key, you’re going to get annoyed. And thus, while the whole web could be encrypted, it’s not, in part due to this.

What was needed was what security experts call an understanding of the “threat model” — what are you scared of, and why, and how much hassle do you want to accept in order to try to be secure?

It would be nice for a TLS certificate to say, “I’m not certifying anything about who this is” and just arrange for encryption. All that would tell you is that the site is the same site you visited before. The Lock icon in the browser would show encryption, but not any authentication. (A good way to show authentication would be to perhaps highlight the authenticated part of the URL in the title bar, which shows you just what was authenticated.)
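The “encrypt but don’t authenticate” policy can already be expressed explicitly on the client side with Python’s standard `ssl` module, for instance. This sketch builds such a context; note that it only promises privacy against passive eavesdroppers, not the identity of the other end.

```python
import ssl

# Build a TLS context that encrypts the channel but deliberately
# skips certificate verification -- the "encryption only, no
# authentication" mode discussed above.
ctx = ssl.create_default_context()
ctx.check_hostname = False        # don't try to match the hostname
ctx.verify_mode = ssl.CERT_NONE   # accept self-signed certificates
```

A browser using this mode could then show encryption without the lock-icon claim of authentication, which is exactly the distinction the prose argues for.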

In E-mail, it is clear what was needed was a different private key, used only to do signing and opportunistic encryption of E-mail, and not used for authorizing cheques. This lesser key could be accessed readily by the mail program, without needing confirmation from the user every time. (You might, if concerned, have it get confirmation or even a pass code on a once a day basis, to stop e-mail worms from sending mail signed as you at surprising times.)
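What this amounts to is a per-key prompting policy rather than one rule for every key. A toy sketch (the key names and the once-a-day threshold are illustrative, not any real mailer’s API):

```python
# Prompting policy per key: the high-value key always requires user
# confirmation; the lesser mail key needs it at most once a day.
KEY_POLICY = {
    "primary": "always",  # cheques, contracts: confirm every use
    "mail": "daily",      # opportunistic email signing/encryption
}

def needs_prompt(key_name, hours_since_last_prompt):
    """Decide whether using this key should interrupt the user."""
    policy = KEY_POLICY[key_name]
    if policy == "always":
        return True
    return hours_since_last_prompt >= 24  # "daily" keys re-confirm once a day
```

Under this policy the mail program reads and sends encrypted mail silently all day, while anything touching the primary key still interrupts the user every time.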

Paranoid users could ask for warnings here too, but most would not need them.

TLS supports client side certificates too. They are almost never used. Clients don’t want to get certificates for most uses, but they might like to be able to tell a site they are the same person as visited before — which is mostly what the login accounts at web sites verify. A few also verify the account is tied to a particular e-mail address, but that’s about it.

Perhaps if we move to get the client part working, we can understand our threat model better.

Hybrid stickers in the carpool lane should be sold at Dutch auction

In the SF Bay Area, there are carpool lanes. Drivers of fuel efficient vehicles, which mostly means the Prius and the Honda Civic/Insight Hybrids can apply for a special permit allowing them to drive solo in the carpool lanes. This requires both a slightly ugly yellow sticker on the bumper, and a special transponder for bridges, because the cars are allowed to use the carpool lane on the bridge but don’t get the toll exemption that real carpools get.

I think this is good, as long as there is capacity in the carpool lane, because the two goals of the carpool lane are to reduce congestion and also to reduce pollution. The hybrids do the latter. (Though it is argued that hybrids do their real gas saving on city streets, and only save marginally on the highway, comparable to some highly efficient gasoline vehicles.)

However, oddly, the government decided to allocate a fixed number of stickers (which makes sense) and to release them on a first-come, first-served basis, which makes no sense. After the allocation is issued, new buyers of these cars, or future efficient cars can’t get the stickers. (Or so they say — in fact the allocation has been increased once.)

The knowledge that time was running out to get a Prius with carpool privileges was much talked about. And it’s clear that a lot of people who buy a hybrid rush to get one of the scarce carpool permits simply because they can, even if they will almost never drive on the highways at rush hour with them.

Society seems to love first-come-first-served as a definition of “fair,” but it seems wrong here. At the very least there should be a yearly fee, so that people who truly don’t need the stickers will not get them “just in case.” I would go further and suggest the annual fee be decided by Dutch auction. For those not familiar: in a Dutch auction, all those who wish to bid submit a single, sealed bid. If there are “N” items, then the Nth-highest bid becomes the price that the top N bidders all pay. There may be a minimum below which the items are not sold.
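The pricing rule is easy to state in code. This is just an illustrative sketch (the function name and the handling of the minimum are mine):

```python
def dutch_auction(bids, n, minimum=0):
    """Sealed-bid uniform-price ("Dutch") auction as described above:
    the Nth-highest bid sets the price that all top-N bidders pay.

    bids: list of bid amounts; n: number of stickers available.
    Returns (price, winning_bids), or (None, []) if the clearing
    price falls below the minimum.
    """
    ranked = sorted(bids, reverse=True)
    winners = ranked[:n]
    if not winners or winners[-1] < minimum:
        return None, []
    return winners[-1], winners
```

So with bids of $500, $100, $300 and $250 for three stickers, the three high bidders win and each pays $250, the third-highest bid.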

This can be slightly complex, in that you can do it one of two ways. The first is that everybody pays their actual bid, and losers and overbidders get a refund. This assures all bidders are serious. The other is to set the price first, and then bill the winners. The problem there is that people might bid high but then balk when they see the final price, so you need a way of enforcing payment. Credit cards can help here. As can, of course, the government itself, which can refuse to licence your car until you pay the agreed fees.

Carpool lanes are a hot topic here, of course. The mere mention of the subject of kidpooling (counting children to determine if a car is a carpool) makes the blood boil in the local newspapers. People feel a remarkable sense of entitlement, and lose focus of the real goals — to reduce congestion and pollution. Emotions would run high here, too.

Tempfailing for spam -- where does it lead

One growing technique for use in anti-spam involves finding ways to “fail” on initial contacts for sending mail. Real, standards-conformant mail programs try again in various ways, but spammers, in writing their mail blasters, tend to just have them skip that address and go on to the next one in their list.

One common approach is simply returning a “temporarily unavailable” status on any initial mail attempt that might be spam. Another is to have dead MX records at both the “try first” and “try last” ends of the MX chain.
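The first technique is usually called greylisting. A minimal sketch (the delay and data structure are illustrative): temp-fail the first attempt from an unknown (ip, sender, recipient) triple, and accept the retry that a real, standards-conformant mailer will eventually make.

```python
GREY_DELAY = 300   # seconds a sender must wait before a retry is accepted
seen = {}          # (ip, sender, recipient) -> time of first attempt

def smtp_answer(ip, sender, recipient, now):
    """Return the SMTP status for one delivery attempt."""
    key = (ip, sender, recipient)
    if key not in seen:
        seen[key] = now
        return "450 temporarily unavailable"  # temp-fail first contact
    if now - seen[key] >= GREY_DELAY:
        return "250 ok"                       # a real mailer's retry
    return "450 temporarily unavailable"      # retried too quickly
```

A mail blaster that never retries simply never reaches the `250 ok` branch, which is the whole trick.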

Why does this work? Spammers just want to deliver as much mail as possible given time and bandwidth. If one address fails for any reason, it’s really no different whether you spend your resources trying the address again or in a different way, or just move on to the next address. In fact, since many of the failures are real failures, it’s actually more productive to just move on.

And, I admit, some of the spam filtering tools I use employ these techniques, and they do help. But what exactly are they doing? For spammers, the limiting factor is bandwidth. Dealing with failures, especially timeouts on dead servers, takes very little of their resources.

It doesn’t reduce the amount of spam they send, at least not by much; it just redistributes it to those who don’t use the techniques. For a positive spin, you can liken it to putting up a higher fence than your neighbour, so the criminals attack them and not you. For a negative spin, you can imagine it as being like an air filter that cleans the pollution from air coming into your house, and spews it out the back at your neighbours.

So it’s a tough question. Is this approach a good idea? Especially at the start, it was very effective. Over time, if it becomes very common, spammers will see a reduction in the spam they deliver and make fairly simple moves to compensate. Is this fair game, or antisocial?

There is an old joke about two hikers who meet a bear. The first sits down and starts putting on his running shoes. The other says, “What are you doing, you can’t outrun a bear!” and the first says, “I don’t have to outrun the bear, I just have to outrun you.”

Are we passing the bear onto our neighbours?

(This is part of a larger question of some of the other negative consequences of anti-spam. For example, as text filters got better, spammers moved to sending their spam as embedded images which filters could not easily decode. The result is more and more bandwidth used, both by spammers and victims. Was it a victory or a loss?)

Replacing the FCC with "don't be spectrum selfish."

Radio technology has advanced greatly in the last several years, and will advance more. When the FCC opened up the small “useless” band where microwave ovens operate to unlicenced use, it generated the greatest period of innovation in the history of radio. As my friend David Reed often points out, radio waves don’t interfere with one another out in the ether. Interference only happens at a receiver, usually due to bad design. I’m going to steal several of David’s ideas here and agree with him that a powerful agency founded on the idea that we absolutely must prevent interference is a bad idea.

My overly simple summary of a replacement regime is just this, “Don’t be selfish.” More broadly, this means, “don’t use more spectrum than you need,” both at the transmitting and receiving end. I think we could replace the FCC with a court that adjudicates problems of alleged interference. This special court would decide which party was being more selfish, and tell them to mend their ways. Unlike past regimes, the part 15 lesson suggests that sometimes it is the receiver who is being more spectrum selfish.

Here are some examples of using more spectrum than you need:

  • Using radio when you could have readily used wires, particularly the internet. This includes mixed mode operations where you need radio at the endpoints, but could have used it just to reach wired nodes that did the long haul over wires.
  • Using any more power than you need to reliably reach your receiver. Endpoints should talk back if they can, over wires or radio, so you know how much power you need to reach them.
  • Using an omni antenna when you could have used a directional one.
  • Using the wrong band — for example using a band that bounces and goes long distance when you had only short-distance, line of sight needs.
  • Using old technology — for example not frequency hopping to share spectrum when you could have.
  • Not being dynamic — if two transmitters can’t otherwise avoid interfering, they should figure out how one of them will fairly switch to a different frequency (if hopping isn’t enough).

As noted, some of these rules apply to the receiver, not just the transmitter. If a receiver uses an omni antenna when they could be directional, they will lose a claim of interference unless the transmitter is also being very selfish. If a receiver isn’t smart enough to frequency hop, or tell its transmitter what band or power to use, it could lose.
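The “no more power than you need” rule, for example, becomes a simple feedback loop once the receiver can talk back. A sketch with illustrative numbers (all names and thresholds are mine):

```python
def adjust_power(current_dbm, reported_snr_db,
                 target_snr_db=10.0, min_dbm=-10.0, max_dbm=20.0):
    """One step of closed-loop power control: the receiver reports the
    signal-to-noise ratio it sees, and the transmitter moves toward the
    minimum power that keeps the link at the target SNR, and no more.
    All values are in dB/dBm; the hardware limits clamp the result."""
    error = target_snr_db - reported_snr_db
    return max(min_dbm, min(max_dbm, current_dbm + error))
```

A transmitter running this loop automatically backs off when the link has margin to spare, which is exactly the unselfish behaviour the rule asks for.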

Since some noise is expected not just from smart transmitters, but from the real world and its ancient devices (microwave ovens included), receivers should be expected to tolerate a little interference. If they’re hypersensitive to interference and don’t have a good reason for it, it’s their fault, not necessarily the source’s.

Now you have to have the right reverse-DNS

Update: Several of the spam bounces of this sort that I got were traced to the same anti-spam system, and the operator says it was not intentional and has been corrected. So it may not be quite as bad as it seemed.

I have a social list of people I invite to parties. Every time I mail to it, I feel the impact of spam and anti-spam. Always several people have given up on a mailbox. And I run into new spam filters blocking the mail.

Perhaps I’m an old timer, but I run my own mail server. It’s in my house. I read my mail on that actual machine, and because of that, mail is wicked-fast for me, as fast as instant messaging for many people. (In fact, I never adopted IM the way some people did because E-mail is as fast.)

They’re working to make this harder to do. Many ISPs won’t even let you send mail directly, or demand you make a special request to have the mail port open to you. I’m bothered by the first case, less so by the second, because indeed, zombie PCs send much of the spam we’re now getting.

Because I send mail from the system, I also web surf from it. And while it’s not a serious privacy protection, I decided I would not have a reverse-DNS record for my system. That way people would not see “” in their web logs whenever I surfed. It’s not that you can’t use other techniques to find out that the address is mine, but that requires deliberate thought. Reverse DNS is automatic for many web logs.

Soon more and more sites would not take mail from a system without reverse DNS. Because I get my IP block from a small ISP, he does my reverse DNS, and I asked him to make one. He made one like many ISPs do, built from the IP numbers themselves. As in

But soon I saw bounces that said, “This reverse DNS looks like a dialup user, I won’t take your mail.” So I had him change it to a different string that doesn’t trumpet my name but doesn’t look like a standard anonymous reverse DNS.

But now I’m getting bounces just because the reverse DNS doesn’t match the name my mail server uses. There is no security in this, any spammer can program their mail server to use the reverse DNS name of the system they have taken over. But I guess some don’t, so another wall is thrown up, and those people won’t get invites to my parties.

This one is really stupid, because it’s quite common for a single machine to have many names and serve many domains. To correct an earlier note: it is possible for an IP to have more than one PTR reverse DNS record, though I don’t know how many applications deal with that. And that screws up these mailers. There is no need to look at reverse DNS at all.


Censored and uncensored soundtrack on the airplane

A recent story that United had removed all instances of the word “God” (not simply Goddamn) from a historical movie reminded me just how much they censor the movies on planes.

Here they have an easy and simple way out. Everybody is on headsets, and they already offer different soundtracks in different languages via the channel dial. So offer the censored and real soundtracks on two different audio channels. Parents can easily make sure the kids are on whichever soundtrack they have chosen for them, since the channel number glows on the armrest.

Now most people, given the choice, are going to take the real soundtrack. Which is fine, since now they certainly can’t complain if it offends them. A few will take the censored soundtrack, but most people should be happy. This is not much work, since the real work is creating the censored track. Assuming there is room for more tracks on the DVD, keeping the original one is no big deal.
