Social networking sites -- accept you won't be the only one, and start interoperating.

So many social networking sites (LinkedIn, Orkut, Friendster, Tribe, Myspace etc.) seem bent on being islands. But there can’t be just one player in this space, not even one player in each niche. Yet when you join a new one, it’s like starting all over again. I routinely get invitations to join new social applications, and I just ignore them. It’s not worth the effort.

At some point, 2 or more of the medium sized ones should realize that the way to beat #1 is to find a way to join forces. To make it possible on service A to tie to a friend on service B, and to get almost all the benefits you would have if both people were on the same service. Then you can pick a home service, and link to people on their home services.

This is a tall order, especially while protecting highly private information. It is not enough to simply define a file format, like the FOAF format, for transporting data from one service to another. At best that’s likely only to get you the intersection of features of all the services using the format, and an aging intersection at that.

How to do this while preserving the business models and uniqueness of the services is challenging. For example, some services want to charge you for distant contacts or certain types of searches of your social network. And what do you do when a FoF involves the first friend being on service B and the FoF being on service C?

Truth is, we all belong to many social networks. They won’t all be in one system, ever.

You can’t just have routine sharing. This is private information; we don’t want spammers or marketers harvesting it.

The interchange format will have to be very dynamic. That means that as soon as one service supports a new feature, it should be possible for the format to start supporting it right away, without a committee having to bless a new standard. That means different people will do the same thing in different ways, and that has to be reconciled nicely in the future, not before we start using it.

Of course, at the same time I remain curious about just what they hope for us to do with these social networks. So far I have mostly seen them as a source of entertainment. Real life-altering experiences are rare. Some are using them for business networking and job hunting. Mailing FoFs didn’t really work out; it quickly became more spam than anything. Searching a network (the ideal app for Google’s Orkut) has not yet been done well.

Perhaps the right answer is to keep the networks simple and then let the applications build on top of them, independent of how the networks themselves are implemented. This means, however, a way to give an individual application access to your social network and — this is tricky — the social networks of your friends. Perhaps what we need is a platform, implemented by many, upon which social applications can then be built by many. However, each one will need to ask for access, which might encourage applications to group together to ask as a group. The platform providers should provide few applications. In effect, even browsing your network is not an application the provider should offer, as that has to travel over many providers.

Once some smaller networks figure this out, the larger ones will have to join or fall. Because I don’t want to have to keep joining different networks, but I will join new applications based on my network.

The giant security hole in auto-updating software

It’s more and more common today to see software that is capable of easily or automatically updating itself to a new version. Sometimes the user must confirm the update; in some cases it is fully automatic, or manual but non-optional (ie. the old version won’t work any more). This seems like a valuable feature for fixing security problems as well as bugs.

But rarely do we talk about what a giant hole this is in general computer security. On most computers, programs you run have access to a great deal of the machine, and in the case of Windows, often all of it. Many of these applications are used by millions and in some cases even hundreds of millions of users.

When you install software on almost any machine, you’re trusting the software and the company that made it, and the channel by which you got it — at the time you install. When you have auto-updating software, you’re trusting them on an ongoing basis. It’s really like leaving a copy of the keys to your office at the software vendor, hoping they won’t do anything bad with them, and hoping that nobody untrusted will get at those keys and do something bad with them.

Online shopping -- set when you need to get it.

I was seduced by Google’s bribe of $20 per $50-or-greater order to try their new Checkout service, and did some Christmas shopping at one of the participating online stores. Normally that store, being based in Southern California, takes only 1 or 2 days by UPS ground to get things to me. So ordering last weekend should have been low risk for items that are “in stock and ship in 1-2 days.” Yes, they cover their asses by putting a longer upper bound on the shipping time, but generally that’s the ship time for people on the other coast.

I got a mail via Google (part of their privacy protection) that the items had been shipped on Tuesday, so all was well. Unfortunately, I didn’t go and immediately check the tracking info. The new interface with Google Checkout makes that harder to do — normally you can just go to the account page on most online stores and follow links directly to the tracking. Here the interface requires you to cut and paste order numbers, and it’s buggy, reporting incorrect shipper names.

Unfortunately it’s becoming common for online stores to keep things in different warehouses around the country now. Some items I ordered, it turns out, while shipped quickly, were shipped from far away. They’ll arrive after Christmas. So now I have to go out and buy the items at stores, or different items in some cases, at higher prices, without the seductive $20 discount — and I then need to arrange return of the ordered items after they get here. And I’ll probably be out not only the money I paid for shipping (had I wanted them after Christmas I would have selected the free saver shipping option of course) but presumably return shipping.

A very unsatisfactory shopping experience.

How could this have been improved (other than by getting the items to me?)

  1. When they e-mail you about shipment, throw in a tracking link and also include the shipper’s expected delivery day. UPS and Fedex both give that, and even with the USPS you can provide decent estimates.
  2. Let me specify in the order, “I need this by Dec 23.” They might be able to say right then and there that “This item is in stock far away. You need to specify air shipping to do that.”
  3. Failing that, they could, when they finally get ready to ship it, look at what the arrival date will be, and, if you’ve set a drop-dead date, cancel the shipment if it won’t get to you on time. Yes, they lose a sale but they avoid a very disappointed customer.
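Step 3 amounts to a tiny guard run at ship time, comparing the shipper’s delivery estimate against the customer’s drop-dead date. A minimal sketch (the function and parameter names are invented for illustration):

```python
from datetime import date
from typing import Optional

def should_ship(estimated_delivery: date, drop_dead: Optional[date]) -> bool:
    """Cancel rather than ship an order that would arrive too late.

    drop_dead is the customer's "I need this by" date, or None
    if they didn't set one.
    """
    if drop_dead is None:
        return True  # no deadline: always ship
    return estimated_delivery <= drop_dead
```

Step 2 would run the same comparison at order time, when the customer can still upgrade to air shipping instead of cancelling.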

This does not just apply around Christmas. I often go on trips, and know I won’t be home on certain days. I may want to delay delivery of items around such days.

As I blogged earlier, it also would simplify things a lot if you could use the tracking interface of UPS, Fedex and the rest to reject or divert shipments in transit. If I could say “Return to sender” via the web on a shipment I know is a waste of time, the vendor wins, I win, and even the shipping company can probably set a price for this where they win too. The recipient saves a lot of hassle, and the vendor can also be assured the item has not been opened and quickly restock it as new merchandise. If you do a manual return they have to inspect, and even worry about people who re-shrinkwrap returns to cheat them.

Another issue that will no doubt come up — the Google discount was $20 off orders of $50 or more. If I return only some of the items, will they want to charge me the $20? In that case, you might find yourself in a situation where returning an item below $20 would cost you money! In this case I need to return the entire order except one $5 item I tossed on the order, so it won’t be an issue.

Jolly December to all. (Jolly December is my proposal for the Pastafarian year-end holiday greeting, a good salvo in the war on Christmas. If they’re going to invent a war on Christmas, might as well have one.)

Towards a Zero User Interface backup system

I’ve spoken before about ZUI (Zero User Interface) and how often it’s the right interface.

One important system that often has too complex a UI is backup. Because of that, backups often don’t get done. In particular offsite backups, which are the only way to deal with fire and similar catastrophe.

Here’s a rough design for a ZUI offsite backup. The only UI at a basic level is just installing and enabling it — and choosing a good password (that’s not quite zero UI but it’s pretty limited.)

Once enabled, the backup system will query a central server to start looking for backup buddies. It will be particularly interested in buddies on your same LAN (though it will not consider them offsite.) It will also look for buddies on the same ISP or otherwise close by, network-topology wise. For potential buddies, it will introduce the two of you and let you do bandwidth tests to measure your bandwidth.

At night, the tool would wait for your machine and network to go quiet, and likewise the buddy’s machines. It would then do incremental backups over the network. These would be encrypted with secure keys. Those secure keys would in turn be stored on your own machine (in the clear) and on a central server (encrypted by your password.)

The backup would be clever. It would identify files on your system which are common around the network — ie. files of the OS and installed software packages — and know it doesn’t have to back them up directly, it just has to record their presence and the fact that they exist in many places. It only has to transfer your own created files.
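This common-file cleverness is essentially content-addressable deduplication: hash each file, and only transfer content whose hash is not already replicated widely across the network. A rough sketch, where `known_common_hashes` stands in for the (assumed) central index of OS and package files:

```python
import hashlib

def file_digest(path):
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def plan_backup(paths, known_common_hashes):
    """Split files into those to transfer and those to merely record.

    known_common_hashes stands in for a central index of OS and
    package files already present on many machines; for those we
    only record presence, not content.
    """
    to_transfer, to_record = [], []
    for path in paths:
        digest = file_digest(path)
        if digest in known_common_hashes:
            to_record.append((path, digest))    # just note it exists
        else:
            to_transfer.append((path, digest))  # your own created file
    return to_transfer, to_record
```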

Your backups are sent, compressed, to two or more different buddies each. Regular checks are done to see if a buddy is still around; if a buddy leaves the net, the system quickly finds other buddies to store data on. Alas, some files, like video, images and music, are already compressed, so this means twice as much storage is needed for backup as the files took — though only for your own generated files. So you do have to have a very big disk, perhaps 3 times bigger than you need for yourself, because you must store data for the buddies just as they are storing for you. But disk is getting very cheap.

(Another alternative is RAID-5 style: you distribute each file across 3 or more buddies, encoded with a parity scheme, so that any one buddy can vanish and you can still recover the file. This means you may be able to get away with much less excess disk space. There are also redundant storage algorithms that let you tolerate the loss of 2 or even 3 of a larger pool of storers, at a much more modest cost than using double the space.)
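The parity idea can be illustrated with plain XOR, as in RAID-5: split the data into N chunks, store each on a different buddy, and store the XOR of all chunks on one more. Any single missing share is recoverable by XORing the rest. A toy sketch (equal-sized chunks assumed for brevity; real erasure codes handle multiple losses):

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(data: bytes, n: int):
    """Split data into n equal chunks plus one XOR-parity chunk."""
    assert len(data) % n == 0, "pad data to a multiple of n first"
    size = len(data) // n
    chunks = [data[i * size:(i + 1) * size] for i in range(n)]
    parity = reduce(xor_bytes, chunks)
    return chunks + [parity]

def recover(shares, lost_index):
    """Rebuild the one missing share by XORing all the others."""
    present = [s for i, s in enumerate(shares) if i != lost_index]
    return reduce(xor_bytes, present)
```

With n data shares and one parity share, the overhead is only 1/n extra space instead of a full second copy.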

All this is, as noted, automatic. You don’t have to do anything to make it happen, and if it’s good at spotting quiet times on the system and network, you don’t even notice it’s happening, except a lot more of your disk is used up storing data for others.

It is the automated nature that is so important. There have been other proposals along these lines, such as MNET and some commercial network backup apps, but never an app you just install, do quick setup and then forget about until you need to restore a file. Only such an app will truly get used and work for the user.

Restore of individual files (if your system is still alive) is easy. You have the keys on file, and can pull your file from the buddies and decrypt it with the keys.

Loss of a local disk is more work, but if you have multiple computers in the household, the keys could be stored on other computers on the same LAN (alas this does require UI to approve this) and then you can go to another computer to get the keys to rebuild the lost disk. Indeed, using local computers as buddies is a good idea due to speed, but they don’t provide offsite backup. It would make sense for the system, at the cost of more disk space, to do both same-LAN backup and offsite. Same-LAN for hardware failures, offsite for building-burns-down failures.

In the event of a building-burns-down failure, you would have to go to the central server and decrypt your keys with that password. Then you can get your keys, find your buddies and restore your files. Restore would not be ZUI, because we need no motivation to do a restore. It is doing regular backups we lack motivation for.

Of course, many people have huge files on disk. This is particularly true if you do things like record video with MythTV or make giant photographs, as I do. This may be too large for backup over the internet.

In this case, the right thing to do is to backup the smaller files first, and have some UI. This UI would warn the user about this, and suggest options. One option is to not back up things like recorded video. Another is to rely only on local backup if it’s available. Finally, the system should offer a manual backup of the large files, where you connect a removable disk (USB disk for example) and transfer the largest files to it. It is up to you to take that offsite on a regular basis if you can.

However, while this has a UI and physical tasks to do, if you don’t do it it’s not the end of the world. Indeed, your large files may get backed up, slowly, if there’s enough bandwidth.

Generic internet appliances

Normally I’m a general-purpose computing guy. I like that the computer that runs my TV with MythTV is a general purpose computer that does far more than a Tivo ever would. My main computer is normally on and ready for me to do a thousand things.

But there is value in specialty internet appliances, especially ones that can be very low power and small. But it doesn’t make sense to have a ton of those either.

I propose a generic internet appliance box. It would be based on the same small single-board computers which run linux that you find in the typical home router and many other small network appliances. It would ideally be so useful that it would be sold in vast quantities, either in its generic form or with minor repurposings.

Here’s what would be in level 1 of the box:

  • A small, single-board linux computer with low power processor such as the ARM
  • Similar RAM and flash to today’s small boxes, enough to run a modest linux.
  • WiFi radio, usually to be a client — but presumably adaptable to make access points (in which case you need ethernet ports, so perhaps not.)
  • USB port
  • Infrared port for remote control or IR keyboard (optionally a USB add-on)

Optional features would include:

  • Audio output with low-fi speaker
  • Small LCD panel
  • DVI output for flat panel display
  • 3 or 4 buttons arranged next to the LCD panel

The USB port on the basic unit provides a handy way to configure the box. On a full PC, write a thumb-drive with the needed configuration (in particular WiFi encryption keys) and then move the thumb drive to the unit. Thumb drives can also provide a complete filesystem, software or can contain photo slide shows in the version with the video output. Thumb drives could in fact contain entire applications, so you insert one and it copies the app to the box’s flash to give it a personality.

Here are some useful applications:

  • In many towns, you can see over the internet when a bus or train will arrive at your stop. Program the appliance with your stop and how long it takes to walk there after a warning. Press a button when you want to leave, and the box announces a countdown over the speaker telling you when to go, so you meet the transit perfectly.
  • Email notifier
  • MP3 output to stereo or digital speakers
  • File server (USB connect to external drives — may require full ethernet.)
  • VOIP phone system speakerphone/ringer/announcer
  • Printer server for USB printers
  • Household controller interface (X10, thermostat control, etc.)

Slap it on the back of a cheap flat panel display mounted on the wall, connected with a video cable. Now offer a vast array of applications such as:

  • Slide show
  • Security video (low-res unless there is an mpeg decoder in the box.)
  • Weather/News/Traffic updates
  • With an infrared keyboard, be a complete terminal to other computer apps and a minimal web browser.

There are many more applications people can dream up. The idea is that one cheap box can do all these things, and since it could be made in serious quantities, it could end up cheaper than the slightly more specialized boxes, which themselves retail for well under $50 today. Indeed today’s USB printer servers turn out to be pretty close to this box.

The goal is to get these out and let people dream up the applications.

In Edmonton

I'm in Edmonton. Turns out to be the farthest north I've been on land (53 degrees 37 minutes at the peak) after another turn through the Icefields Parkway, surely one of the most scenic drives on the planet. My 4th time along it, though this time it was a whiteout. Speaking tomorrow at the CIPS ICE conference on privacy, nanotechnology and the future at 10:15.

Idea of the day. I joined Fairmont Hotels President's Club while at the Chateau Lake Louise because it gave me free internet. When I got to the Fairmont Jasper Lodge my laptop just worked with no login, and I was really impressed -- I figured they had recorded my MAC address as belonging to a member of their club, and were going to let me use it with no login. Alas, no, the Jasper lodge internet (only in main lobby) was free for all. But wouldn't that be great if all hotels did that? Do any of the paid wireless roaming networks do this? (I guess they might be afraid of MAC cloning.) It would also allow, with a simple interface, a way for devices like Wifi SIP phones to use networks that otherwise require a login.

Of course, as we all know, the more expensive the hotel, the more likely the internet is not only not included, it's way overpriced. At least Fairmont gave one way around this. Of course I gave them a unique E-mail address created just for them, so if they spam me I can quickly disable them. But once again I, like most of us, find myself giving up privacy for a few hotel perks.

Wire-crawling robot that lays optical fiber

In thinking about how to reduce the cost of bringing fiber to everybody (particularly for block-area-networks built by neighbours) I have started wondering if we could build a robot that is able to traverse utility poles by crawling along wires — either power, phone or cable-TV wires. The robot would unspool fiber optic cable behind it and deploy wire-ties to keep it attached. Human beings would still have to eventually climb the poles and install taps or junctions and secure these items, but their job would be much easier.

Robots that can crawl along cables already exist. The hard part is traversing the poles. Now it turns out finding live electric wires is something that’s very easy for a robot to do; they stick out like a live wire in the EM spectrum. The poles of course have insulators, junctions, tie-downs and other obstacles. Crossing them may be hard in certain cases (in which case a human would have to help, either by tele-operation or by climbing the pole). It may be possible to have a very small robot that is able to follow the current (it’s easy to tell the lines to the houses from the main lines) and cross a pole like a bug, and then, once safely on the other side, pull the larger robot across with a small tether. Again, it won’t always work, but if you can get it to work enough of the time, you can install fiber with far less time and labour than the manual approach. Fiber of course can be tied to power lines because it is non-conductive, though it’s even better if you can run it along phone or cable lines.

Not that any of these companies will want to give permission to competitors. And you want to pull multiple fibers, not so much for the bandwidth — we can do terabits in a single fiber if we want to — but for the backup when one fiber breaks.

If the robots get good enough, they could even string fiber into rural areas, following long chains of power or phone lines with just a single human assistant. Of course overhead wires are going to be more prone to breakage, but with these robots, repairs could be fast and cheap.

There are already robots out there which can crawl storm sewers to install fiber, which is another good alternative. Indeed, a robot that can even crawl real sewage lines to put in fiber which comes out your household stack is not out of the question, if it’s in a strong enough casing.

Time for RSS and the aggregators to understand small changes

Over 15 years ago I proposed that USENET support the concept of “replacing” an article (which would mean updating it in place, so people who had already read it would not see it again) in addition to superseding an article (which presented the article as new to those who had read it before, while those who hadn’t would see only the new version, not both). I never did get that into the standard, but now it’s time to beg for it in USENET’s successors: RSS and its cousins.

I’m tired of the fact that my blog reader offers only two choices — see no updates to articles, or see the articles as new when they are updated. Often the updates are trivial — even things like fixing typos — and I should not see them again. Sometimes they are serious additions or even corrections, and people who read the old one should see them.

Because feed readers aren’t smart about this, it not only means annoying minor updates, but also people are hesitant to make minor corrections because they don’t want to make everybody see the article again.

Clearly, we need a checkbox in updates to say if the update is minor or major. More than a checkbox, the composition software should be able to look at the update, and guess a good default. If you add a whole paragraph, it’s major. If you change the spelling of a word, it’s minor. In addition to providing a good guess for the author, it can also store in the RSS feed a tag attempting to quantify the change in terms of how many words were changed. This way feed readers can be told, “Show me only if the author manually marked the change as major, or if it’s more than 20 words” or whatever the user likes.
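Guessing the default mechanically is a word-level diff count, which a composer could compute with any standard diff routine. A rough sketch (the 20-word threshold is arbitrary and, as argued above, should be user-tunable):

```python
import difflib

def changed_word_count(old: str, new: str) -> int:
    """Number of words inserted, deleted or replaced between revisions."""
    sm = difflib.SequenceMatcher(None, old.split(), new.split())
    changed = 0
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op != "equal":
            changed += max(i2 - i1, j2 - j1)
    return changed

def guess_update_kind(old: str, new: str, threshold: int = 20) -> str:
    """Suggested default for the minor/major checkbox; author can override."""
    return "major" if changed_word_count(old, new) >= threshold else "minor"
```

The count itself could also be stored as a tag in the feed item, so readers can apply their own threshold rather than the publisher’s.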

Wikis have had the idea of a minor change checkbox for a while, it’s time for blogs to have it too.

Of course, perhaps better would be a specific type of update or new post that preserves thread structure, so that a post with an update is a child of a parent. It is seen with the parent by those who have not yet seen the parent, but as an update on its own for those who did see it. For those who skipped the parent (if we know they skipped it), the update also need not be shown.

RSS aggregator to pull threads from multiple intertwined blogs

It’s common in the blogosphere for bloggers to comment on the posts of other bloggers. Sometimes blogs show trackbacks to let you see those comments with a posting. (I turned this off due to trackback spam.) In some cases we effectively get a thread, as might appear in a message board/email/USENET, but the individual components of the thread are all on the individual blogs.

So now we need an RSS aggregator to rebuild these posts into a thread one can see and navigate. It’s a little more complex than threading in USENET, because messages can have more than one parent (ie. link to more than one post) and may not link directly at all. In addition, timestamps only give partial clues as to position in a thread since many people read from aggregators and may not have read a message that was posted an hour ago in their “thread.”
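Because posts can link to several earlier posts, rebuilding these threads means grouping a directed graph into connected components rather than building a tree. A sketch of finding threads among subscribed feeds using union-find (the `posts` structure, a dict of URL to outbound links, is invented for illustration):

```python
from collections import defaultdict

def find_threads(posts):
    """Group posts into threads via the links between them.

    posts: dict of url -> list of urls that post links to.
    Returns a list of sets, each set one connected thread.
    Union-find handles posts with multiple parents cleanly.
    """
    parent = {url: url for url in posts}

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for url, links in posts.items():
        for target in links:
            if target in posts:  # only links among subscribed feeds
                union(url, target)

    groups = defaultdict(set)
    for url in posts:
        groups[find(url)].add(url)
    # single unlinked posts are not threads
    return [g for g in groups.values() if len(g) > 1]
```

Ordering posts within a thread is the harder part, since, as noted, timestamps only partially reflect who had read what.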

At a minimum, existing aggregators (like bloglines) could spot sub-threads existing entirely among your subscribed feeds, and present those postings to you. You could also define feeds which are unsubscribed but which you wish to see or be informed of postings from in the event of a thread. (Or you might have a block-list of feeds you don’t want to see contributions from.) They could just have a little link saying, “There’s a thread including posts from other blogs on this message” which you could expand, and that would mark those items as read when you came to the other blog.

Blog search tools like Technorati could also spot these threads, and present a typical thread interface for perusing them. Both readers and bloggers would be interested in knowing how deep the threads go.

Better handling of reading news/blogs after being away

I’m back from Burning Man (and Worldcon), and though we had a decently successful internet connection there this time, you don’t want to spend time at Burning Man reading the web. This presents an instance of one of the oldest problems in the “serial” part of the online world: how do you deal with the huge backlog of stuff to read from tools that expect you to read regularly?

You get backlogs of your E-mail of course, and your mailing lists. You get them for mainstream news, and for blogs. For your newsgroups and other things. I’ve faced this problem for almost 25 years as the net gave me more and more things I read on a very regular basis.

When I was running ClariNet, my long-term goal list always included a system that would attempt to judge the importance of a story as well as its topic areas. I had two goals in mind for this. First, you could tune how much news you wanted about a particular topic in ordinary reading. By setting how important each topic was to you, a dot-product of your own priorities and the importance ratings of the stories would bring to the top the news most important to you. Secondly, the system would know how long it had been since you last read news, and could dial down the volume to show you only the most important items from the time you were away. News could also simply be presented in importance order, and you could read until you got bored.
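The dot-product ranking described here is simple to sketch: each story carries per-topic importance scores, the reader keeps per-topic interest weights, and the product sorts the backlog. All field names below are illustrative:

```python
def story_score(story_topics, user_weights):
    """Dot product of a story's per-topic importance ratings and
    the reader's per-topic interest; unknown topics count as zero."""
    return sum(importance * user_weights.get(topic, 0.0)
               for topic, importance in story_topics.items())

def rank_backlog(stories, user_weights, max_items=None):
    """Sort a backlog by relevance; optionally trim to the top few
    items for a reader who has been away a long time."""
    ranked = sorted(stories,
                    key=lambda s: story_score(s["topics"], user_weights),
                    reverse=True)
    return ranked if max_items is None else ranked[:max_items]
```

The `max_items` cap is what implements the second goal: the longer you were away, the smaller the fraction of the backlog you are shown.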

There are options to do this for non-news as well, such as having professional editors rank items. One advantage you get when items (be they blog posts or news) get old is the chance to gather data on reading habits. You can tell which stories are most clicked on (though not as easily with full RSS feeds) and also which items get the most comments. Asking users to rate items is usually not very productive. Some of these techniques (like using web bugs to track readership) could be privacy invading, but they could be done through random sampling.

I propose, however, that one way or another popular, high-volume sites will need to find some way to prioritize their items for people who have been away a long time and regularly update these figures in their RSS feed or other database, so that readers can have something to do when they notice there are hundreds or even thousands of stories to read. This can include sorting using such data, or in the absence of it, just switching to headlines.

It’s also possible for an independent service to help here. Already several toolbars like Alexa and Google’s track net ratings, and get measurements of net traffic to help identify the most popular sites and pages on the web. They could adapt this information to give you a way to get a handle on the most important items you missed while away for a long period.

For E-mail, there is less hope. There have been efforts to prioritize non-list e-mail, mostly around spam, but people are afraid any real mail actually sent to them has to be read, even if there are 1,000 of them as there can be after two weeks away.

Anti-Phishing -- warn if I send a password somewhere I've never sent it

There are many proposals out there for tools to stop Phishing. Web sites that display a custom photo you provide. “Pet names” given to web sites so you can confirm you’re where you were before.

I think we have a good chunk of one anti-phishing technique already in place with the browser password vaults. Now I don’t store my most important passwords (bank, etc.) in my password vault, but I do store most medium importance ones there (accounts at various billing entities etc.) I just use a simple common password for web boards, blogs and other places where the damage from compromise is nil to minimal.

So when I go to such a site, I expect the password vault to fill in the password. If it doesn’t, that’s a big warning flag for me. And so I can’t easily be phished for those sites. Even skilled people can be fooled by clever phishes. For example, a test phish using a look-alike domain (two “v”s instead of a “w” looks identical in many fonts) fooled even skilled users who check the SSL lock icon, etc.

The browser should store passwords in the vault, and even the “don’t store this” passwords should have a hash stored in the vault unless I really want to turn that off. Then the browser should detect if I ever type a string into any box which matches the hash of one of my passwords. If my password for bankofthewest is “secretword” and I use it there, no problem. “secretword” isn’t stored in my password vault, but the hash of it is. If I ever type “secretword” into any other site at all, I should get an alert. If it really is another site of the bank, I will examine that and confirm to send the password. Hopefully I’ll do a good job of examining — it’s still possible I’ll be fooled by a convincing look-alike domain, but other tricks won’t fool me.
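The scheme boils down to keeping salted hashes of every password ever typed, keyed by the sites where the user approved them, and flagging the same string typed at an unfamiliar site. A minimal sketch (a real browser would use its vault store and per-entry salts; the class and method names here are invented):

```python
import hashlib
import os

class PasswordWatch:
    """Warn when a known password is typed into an unfamiliar site."""

    def __init__(self):
        self.salt = os.urandom(16)
        self.seen = {}  # password hash -> set of approved sites

    def _hash(self, password: str) -> bytes:
        # PBKDF2 so stored hashes are slow to brute-force offline
        return hashlib.pbkdf2_hmac("sha256", password.encode(),
                                   self.salt, 100_000)

    def on_password_entry(self, site: str, password: str) -> bool:
        """Return True if this entry looks safe, False to warn."""
        h = self._hash(password)
        sites = self.seen.setdefault(h, set())
        if not sites or site in sites:
            sites.add(site)  # first use, or an already-approved site
            return True
        return False         # known password, unfamiliar site: phish?

    def approve(self, site: str, password: str):
        """User examined the warning and confirmed the site is fine."""
        self.seen.setdefault(self._hash(password), set()).add(site)
```

The `approve` path is also where the “don’t warn me about re-use of this password” box for who-cares passwords would hook in.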

The key needs in any system like this are that it warns you of a phish and that it rarely gives you a false warning. The latter is hard to do, but this comes decently close. However, since I suspect most people are like me and have a common password they use again and again at “who-cares” sites, we don’t want to be warned all the time. The second time we use that password, we’ll get a warning, and we need a box to say, “Don’t warn me about re-use of this password.”

Read on for subtleties…

No, senator Stevens was misquoted...

Everybody in the blogosphere has heard something about Alaska’s Ted Stevens calling the internet a series of tubes.

They just heard him wrong. His porn filters got turned off and he discovered the internet was a series of pubes.

(And, BTW, I think we’ve been unfair to Stevens. While it wasn’t high traffic that delayed his E-mail — “an internet” — a few days, his description wasn’t really that bad… for a senator.)

Judge allows EFF's AT&T lawsuit to go forward

Big news today. Judge Walker has denied the motions — particularly the one by the federal government — to dismiss our case against AT&T for cooperating with the NSA on warrantless surveillance of phone traffic and records.

The federal government, including the heads of the major spy agencies, had filed a brief demanding the case be dismissed on “state secrets” grounds. This common law doctrine, which is often frighteningly successful, allows cases to be dismissed, even if they are of great merit, if following through would reveal state secrets.

Here is our brief note, which has a link to the decision.

This is a great step. Further application of the state secrets rule would have made legal oversight of surveillance by spy agencies moot. We can write all the laws we want governing how spies may operate, and how surveillance is to be regulated, but if nobody can sue over violations of those laws, what purpose do they really have? Very little.

Now our allegations can be tested in court.

On the refutation of Metcalfe's law

Recently IEEE Spectrum published a paper refuting Metcalfe’s law — an observation (not really a law) by Bob Metcalfe that the “value” of a network increases with the square of the number of people/nodes on it. I was asked to be a referee for this paper, and while they addressed some of my comments, I don’t think they addressed the principal one, so I am posting my comments here now.

My main contention is that in many cases the value of a network actually starts declining after a while and becomes inversely proportional to the number of people on it. That’s because noise (such as flamage and spam) and unmanageable signal (too many serious messages) rises with the size and eventually reaches a level where the cost of dealing with it surpasses the value of the network. I’m thinking mailing lists in particular here.
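That contention can be sketched as a toy model. This is my own illustration, not anything from the Spectrum paper, and the constants are made up: gross value grows with n² per Metcalfe, but the cost of handling noise and excess signal grows faster once the group is large, so net value peaks and then declines.

```python
# Hypothetical toy model (illustrative constants, not measured data):
# gross value grows with n^2 (Metcalfe), while noise-handling cost grows
# with n^3, so net value peaks at some size and then falls off.

def net_value(n, a=1.0, b=0.001):
    """Net value of an n-person list: a*n^2 gross, minus b*n^3 noise cost."""
    return a * n**2 - b * n**3

# Find the list size where net value peaks under these made-up constants.
peak = max(range(1, 2000), key=net_value)
print(peak)  # 667 -- beyond this, each new member makes the list worse
```

With any constants of this shape the curve has the same qualitative story: a mailing list gets more valuable up to a point, then the flamage and volume overwhelm the benefit.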

You can read my referee’s comments on Metcalfe’s law though note that these comments were written on the original article, before some corrections were made.

How only Google can pull off pay-to-perform ads

Bruce Schneier today compliments Google on trying out pay-to-perform ads as a means around click-fraud, but worries that this is risky because you become a partner with the advertiser. If their product doesn’t sell, you don’t make money.

And that’s a reasonable fear for any small site accepting pay-to-perform ads. If the product isn’t very good, you aren’t going to get a cut of much. Many affiliate programs really perform poorly for the site, though a few rare ones do well.

However, Google has a way around this. While the first step on Google’s path to success was to make a search engine that gave better results, how they did advertising was just as important. At a time when everybody was desperate for web advertising, and sites were willing to accept annoying flash animations, pop-ups and pop-unders and even adware, Google introduced ads that were purely text. In addition, they had the audacity, it seemed, to insist that pay-per-click bidding advertisers provide popular ads people would actually click through. If people are not clicking on your ad, Google stops running it. They even do this if there are no other ads to place on the page. They had the guts to say, “We’ll sell pay per click, but if your ad isn’t good, we won’t run it.” Nobody was turning down business then, and few are now.

Sites of course don’t want to be paid per click, or a cut of sales. They want a CPM, and that’s about all they want, as long as the ads are otherwise a good match for the site. Per-click costs and percentages are just a means to figuring out a CPM. Advertisers don’t want to pay CPMs, they want to pay for results, like clicks or sales.

Google found a great way to combine the two. They offered pay per click, but they insisted that the clicks generate enough CPM to keep them happy.
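The way those two sides meet can be shown with simple arithmetic. This is a simplified sketch with numbers I made up, not Google’s actual formula: whatever the advertiser bids per click, the site effectively earns an eCPM equal to click-through rate times cost per click times 1000 impressions.

```python
# Simplified sketch (illustrative numbers, not Google's actual accounting):
# a pay-per-click bid turns into an effective CPM for the site via the
# ad's click-through rate.

def ecpm(ctr, cpc):
    """Effective revenue per 1000 impressions for a pay-per-click ad."""
    return ctr * cpc * 1000

# A well-written ad with a modest bid can out-earn a high bid nobody clicks:
print(ecpm(0.02, 0.50))   # 2% CTR at $0.50/click -> $10.00 eCPM
print(ecpm(0.001, 2.00))  # 0.1% CTR at $2.00/click -> $2.00 eCPM
```

This is why Google can refuse to run a poorly performing ad even at a high per-click bid: the bid isn’t what matters, the revenue per impression is.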

The same will apply here. They will offer pay for performance, but those ads will be competing with bidders who are bidding pay-per-click. Google will run, as it always has, the type of ad that gets the highest results. If you bid pay per performance, and the PPCs are bidding higher, your ad won’t run. And even if there are not higher PPCs, if your ad isn’t working and converting into sales and generating revenue for Google, I suspect they will just not run it. They can afford to do this, they are Google.

And so they will get the best of both worlds again. Advertisers who can come up with products that can sell through ads will pay for actual sales, and love how they can calculate how well it does for them. Google will continue to get good CPMs, which is what they care about, and what Adsense partners (including myself) care about. And they will have eliminated clickfraud at least on these types of ads. Once again they stay on top.

(Disclaimer: I am a consultant to Google, and am in their Adsense program. If you aren’t in it, there is a link in the right-hand bar you can use to join that program. I get a pay for performance credit if you do. Unlike Google’s PPC ads, where Adsense members are forbidden by contract from encouraging people to click on the ads, there is no need for such strictures against pay for performance ads; in fact there’s every reason to encourage it.)

PayPal should partner with UPS and other shippers

You’ve seen me write before of a proposal I call addresscrow to promote privacy when items are shipped to you. Today I’ll propose something more modest, with non-privacy applications.

I would like PayPal, and other payment systems (Visa/MC/Google Checkout) to partner with the shipping companies such as UPS that ship the products bought with these payment systems.

They would produce a very primitive escrow, so that payment to the seller was transferred upon delivery confirmation by the shipper. If there is no delivery, the money is not transferred, and is eventually refunded. When you sign for the package (or if you have delivery without signature, when it’s dropped off) that’s when the money would be paid to the vendor. You, on the other hand, would pay the money immediately, and the seller would be notified you had paid and the money was waiting pending receipt. The payment company would get to hold the money for a few days, and make some money on the float, if desired, to pay for this service.
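The proposed flow is a small state machine. This is a minimal sketch of my own; all the names are invented for illustration, and a real payment system would need timeouts, disputes, and partial refunds.

```python
# Minimal sketch of the proposed escrow flow (all names invented here):
# buyer pays up front, funds are held, and the shipper's delivery
# confirmation releases them to the seller; no confirmation means a refund.

class EscrowPayment:
    def __init__(self):
        self.state = "created"

    def buyer_pays(self):
        self.state = "held"          # payment system holds the float

    def delivery_confirmed(self):
        if self.state == "held":
            self.state = "released"  # seller is paid on shipper confirmation

    def timeout_refund(self):
        if self.state == "held":
            self.state = "refunded"  # no delivery: buyer gets the money back

p = EscrowPayment()
p.buyer_pays()
p.delivery_confirmed()
print(p.state)  # released
```

The point of the sketch is that the shipper’s delivery scan, not the seller’s say-so, is the event that moves money.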

Of course, sellers could ship you a lump of coal and you would still pay for it by signing for it. However, this is a somewhat more overt fraud that, like all fraud, must be dealt with in other ways. This system would instead help eliminate delays in shipping, since vendors would be highly motivated to get things shipped and delivered, and it would eliminate any communications problems standing in the way of getting the order processed. There is nothing much in it for the vendor, of course, other than a means to make customers feel more comfortable about paying up front. But making customers feel more comfortable is no small thing.

Extended, the data from this could go into reputation systems like eBay’s feedback, so that it could report for buyers how promptly they paid, and for sellers how promptly they shipped or delivered. (The database would know both when an item was shipped and when it was received.) eBay has resisted the very obvious idea of having feedback show successful PayPal payment, so I doubt they will rush to do this either.

EBay: Sniping good or bad or just a change of balance?

Ebayers are familiar with what is called bid “sniping.” That’s placing your one real bid just a few seconds before the auction closes. People sometimes do it manually; more often they use auto-bidding software which performs the function. If you know your true max value, it makes sense.

However, it generates a lot of controversy and anger. This is for two reasons. First, there are many people on eBay who like to play the auction as a game over time: bidding, being outbid, and rebidding. They either don’t want to enter a true-max bid, or can’t figure out what that value really is. They are often outbid by a sniper, and feel very frustrated, because given the time they feel they would have bid higher and taken the auction.

This feeling is vastly strengthened by the way eBay treats bids. The actual buyer pays not the price they entered, but the price entered by the 2nd place bidder, plus an increment. This makes the 2nd place buyer think she lost the auction by just the increment, but in fact that’s rarely likely to be true. But it still generates great frustration.
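That second-price-plus-increment rule is easy to work through with numbers. A brief sketch, with an illustrative bid increment (real eBay increments vary by price bracket):

```python
# Worked example of eBay-style proxy bidding (increment is illustrative):
# the winner pays the runner-up's maximum plus one bid increment, capped at
# the winner's own maximum.

def winning_price(bids, increment=1.00):
    """Price paid by the high bidder under second-price-plus-increment rules."""
    top, second = sorted(bids, reverse=True)[:2]
    return min(top, second + increment)

# A sniper's true max is $80; the runner-up stopped at $45. The winner pays
# $46, so the runner-up sees a $1 gap even though the real gap was $35.
print(winning_price([80.00, 45.00, 30.00]))  # 46.0
```

This is exactly why the losing bidder’s frustration is usually misplaced: the displayed price reveals the runner-up’s bid plus an increment, not how high the winner was actually prepared to go.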

The only important question about bid sniping is: does it benefit the buyers who use it? If it lets them take an auction at a lower price, because a non-sniper doesn’t get in the high bid they were actually willing to make, then indeed it benefits the buyer, and makes the seller (and, interestingly, eBay) slightly less.

There are many ways to write the rules of an auction. They all tend to benefit either the buyer or the seller by some factor. A few have benefits for both, and a few benefit only the auction house. Most are a mix. In most auction houses, like eBay, the auction house takes a cut of the sale, and so anything that makes sellers get higher prices makes more money on such auctions for the auction house.

Read on…

Travel laptop for couples

We often travel as a couple, and of course both have the same e-mail and web addictions that all of you probably have. Indeed, these days if you don’t get to your e-mail and other stuff for a long period, it becomes unmanageable when you return. For this reason, we bring at least one, and often two laptops on trips.

When we bring one, it becomes a time-waster. Frankly, our goal is to spend as little time in our hotel room on the net as possible, but it’s still very useful not just for e-mail but also travel bookings and research, where to eat etc. When we have only one computer — or when we have two but the hotel only provides a connection for one — it means we have to spend much more time in the hotel room.

It would be nice to see a laptop adapted for couples’ use. In many cases, this could be just a little software. Many laptops already can go “dual head”, putting out a different screen on their VGA connector than goes to the built-in panel. So a USB keyboard and a super-thin laptop-sized flat panel would be all you need, along with power for the panel. In the future, as more and more hotel rooms adopt HDTVs, one could use that instead of the display.

Of course, desktop flat panels are bigger than laptop screens, so this would need to be a modified version of the same panels put into laptops, which are readily available. A special connector for it, with power, would make this even better. The goal is something not much larger than a clipboard and mini-keyboard. It could even be put in an ultrathin laptop case (with no motherboard, drives or even battery.)

Now, as to software. In Linux, having two users on two screens is already pretty easy. It’s just a bit of configuration. I would hope the BSD-based Mac is the same. Windows is more trouble, since it really doesn’t have as much of a concept of two desktops with two users logged in. (Indeed, I have wondered why we haven’t seen a push for dual-user desktop computers, since it’s not at all uncommon to see a home office with two computers in it for two members of the family, but for which both are used together only rarely.)

On Windows, you would probably need to just have one user logged in, and both people would be that user to Windows. However, you would have different instances of Firefox/Mozilla, for example, which can use different profiles so each person has their own browser settings and bookmarks, their own e-mail settings etc. It would be harder to have both people run their own MS Word, but it might be doable.

Some variants of the idea include making a “thin client” box that plugs into the main computer via USB or even talks bluetooth to it, and has its own power supply. It might do something as simple as VNC to a virtual screen on the main box. Or of course it could plug into ethernet but that’s often taken on the main box to talk to the hotel network if the hotel has a wired connection. (More often they have wireless now.) The thin client could also act as a hub to fix this.

If you want to bring two laptops, you can make things work by using internet connection sharing over wired or wireless ad-hoc network, though it’s much more work than it should be to set up. But my goal is to avoid the weight, size and price of a 2nd laptop, though price is not that big an issue because I am presuming one has other uses for it.

IMAP server should tell you your SMTP parameters

When you set up a mail client, you have to configure mail reading servers (either IMAP or POP) and also a mail sending server (SMTP). In the old days you could just configure one SMTP server, with no userid or password. Due to spam-blocking, roaming computers have it hard, and either must change SMTP servers as they roam, or use one that has some sort of authentication scheme that opens it up to you and not everybody.

Worse, many ISPs now block outgoing SMTP traffic, insisting you use their SMTP server (usually without a password.) Sometimes your home site has to run an SMTP server at a non-standard port to get you past this.

I propose that IMAP (and possibly POP) include an extension so that the IMAP server can offer your client information on how to send mail. At the very least, it simplifies configuration for users, who now only have to provide one server identity. From there the system configures itself. (Of course, the other way to do this is to identify such servers in DHCP.)

This also simplifies the situation where you want to use a different SMTP server based on which mail account you are working on, something DHCP can't handle.

The IMAP server would offer a list of means to send mail. These could include a port number, and a protocol, which could be plain SMTP, or SMTP over SSL or TLS, or even some new protocol down the road. And it could also offer authentication, because you have already authenticated to the IMAP server with your userid and password. It could tell you a permanent userid and password you can use with the SMTP server, or it could tell you that you don't need one (because your IP address has been enabled for the duration of your IMAP session in the IMAP-before-SMTP approach.) It could also offer a temporary authentication token, which is good only for that session or some period of time after it. Ideally we would have IMAP over SSL/TLS, and so these passwords and tokens would not be sent in the clear.
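To make the shape of the proposal concrete, here is a sketch of how a client might consume such an offer. Everything here is invented for illustration: the offer format, the field layout, and the auth-method names are my own, since no such IMAP extension exists.

```python
# Sketch of a client consuming a hypothetical "send-mail offer" from an IMAP
# server. The line format ("host port protocol auth-method") and all names
# are invented here for illustration; no such IMAP extension exists today.

def parse_sendmail_offer(line):
    """Parse one hypothetical offer line into a config dictionary."""
    host, port, protocol, auth = line.split()
    return {"host": host, "port": int(port), "protocol": protocol, "auth": auth}

offers = [
    parse_sendmail_offer("mail.example.com 465 smtps session-token"),
    parse_sendmail_offer("mail.example.com 587 smtp+starttls password"),
]

# Prefer a TLS-wrapped submission port with a temporary session token,
# since the token never has to be stored or sent in the clear.
best = next(o for o in offers if o["auth"] == "session-token")
print(best["port"])  # 465
```

The client-side logic is trivial; the whole benefit is that the user typed in one server name and everything else configured itself.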

With a list of possible methods, the client could choose the best one. Or, of course, it could choose one that was programmed in by a user who did custom configure their own SMTP information.

It's also worth noting that it would be possible, down the road, to use the very same IMAP port for a slightly modified SMTP session to an IMAP server set up to handle this. This could handle firewalls that block all but that port. However, the main benefit is to the user with simpler configuration.

Web sites -- stop being clever about some structured data

A lot of the time, on web forms, you will see some sort of structured field, like an IP address, or credit card number, or account number, broken up into a series of field boxes. You see this in program GUIs as well.

On the surface it makes sense. Never throw away structure information. If you’re parsing a human name, it may be impossible to parse it as well from a plain string compared to a set of boxes for first, last and middle names.

But this does not make sense if the string can always be reliably parsed, as is the case for IP addresses and account numbers and WEP keys and the rest. Using multiple boxes just means users can’t cut and paste. And it’s also hard to type unless you are ready to hit TAB at a point where your mind wants to type something else. Some sites use javascript to auto-forward you to the next box when you’ve entered enough in one box, but it’s never perfect and usually doesn’t do backspace well.
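The IP address case shows how little the extra boxes buy you. A minimal sketch using Python’s standard ipaddress module: one ordinary text box, validated in one call, with cut-and-paste intact.

```python
# Parsing a dotted-quad IP from a single text box is fully reliable, so four
# separate boxes add nothing and break cut-and-paste. Python's standard
# ipaddress module validates the whole string in one call.

import ipaddress

def parse_ip(text):
    """Return the four octets of a pasted IPv4 address, or None if invalid."""
    try:
        addr = ipaddress.IPv4Address(text.strip())
    except ipaddress.AddressValueError:
        return None
    return str(addr).split(".")

print(parse_ip(" 192.168.1.10 "))  # ['192', '168', '1', '10']
print(parse_ip("not an ip"))       # None
```

If the form genuinely needs the octets separately, it can split them after validating, rather than making the user type them separately.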

Think about it. The multi-box idea, taken to its extreme, would have every form enter an e-mail address with a username box and a domain name box, with an @ printed between them. This would stop you from entering e-mail addresses without at signs. But fortunately nobody does it. We can always parse an e-mail address, and we don’t want to subject people to the pains of typing it in a strange way.

Now I have to admit I’ve been tempted sometimes on international phone numbers, because parsing them is hard. The number of digits in the various components, be they area codes or exchanges, varies from region to region and I am not sure anybody has written a perfect parser. But nor do people want to enter phone numbers with tabs. And they want to cut and paste. Remember this when designing your next web form.
