Internet

Time for RSS and the aggregators to understand small changes

Over 15 years ago I proposed that USENET support the concept of “replacing” an article (updating it in place, so people who had already read it would not see it again) in addition to superseding an article, which presented the article as new to those who had read it before, while showing only the new version to those who hadn’t. I never did get that into the standard, but now it’s time to beg for it in USENET’s successors, RSS and its cousins.

I’m tired of the fact that my blog reader offers only two choices — see no updates to articles, or see the articles as new when they are updated. Often the updates are trivial — even things like fixing typos — and I should not see them again. Sometimes they are serious additions or even corrections, and people who read the old one should see them.

Because feed readers aren’t smart about this, we not only see annoying minor updates; people are also hesitant to make minor corrections because they don’t want to make everybody see the article again.

Clearly, we need a checkbox in updates to say if the update is minor or major. More than a checkbox, the composition software should be able to look at the update, and guess a good default. If you add a whole paragraph, it’s major. If you change the spelling of a word, it’s minor. In addition to providing a good guess for the author, it can also store in the RSS feed a tag attempting to quantify the change in terms of how many words were changed. This way feed readers can be told, “Show me only if the author manually marked the change as major, or if it’s more than 20 words” or whatever the user likes.
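Here’s a minimal sketch of how the composition software side might work, assuming a word-level diff is a good enough proxy for the size of a change; the <ext:change> element name is invented for illustration, not an existing RSS extension:

```python
# Sketch: classify an edit as minor/major and emit a hypothetical feed tag.
# The difflib heuristic and the <ext:change> element are invented for
# illustration; no such RSS extension exists.
import difflib

def classify_update(old_text: str, new_text: str, major_threshold: int = 20):
    old_words, new_words = old_text.split(), new_text.split()
    sm = difflib.SequenceMatcher(None, old_words, new_words)
    changed = sum(max(i2 - i1, j2 - j1)
                  for op, i1, i2, j1, j2 in sm.get_opcodes() if op != "equal")
    kind = "major" if changed >= major_threshold else "minor"
    return kind, changed

def change_tag(old_text: str, new_text: str) -> str:
    kind, words = classify_update(old_text, new_text)
    # Hypothetical extension element a feed generator could emit.
    return f'<ext:change type="{kind}" wordsChanged="{words}"/>'

print(change_tag("The quick brown fox.", "The quick brown fix."))
# -> <ext:change type="minor" wordsChanged="1"/>
```

The author could still override the guess, and feed readers would filter on the type attribute, the word count, or both.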

Wikis have had the idea of a minor-change checkbox for a while; it’s time for blogs to have it too.

Of course, perhaps better would be a specific type of update or new post that preserves thread structure, so that a post with an update is a child of its parent. That way it is seen with the parent by those who have not yet seen the parent, but as an update on its own for those who did see it. For those who skipped the parent (if we know they skipped), the update also need not be shown.

RSS aggregator to pull threads from multiple intertwined blogs

It’s common in the blogosphere for bloggers to comment on the posts of other bloggers. Sometimes blogs show trackbacks to let you see those comments with a posting. (I turned this off due to trackback spam.) In some cases we effectively get a thread, as might appear in a message board/email/USENET, but the individual components of the thread are all on the individual blogs.

So now we need an RSS aggregator to rebuild these posts into a thread one can see and navigate. It’s a little more complex than threading in USENET, because messages can have more than one parent (ie. link to more than one post) and may not link directly at all. In addition, timestamps only give partial clues as to position in a thread since many people read from aggregators and may not have read a message that was posted an hour ago in their “thread.”
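A rough sketch of the kind of threading an aggregator could attempt, assuming it already has each subscribed post’s permalink and the links it makes to other posts (the sample data is made up):

```python
# Minimal sketch: group subscribed posts into cross-blog threads by treating
# each outbound link to another subscribed post as a parent edge. Post data
# (permalink, links) is assumed to come from the aggregator's feed database.
from collections import defaultdict

posts = {
    "http://blog-a.example/why-rss-needs-threads": {
        "title": "Why RSS needs threads", "links": []},
    "http://blog-b.example/response-to-a": {
        "title": "A response", "links": ["http://blog-a.example/why-rss-needs-threads"]},
    "http://blog-c.example/me-too": {
        "title": "Me too", "links": ["http://blog-b.example/response-to-a",
                                     "http://blog-a.example/why-rss-needs-threads"]},
}

children = defaultdict(list)
roots = []
for url, post in posts.items():
    parents = [p for p in post["links"] if p in posts]  # only subscribed feeds count
    if parents:
        for p in parents:                               # a post may have several parents
            children[p].append(url)
    else:
        roots.append(url)

def show(url, depth=0, seen=None):
    seen = seen if seen is not None else set()
    if url in seen:                                     # don't show a post twice
        return                                          # (multiple parents or cycles)
    seen.add(url)
    print("  " * depth + posts[url]["title"])
    for child in children[url]:
        show(child, depth + 1, seen)

for root in roots:
    show(root)
```

Timestamps and read/unread state would refine the ordering, but even this link-only grouping is enough to show “there’s a thread on this post.”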

At a minimum, existing aggregators (like bloglines) could spot sub-threads existing entirely among your subscribed feeds, and present those postings to you. You could also define feeds which are unsubscribed but which you wish to see or be informed of postings from in the event of a thread. (Or you might have a block-list of feeds you don’t want to see contributions from.) They could just have a little link saying, “There’s a thread including posts from other blogs on this message” which you could expand, and that would mark those items as read when you came to the other blog.

Blog search tools, like Technorati, could also spot these threads, and present a typical thread interface for perusing them. Both readers and bloggers would be interested in knowing how deep the threads go.

Better handling of reading news/blogs after being away

I’m back from Burning Man (and Worldcon), and though we had a decently successful internet connection there this time, you don’t want to spend time at Burning Man reading the web. This presents an instance of one of the oldest problems in the “serial” part of the online world: how do you deal with the huge backlog of stuff to read from tools that expect you to read regularly?

You get backlogs of your E-mail, of course, and your mailing lists. You get them for mainstream news, for blogs, for your newsgroups and other things. I’ve faced this problem for almost 25 years as the net gave me more and more things I read on a very regular basis.

When I was running ClariNet, my long-term goal list always included a system that would attempt to judge the importance of a story as well as its topic areas. I had two goals in mind for this. First, you could tune how much news you wanted about a particular topic in ordinary reading. By setting how important each topic was to you, a dot-product of your own priorities and the importance ratings of the stories would bring to the top the news most important to you. Secondly, the system would know how long it had been since you last read news, and could dial down the volume to show you only the most important items from the time you were away. News could also simply be presented in an importance order and you could read until you got bored.
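A toy sketch of that idea, with made-up topic weights and stories, showing the dot-product scoring and a crude way to dial down volume based on time away:

```python
# Toy sketch of the priority idea: score = dot product of the reader's topic
# weights and the story's per-topic importance, then keep only the top items
# based on how long the reader has been away. Weights and stories are made up.
my_interests = {"tech": 0.9, "politics": 0.4, "sports": 0.1}

stories = [
    {"title": "New browser released", "importance": {"tech": 0.8}},
    {"title": "Election results",     "importance": {"politics": 0.9, "tech": 0.1}},
    {"title": "Local team wins",      "importance": {"sports": 0.7}},
]

def score(story):
    return sum(my_interests.get(topic, 0.0) * weight
               for topic, weight in story["importance"].items())

def digest(stories, days_away):
    ranked = sorted(stories, key=score, reverse=True)
    keep = max(1, len(ranked) // max(1, days_away))   # crude volume dial
    return ranked[:keep]

for s in digest(stories, days_away=3):
    print(round(score(s), 2), s["title"])
```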

There are options to do this for non-news, where professional editors would rank stories. One advantage you get when items (be they blog posts or news) get old is you have the chance to gather data on reading habits. You can tell which stories are most clicked on (though not as easily with full RSS feeds) and also which items get the most comments. Asking users to rate items is usually not very productive. Some of these techniques (like using web bugs to track readership) could be privacy invading, but they could be done through random sampling.

I propose, however, that one way or another popular, high-volume sites will need to find some way to prioritize their items for people who have been away a long time and regularly update these figures in their RSS feed or other database, so that readers can have something to do when they notice there are hundreds or even thousands of stories to read. This can include sorting using such data, or in the absence of it, just switching to headlines.

It’s also possible for an independent service to help here. Several toolbars, such as Alexa’s and Google’s, already track net ratings and measure net traffic to help identify the most popular sites and pages on the web. They could adapt this information to give you a handle on the most important items you missed while away for a long period.

For E-mail, there is less hope. There have been efforts to prioritize non-list e-mail, mostly around spam, but people are afraid any real mail actually sent to them has to be read, even if there are 1,000 of them as there can be after two weeks away.

Anti-Phishing -- warn if I send a password somewhere I've never sent it

There are many proposals out there for tools to stop Phishing. Web sites that display a custom photo you provide. “Pet names” given to web sites so you can confirm you’re where you were before.

I think we have a good chunk of one anti-phishing technique already in place with the browser password vaults. Now I don’t store my most important passwords (bank, etc.) in my password vault, but I do store most medium importance ones there (accounts at various billing entities etc.) I just use a simple common password for web boards, blogs and other places where the damage from compromise is nil to minimal.

So when I go to such a site, I expect the password vault to fill in the password. If it doesn’t, that’s a big warning flag for me. And so I can’t easily be phished for those sites. Even skilled people can be fooled by clever phishes. For example, a test phish to bankofthevvest.com (two “v”s instead of a w, which looks identical in many fonts) fooled even skilled users who check the SSL lock icon, etc.

The browser should store passwords in the vault, and even the “don’t store this” passwords should have a hash stored in the vault unless I really want to turn that off. Then, the browser should detect if I ever type a string into any box which matches the hash of one of my passwords. If my password for bankofthewest is “secretword” and I use it on bankofthewest.com, no problem. “secretword” isn’t stored in my password vault, but the hash of it is. If I ever type in “secretword” to any other site at all, I should get an alert. If it really is another site of the bank, I will examine that and confirm to send the password. Hopefully I’ll do a good job of examining — it’s still possible I’ll be fooled by bankofthevvest.com, but other tricks won’t fool me.
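A small sketch of the hash-check idea, assuming the vault keeps only salted hashes along with the site each password belongs to (the storage and hashing details here are illustrative, not a spec for any real browser):

```python
# Sketch of the hash-check idea: keep only salted hashes of passwords along
# with the site they belong to, and warn if a password is typed into any
# other site. Storage and hashing details are illustrative, not a spec.
import hashlib, os

vault = []   # list of {"site": ..., "salt": ..., "digest": ...}

def remember(site: str, password: str):
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    vault.append({"site": site, "salt": salt, "digest": digest})

def check_submission(site: str, typed: str) -> bool:
    """Return True if it looks safe to send, False if the user should be warned."""
    for entry in vault:
        if hashlib.sha256(entry["salt"] + typed.encode()).hexdigest() == entry["digest"]:
            if entry["site"] != site:
                print(f"Warning: this looks like your {entry['site']} password!")
                return False
    return True

remember("bankofthewest.com", "secretword")
check_submission("bankofthewest.com", "secretword")    # fine, no warning
check_submission("bankofthevvest.com", "secretword")   # lookalike site -> warning
```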

The key needs in any system like this are that it warns you of a phish, and that it rarely gives you a false warning. The latter is hard to do, but this comes decently close. However, since I suspect most people are like me and have a common password we use again and again at “who-cares” sites, we don’t want to be warned all the time. The second time we use that password, we’ll get a warning, and we need a box to say, “Don’t warn me about re-use of this password.”

Read on for subtleties…

No, senator Stevens was misquoted...

Everybody in the blogosphere has heard something about Alaska’s Ted Stevens calling the internet a series of tubes.

They just heard him wrong. His porn filters got turned off and he discovered the internet was a series of pubes.

(And, BTW, I think we’ve been unfair to Stevens. While it wasn’t high traffic that delayed his E-mail — “an internet” — a few days, his description wasn’t really that bad… for a senator.)

Judge allows EFF's AT&T lawsuit to go forward

Big news today. Judge Walker has denied the motions — particularly the one by the federal government — to dismiss our case against AT&T for cooperating with the NSA on warrantless surveillance of phone traffic and records.

The federal government, including the heads of the major spy agencies, had filed a brief demanding the case be dismissed on “state secrets” grounds. This common law doctrine, which is often frighteningly successful, allows cases to be dismissed, even if they are of great merit, if following through would reveal state secrets.

Here is our brief note which has a link to the decision.

This is a great step. Further application of the state secrets rule would have made legal oversight of surveillance by spy agencies moot. We can write all the laws we want governing how spies may operate, and how surveillance is to be regulated, but if nobody can sue over violations of those laws, what purpose do they really have? Very little.

Now our allegations can be tested in court.

On the refutation of Metcalfe's law

Recently IEEE Spectrum published a paper on a refutation of Metcalfe’s law — an observation (not really a law) by Bob Metcalfe — that the “value” of a network increased with the square of the number of people/nodes on it. I was asked to be a referee for this paper, and while they addressed some of my comments, I don’t think they addressed the principal one, so I am posting my comments here now.

My main contention is that in many cases the value of a network actually starts declining after a while and becomes inversely proportional to the number of people on it. That’s because noise (such as flamage and spam) and unmanageable signal (too many serious messages) rises with the size and eventually reaches a level where the cost of dealing with it surpasses the value of the network. I’m thinking mailing lists in particular here.
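One toy model consistent with the peak-then-decline part of this argument (my own illustration, not a formula from the paper): connections still contribute a roughly n-squared benefit, but noise-handling cost grows faster, so total value peaks and then falls.

```latex
% Toy model (illustrative only): benefit grows like n^2, but the cost of
% dealing with noise and unmanageable signal grows faster, so V(n) peaks.
\[
  V(n) \;=\; \underbrace{b\,n^2}_{\text{Metcalfe-style benefit}}
        \;-\; \underbrace{c\,n^{\alpha}}_{\text{noise/overload cost},\ \alpha > 2},
  \qquad
  \frac{dV}{dn} = 0 \;\Rightarrow\; n^{*} = \Big(\frac{2b}{\alpha c}\Big)^{\frac{1}{\alpha-2}} .
\]
```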

You can read my referee’s comments on Metcalfe’s law, though note that these comments were written on the original article, before some corrections were made.

How only Google can pull off pay-to-perform ads

Bruce Schneier today compliments Google on trying out pay-to-perform ads as a means around click-fraud, but worries that this is risky because you become a partner with the advertiser. If their product doesn’t sell, you don’t make money.

And that’s a reasonable fear for any small site accepting pay-to-perform ads. If the product isn’t very good, you aren’t going to get a cut of much. Many affiliate programs really perform poorly for the site, though a few rare ones do well.

However, Google has a way around this. While the first step on Google’s path to success was to make a search engine that gave better results, how they did advertising was just as important. At a time when everybody was desperate for web advertising, and sites were willing to accept annoying flash animations, pop-ups and pop-unders and even adware, Google introduced ads that were purely text. In addition, they had the audacity, it seemed, to insist that advertisers bidding pay-per-click provide popular ads people would actually click through. If people are not clicking on your ad, Google stops running it. They even do this if there are no other ads to place on the page. They had the guts to say, “We’ll sell pay per click, but if your ad isn’t good, we won’t run it.” Nobody was turning down business then, and few are now.

Sites of course don’t want to be paid per click, or a cut of sales. They want a CPM, and that’s about all they want, as long as the ads are otherwise a good match for the site. Per-click costs and percentages are just a means to figuring out a CPM. Advertisers don’t want to pay CPMs, they want to pay for results, like clicks or sales.
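The bookkeeping that connects the two views is simple; the numbers below are purely illustrative:

```latex
% Effective CPM (revenue per thousand impressions) under each pricing model:
\[
  \mathrm{eCPM}_{\text{per-click}} = \mathrm{CTR} \times \mathrm{CPC} \times 1000,
  \qquad
  \mathrm{eCPM}_{\text{per-sale}} = \mathrm{CTR} \times \mathrm{CVR} \times \mathrm{payout} \times 1000 .
\]
% Example: a 1% CTR at \$0.50 per click gives a \$5 eCPM; a 1% CTR with a
% 2% conversion rate and a \$20 payout per sale gives a \$4 eCPM.
```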

Google found a great way to combine the two. They offered pay per click, but they insisted that the clicks generate enough CPM to keep them happy.

The same will apply here. They will offer pay for performance, but those ads will be competing with bidders who are bidding pay-per-click. Google will run, as it always has, the type of ad that gets the highest results. If you bid pay per performance, and the PPCs are bidding higher, your ad won’t run. And even if there are no higher PPCs, if your ad isn’t working and converting into sales and generating revenue for Google, I suspect they will just not run it. They can afford to do this; they are Google.

And so they will get the best of both worlds again. Advertisers who can come up with products that sell through ads will pay for actual sales, and love how they can calculate how well the ads do for them. Google will continue to get good CPMs, which is what they care about, and what Adsense partners (including myself) care about. And they will have eliminated click-fraud, at least on these types of ads. Once again they stay on top.

(Disclaimer: I am a consultant to Google, and am in their Adsense program. If you aren’t in it, there is a link in the right-hand bar you can use to join that program. I get a pay for performance credit if you do. Unlike Google’s PPC ads, where Adsense members are forbidden by contract from encouraging people to click on the ads, there is no need for such strictures against pay for performance ads; in fact there’s every reason to encourage it.)

PayPal should partner with UPS and other shippers

You’ve seen me write before of a proposal I call addresscrow to promote privacy when items are shipped to you. Today I’ll propose something more modest, with non-privacy applications.

I would like PayPal, and other payment systems (Visa/MC/Google Checkout) to partner with the shipping companies such as UPS that ship the products bought with these payment systems.

They would produce a very primitive escrow, so that payment to the seller was transferred upon delivery confirmation by the shipper. If there is no delivery, the money is not transferred, and is eventually refunded. When you sign for the package (or if you have delivery without signature, when it’s dropped off), that’s when the money would be paid to the vendor. You, on the other hand, would pay the money immediately, and the seller would be notified you had paid and the money was waiting pending receipt. The payment company would get to hold the money for a few days, and make some money on the float, if desired, to pay for this service.
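A sketch of that flow as a little state machine; the shipper callback and the payment actions are hypothetical placeholders, since real carrier and payment APIs differ:

```python
# Sketch of the hold-until-delivery flow. The shipper callback and payment
# calls are hypothetical placeholders; real carrier and payment APIs differ.
from enum import Enum, auto

class EscrowState(Enum):
    AWAITING_SHIPMENT = auto()
    IN_TRANSIT = auto()
    RELEASED = auto()      # delivery confirmed, money paid to seller
    REFUNDED = auto()      # never delivered, buyer refunded

class Escrow:
    def __init__(self, order_id, amount):
        self.order_id, self.amount = order_id, amount
        self.state = EscrowState.AWAITING_SHIPMENT   # buyer has already paid

    def on_shipped(self, tracking_number):
        self.state = EscrowState.IN_TRANSIT
        self.tracking = tracking_number

    def on_delivery_confirmed(self):                 # called when the shipper reports delivery
        self.state = EscrowState.RELEASED
        print(f"Release ${self.amount} to seller for order {self.order_id}")

    def on_timeout(self, days_waiting, limit=30):
        if self.state is not EscrowState.RELEASED and days_waiting > limit:
            self.state = EscrowState.REFUNDED
            print(f"Refund ${self.amount} to buyer for order {self.order_id}")

e = Escrow("order-123", 49.95)
e.on_shipped("1Z999AA10123456784")
e.on_delivery_confirmed()
```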

Of course, sellers could ship you a lump of coal and you would still pay for it by signing for it. However, this is a somewhat more overt fraud that, like all fraud, must be dealt with in other ways. This system would instead help eliminate delays in shipping, since vendors would be highly motivated to get things shipped and delivered, and it would eliminate any communications problems standing in the way of getting the order processed. There is nothing much in it for the vendor, of course, other than a means to make customers feel more comfortable about paying up front. But making customers feel more comfortable is no small thing.

Extended, the data from this could go into reputation systems like eBay’s feedback, so that it could report for buyers how promptly they paid, and for sellers how promptly they shipped or delivered. (The database would know both when an item was shipped and when it was received.) eBay has resisted the very obvious idea of having feedback show successful PayPal payment, so I doubt they will rush to do this either.

EBay: Sniping good or bad or just a change of balance?

Ebayers are familiar with what is called bid “sniping.” That’s placing your one, real bid, just a few seconds before auction close. People sometimes do it manually, more often they use auto-bidding software which performs the function. If you know your true max value, it makes sense.

However, it generates a lot of controversy and anger. This is for two reasons. First, there are many people on eBay who like to play the auction as a game over time, bidding, being outbid and rebidding. They either don’t want to enter a true-max bid, or can’t figure out what that value really is. They are often outbid by a sniper, and feel very frustrated because, given the time, they feel they would have bid higher and taken the auction.

This feeling is vastly strengthened by the way eBay treats bids. The actual buyer pays not the price they entered, but the price entered by the 2nd place bidder, plus an increment. This makes the 2nd place buyer think she lost the auction by just the increment, but in fact that’s rarely likely to be true. But it still generates great frustration.
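A tiny worked example of the proxy-bidding rule (amounts and increment made up): if the sniper’s maximum is $100 and the runner-up’s is $80 with a $1 increment, the sniper pays $81, not $100.

```python
# Worked example of eBay-style proxy bidding: the winner pays the
# runner-up's maximum plus one bid increment, not their own maximum.
def closing_price(bids, increment=1.00, start=0.99):
    ordered = sorted(bids, reverse=True)
    if len(ordered) < 2:
        return start
    return min(ordered[0], ordered[1] + increment)

print(closing_price([100.00, 80.00]))   # sniper's max 100, runner-up 80 -> pays 81.0
```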

The only important question about bid sniping is, does it benefit the buyers who use it? If it lets them take an auction at a lower price, because a non-sniper doesn’t get in the high bid they were actually willing to make, then indeed it benefits the buyer, and the seller (and, interestingly, eBay) makes slightly less.

There are many ways to write the rules of an auction. They all tend to benefit either the buyer or the seller by some factor. A few have benefits for both, and a few benefit only the auction house. Most are a mix. In most auction houses, like eBay, the auction house takes a cut of the sale, and so anything that makes sellers get higher prices makes more money on such auctions for the auction house.

Read on…

Travel laptop for couples

We often travel as a couple, and of course both have the same e-mail and web addictions that all of you probably have. Indeed, these days if you don’t get to your e-mail and other stuff for a long period, it becomes unmanageable when you return. For this reason, we bring at least one, and often two laptops on trips.

When we bring one, it becomes a time-waster. Frankly, our goal is to spend as little time in our hotel room on the net as possible, but it’s still very useful not just for e-mail but also travel bookings and research, where to eat etc. When we have only one computer — or when we have two but the hotel only provides a connection for one — it means we have to spend much more time in the hotel room.

It would be nice to see a laptop adapted for couples’ use. In many cases, this could be just a little software. Many laptops can already go “dual head,” putting out a different screen on their VGA connector than goes to the built-in panel. So a USB keyboard and a super-thin laptop-sized flat panel would be all you need, along with power for the panel. In the future, as more and more hotel rooms adopt HDTVs, one could use that instead of the display.

Of course, desktop flat panels are bigger than laptop panels, so this would need to be a modified version of the same panels put into laptops, which are readily available. A special connector for it, with power, would make this even better. The goal is something not much larger than a clipboard and mini-keyboard. It could even be put in an ultrathin laptop case (with no motherboard, drives or even battery).

Now, as to software. In Linux, having two users on two screens is already pretty easy. It’s just a bit of configuration. I would hope the BSD-based Mac is the same. Windows is more trouble, since it really doesn’t have as much of a concept of two desktops with two users logged in. (Indeed, I have wondered why we haven’t seen a push for dual-user desktop computers, since it’s not at all uncommon to see a home office with two computers in it for two members of the family, but for which both are used together only rarely.)

On Windows, you would probably need to just have one user logged in, and both people would be that user to Windows. However, you would have different instances of Firefox/Mozilla, for example, which can use different profiles so each person has their own browser settings and bookmarks, their own e-mail settings etc. It would be harder to have both people run their own MS Word, but it might be doable.

Some variants of the idea include making a “thin client” box that plugs into the main computer via USB or even talks bluetooth to it, and has its own power supply. It might do something as simple as VNC to a virtual screen on the main box. Or of course it could plug into ethernet, but that port is often taken on the main box to talk to the hotel network if the hotel has a wired connection. (More often they have wireless now.) The thin client could also act as a hub to fix this.

If you want to bring two laptops, you can make things work by using internet connection sharing over wired or wireless ad-hoc network, though it’s much more work than it should be to set up. But my goal is to avoid the weight, size and price of a 2nd laptop, though price is not that big an issue because I am presuming one has other uses for it.

IMAP server should tell you your SMTP parameters

When you set up a mail client, you have to configure mail reading servers (either IMAP or POP) and also a mail sending server (SMTP). In the old days you could just configure one SMTP server, with no userid or password. Due to spam-blocking, roaming computers have it hard, and either must change SMTP servers as they roam, or use one that has some sort of authentication scheme that opens it up to you and not everybody.

Worse, many ISPs now block outgoing SMTP traffic, insisting you use their SMTP server (usually without a password.) Sometimes your home site has to run an SMTP server at a non-standard port to get you past this.

I propose that IMAP (and possibly POP) include an extension so that the IMAP server can offer your client information on how to send mail. At the very least, it simplifies configuration for users, who now only have to provide one server identity. From there the system configures itself. (Of course, the other way to do this is to identify such servers in DHCP.)

This also simplifies the situation where you want to use a different SMTP server based on which mail account you are working on, something DHCP can't handle.

The IMAP server would offer a list of means to send mail. These could include a port number, and a protocol, which could be plain SMTP, or SMTP over SSL or TLS, or even some new protocol down the road. And it could also offer authentication, because you have already authenticated to the IMAP server with your userid and password. It could tell you a permanent userid and password you can use with the SMTP server, or it could tell you that you don't need one (because your IP address has been enabled for the duration of your IMAP session in the IMAP-before-SMTP approach.) It could also offer a temporary authentication token, which is good only for that session or some period of time after it. Ideally we would have IMAP over SSL/TLS, and so these passwords and tokens would not be sent in the clear.
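Here's a sketch of what a client could do once it has those parameters; the submission dict below stands in for data advertised by the hypothetical IMAP extension, and only the standard SMTP usage is real:

```python
# Sketch: what a mail client could do once the IMAP server has advertised
# submission parameters. The "submission" dict stands in for data from a
# hypothetical IMAP extension; only the smtplib usage below is real.
import smtplib
from email.message import EmailMessage

submission = {                      # as it might be advertised by the IMAP server
    "host": "smtp.example.com",
    "port": 587,
    "starttls": True,
    "username": "user@example.com",
    "password": "temporary-token-issued-during-the-imap-session",
}

msg = EmailMessage()
msg["From"], msg["To"], msg["Subject"] = "user@example.com", "friend@example.org", "Hi"
msg.set_content("Configured automatically from the IMAP session.")

with smtplib.SMTP(submission["host"], submission["port"]) as smtp:
    if submission["starttls"]:
        smtp.starttls()
    smtp.login(submission["username"], submission["password"])
    smtp.send_message(msg)
```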

With a list of possible methods, the client could choose the best one. Or, of course, it could choose one that was programmed in by a user who did custom-configure their own SMTP information.

It's also worth noting that it would be possible, down the road, to use the very same IMAP port for a slightly modified SMTP session to an IMAP server set up to handle this. This could handle firewalls that block all but that port. However, the main benefit is to the user with simpler configuration.

Web sites -- stop being clever about some structured data

A lot of the time, on web forms, you will see some sort of structured field, like an IP address, or credit card number, or account number, broken up into a series of field boxes. You see this in program GUIs as well.

On the surface it makes sense. Never throw away structure information. If you’re parsing a human name, it may be impossible to parse it as well from a plain string compared to a set of boxes for first, last and middle names.

But this does not make sense if the string can always be reliably parsed, as is the case for IP addresses and account numbers and WEP keys and the rest. Using multiple boxes just means users can’t cut and paste. And it’s also hard to type unless you are ready to hit TAB at a point your mind wants to type something else. Some sites use javascript to auto-forward you to the next box when you’ve entered enough in one box, but it’s never perfect and usually doesn’t do backspace well.
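For the cases where parsing really is reliable, a single box plus a little normalization is all it takes; a sketch:

```python
# Sketch: accept one pasted string and normalize it, instead of forcing the
# user to tab through several small boxes.
import ipaddress
import re

def parse_ip(text: str) -> str:
    return str(ipaddress.ip_address(text.strip()))      # raises ValueError if invalid

def parse_card_number(text: str) -> str:
    digits = re.sub(r"[\s-]", "", text)                 # strip spaces and dashes
    if not digits.isdigit() or not 12 <= len(digits) <= 19:
        raise ValueError("not a card number")
    return digits

print(parse_ip("  192.168.1.20 "))                      # "192.168.1.20"
print(parse_card_number("4111 1111-1111 1111"))         # "4111111111111111"
```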

Think about it. The multi box idea, expressed to extremes would have every form enter an e-mail address with a username box and a domain name box, with an @ printed between them. This would stop you from entering e-mail addresses without at signs. But fortunately nobody does it. We can always parse an E-mail and we don’t want to subject people to the pains of typing it in a strange way.

Now I have to admit I’ve been tempted sometimes on international phone numbers, because parsing them is hard. The number of digits in the various components, be they area codes or exchanges, varies from region to region and I am not sure anybody has written a perfect parser. But nor do people want to enter phone numbers with tabs. And they want to cut and paste. Remember this when designing your next web form.

Sudden web traffic not so great with Adsense

As I’ve written before, Google’s Adsense program is for many people bringing about the dream of having a profitable web publication. I have a link on the right of the blog for those who want to try it. I’ve been particularly impressed with the CPMs this blog earns, which can be as much as $15. The blog has about 1000 pageviews/day (I don’t post every day) and doesn’t make enough to be a big difference, but a not impossible 20-fold increase could provide a living wage for blogging. Yahoo Publisher’s blog ads, which some of you are seeing in the RSS feed, have been a miserable failure and will be removed next software upgrade. They are poorly targeted and have earned me, literally, not even a dollar.

Recently however I noticed a way in which the Google targeting engine is too good, from my standpoint. From time to time my web sites or blog will get linked from a very high traffic site. This week the 4th amendment shipping tape was a popular stumble-upon, for example. I’ve also been featured from time to time in Slashdot, boingboing and various other popular sites.

When this happens, it’s not a money maker because the click-throughs and CPMs drop way down. This is not too surprising. The people following a quick link are less likely to be looking for the products Google picks to advertise. However, more recently I saw high traffic bringing down not just the CPM, but even the total dollars! I theorize that Google, seeing poor clickthrough, cycles out the normally lucrative ads to try others. So even the normal visitors, who have not gone away, are seeing more poorly chosen ads. Or it could just be randomness that I’m seeing a pattern in.

Solution: Consider the referer when placing ads. If the clickthrough is poor on a given referer (like slashdot or boingboing) then play with the ads to hunt for better clickthrough. For the more regular referers (which are typically internal, the result of searches and regular readers) stick to the ads that typically perform well with that group.

Give us TVoIP, not IPTV

A buzzword in the cable/ILEC world is IPTV, a plan to deliver TV over IP. Microsoft and several other companies have built IPTV offerings, to give phone and cable companies what they like to call a “triple play” (voice, video and data) and be the one-stop communications company.

IPTV offerings have you remotely control an engine at the central office of your broadband provider which generates a TV stream which is fed to your TV set. Like having the super set-top box back at the cable office instead of in your house. Of course it requires enough dedicated bandwidth to deliver good quality TV video. That’s 1.5 to 2 megabits for regular TV, 5 to 10 for HDTV with MP4.

Many of the offerings look slick. Some are a basic “network PVR” (try to look like a Tivo that’s outsourced) and Microsoft’s includes the ability to do things you can’t do at your own house, like tune 20 channels at once and have them all be live in small boxes.

I’m at the pulver.com Von conference where people are pushing this, notably the BellSouth exec who just spoke.

But they’ve got it wrong. We don’t need IPTV. We want TVoIP or perhaps more accurately Vid-o-IP. That’s a box at your house that plays video, and uses the internet to suck it down. It may also tune and record regular TV signals (like MythTV or Windows Media Center.)

Now it turns out that’s more expensive. You have to have a box, and a hard drive and a powerful processor. The IPTV approach puts all that equipment at the central office where it’s shared, and gets economies of scale. How can that not be the winner?

Well for one, TVoIP doesn’t require quality bandwidth. You can even use it with less bandwidth than a live stream takes. That’s because after people get TVoIP/PVR, they don’t feel inclined to surf. IPTV is still too much in the “watch live TV” world with surfing. TVoIP is in the poor-man’s video on demand world (like NetFlix and Tivo) where you pick what you might want to see in advance, and later go to the TV to pick something from the list of what’s shown up. Turns out that’s 95% as good as Video on Demand, but much cheaper.

But more importantly, it’s under your control. Time and time again, the public has picked a clunkier, more expensive, harder to maintain box that’s under their own control over a slick, cheap service that is under the control of some bureaucracy. PCs over mainframes. PCs over Network Computers and Timesharing and SunRays. Sometimes it’s hard to explain why they did this for economic reasons, or even for quality reasons.

They did it because of choice. The box in your own house is, ideally, a platform you own. One that you can add new things to because you want them, and 3rd party vendors can add things to because you demand them. Central control means central choice of what innovations are important. And that never works. Even when it’s cheaper.

If the set top box were to remain a set top box, a box you can’t control, then IPTV would make good sense. But we don’t want it to be that. It’s now time to make it more, and companies are starting to offer products to make it more. We want a platform. Few people want to program it themselves, but we all want great small companies innovating and coming up with the next new thing. Which TVoIP can give us and IPTV won’t. Of course, there are locked TVoIP boxes, like the Akimbo and others, but they won’t win. Indeed, some efforts, like the trusted computing one, seek to make the home box locked, instead of an open platform, when it comes to playing media (and thus locking linux out of the game.) A truly open platform would see the most innovation for the user.

Disclaimer: I am involved with BitTorrent, which makes the most popular software used for downloading video over the internet.

Browsers: Time to have a default margin

In most browsers, the default style presents text flush against all sides of the browser window, with no margin. This is a throwback to the early days of screen design, when screen real estate was considered so valuable that deliberately wasting it with whitespace was sacrilege.

Of course, in centuries of design on paper, nobody ever put text right up to the edges. Everybody knows it’s ugly and not what the eye wants. Thus, when people see a web page using the default style, which I end up with myself out of laziness, they react to it as ugly.

Screens are now big enough that it’s time to change the default style to one that is easier to read. And that means margins. If a page designer wants to put stuff up against the edges, they can easily define their own stylesheets to do this, so let them. I doubt they would ever put text there, though they might put graphics or their own custom margins. If text to the edges is a choice nobody would make when given the option, it sure seems like a silly default to have. It won’t break anything; you can just make the window wider, or make it a user option (which I believe it is in some browsers, but rarely set).

And then more people could use the default for quick pages without having to think about style every time they spit out a web page.

How web sites can do a much smarter 'pledge drive'

There is buzz about how Jason Kottke, of kottke.org, has abandoned his experiment of micropayment donations to support his full-time blogging. He pulled in $40,000 in the year, almost all of it during his 3-week pledge drive, but that's hardly enough. Now I think he should try adsense, though I doubt he hasn't heard that suggestion before.

However, PBS/NPR are able to get a large part of their budgets through pledge drives, so it's possible to make this happen. I think we should be able to do it better on the web.

For example, on PBS/NPR, when they start the pledge drive, they get into a pretty boring endless repeat of the basic message. They tell you that if they reach the goal, they can end the pledge drive early. But this rarely happens, and even when it does, if you pledge early, it doesn't stop the begging.

On the web it could. You could do a pledge drive here where, after a person donates, the drive is over for them. This is not the same as sites that simply charge a subscription fee to get past the ads (such as Salon and Slashdot). This would be an organized pledge drive which is over for everybody after a set period, but over even sooner for those who donate. (There's a touch of work to do for people who use multiple machines, of course.)

Indeed you could even have a "turn off pledge drive I'm never going to give" button for the freeloaders as an experiment. Or it might turn it down a notch. Hard to say if this would work. Of course, people could also write filters for web begging if you make the drives too long. Of course, the drive could even be started at an individual time for the less frequent visitors, though that punishes those who disable cookies or switch machines.
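A sketch of the per-reader logic, keyed off cookies; the cookie names and drive window are invented, and any web framework could set and read them:

```python
# Sketch of per-reader pledge-drive state, keyed off cookies. Cookie names
# and the drive window are invented for illustration.
from datetime import date

DRIVE_START, DRIVE_END = date(2006, 11, 1), date(2006, 11, 21)

def show_pledge_banner(cookies: dict, today=None) -> bool:
    today = today or date.today()
    if not (DRIVE_START <= today <= DRIVE_END):
        return False                      # drive is over for everybody
    if cookies.get("donated") == "yes":
        return False                      # donors stop seeing the drive immediately
    if cookies.get("never_giving") == "yes":
        return False                      # the freeloader opt-out button
    return True

print(show_pledge_banner({}, date(2006, 11, 5)))                  # True
print(show_pledge_banner({"donated": "yes"}, date(2006, 11, 5)))  # False
```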

Wanted: A google/yahoo/etc. ad optimizer

Yahoo is now entering the context-driven ad field to compete with Adsense, and that’s good for publishers and web authors. I have had great luck with adsense, and it provides serious money for this blog and my other web sites, which is why I have the affiliate link on the right bar encouraging you to join adsense — though I won’t mind the affiliate fee as well, of course.

But I’m trying Yahoo now, and soon MSN will enter the fray. However, it seems to me that no one network will be best for a diverse site. Each network will have different advertisers bidding up certain topic areas. In an efficient market, advertisers would quickly shift to the networks that give them the best performance (cheapest price, most qualified clicks) but in practice this won’t happen very often.

So it would make sense for somebody to build a web site optimizing engine. This engine would automate the task of switching various pages on a site between one network and another, and measuring performance. Over time it would determine which network is performing the best for each page or each section of the site and switch the pages to use the best network. It might run further tests to see how things change.
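One way such an engine might pick a network per page is a simple explore-and-exploit loop over observed eCPM; a sketch, with the earnings reporting assumed to come from each network’s own stats:

```python
# Sketch of a per-page optimizer that mostly serves the network with the best
# observed eCPM but keeps exploring. Earnings data is assumed to be fetched
# from each network's reporting elsewhere.
import random
from collections import defaultdict

class AdNetworkChooser:
    def __init__(self, networks=("google", "yahoo"), explore=0.1):
        self.networks, self.explore = networks, explore
        self.revenue = defaultdict(float)     # (page, network) -> earnings
        self.views = defaultdict(int)         # (page, network) -> impressions

    def pick(self, page: str) -> str:
        if random.random() < self.explore:
            return random.choice(self.networks)
        def ecpm(net):
            v = self.views[(page, net)]
            return self.revenue[(page, net)] / v * 1000 if v else float("inf")
        return max(self.networks, key=ecpm)   # untried networks get tried first

    def record(self, page: str, network: str, earned: float):
        self.views[(page, network)] += 1
        self.revenue[(page, network)] += earned

chooser = AdNetworkChooser()
net = chooser.pick("/copyright-myths")
chooser.record("/copyright-myths", net, 0.012)   # earnings attributed to that pageview
```

The same structure extends to time-of-day switching and referer-based segmentation, which just become extra keys alongside the page.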

Such optimizations could take place even during the day. (Yahoo doesn’t have much intraday reporting yet.) For example, Google does better in the morning than it does in the evening. I guess that this is because advertisers have set a daily budget, and more of them hit their budget as the day goes on. My CPMs usually start high and then sink in the later hours. It might make sense to switch from Google to Yahoo as the CPM drops. However, Yahoo’s advertisers will have their own budget limits so this may not help.

Another interesting optimization might be to present different ads depending on whether the user came in from the associated search engine. Theory: if the user searched for “copyright” on Google to come to my copyright myths page, the chances are they already saw a lot of copyright-related adwords ads. It might make more sense to show a different set of ads from another network. Likewise, if they came in from Yahoo, it might be best to show the Google ads. If they come in from elsewhere, use the best performing network. This would be generated live, based on the Referer field. Hard to say if the search engines would like it or not.

Experimenting with Yahoo Publisher for RSS

While I have been using Google ads on the blog for some time (and they do quite well), they don’t yet do RSS ads outside of a more limited beta program. So I’m trying Yahoo’s ads, also in beta but I’m on the list.

They just went live, and all that’s showing right now is a generic ad, presumably until they spider the site and figure out what ads to run. Ideally the ads will be as relevant as the ones Google Adsense serves.

Competition between Google and Yahoo will be good for publishers. Just on basic click-rates, one will tend to do better than the other, presumably. If one consistently does worse, it will lose its partners, who will flock to the other. The only way to fix that will be to increase the percentage of the money it pays out, until it gets to a real efficient-market percentage it can’t go above.

Read on for examination of the economics of RSS ads.

Deep bookmarks in the browser

In playing with a few firefox extensions that display things like my cellular minutes used, I realized they were performing a limited part of something that could be very useful — deep bookmarks that can go past login screens and other forms, directly to a web page.

So many web sites won’t let you bookmark a page that you must log in to see, and they time out your login session after a short time. The browser will remember my password for the login screen, but it won’t log me in and go to the page I want. Likewise, pages only available through a POST form can’t be bookmarked.

A deep bookmark would be made by going to a page, then using the BACK tool to go back to the entry page before it, which may be more than simply the previous page. You would then ask for a deep bookmark, and it would record the entire path from entry/login page to most forward page, including items posted to forms. Passwords would be recorded in the protected password database of course.
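A sketch of what a recorded deep bookmark might look like and how it would be replayed, with invented step data; the real password would of course come from the protected vault:

```python
# Sketch of a recorded "deep bookmark": an ordered list of steps from the
# entry page to the target, replayed with a session so cookies carry over.
# The step data is invented; passwords would really come from the vault.
import requests

deep_bookmark = [
    {"method": "POST", "url": "https://bank.example.com/login",
     "data": {"user": "me", "password": "<from password vault>"}},
    {"method": "GET", "url": "https://bank.example.com/accounts/balance"},
]

def replay(steps):
    session = requests.Session()          # keeps login cookies between steps
    response = None
    for step in steps:
        response = session.request(step["method"], step["url"],
                                   data=step.get("data"))
        response.raise_for_status()
    return response                       # the page the bookmark points at

# final_page = replay(deep_bookmark)      # example URLs above are not real
```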

This would work in many cases, but not always. Some deep URLs include a session ID, and that must explicitly not be recorded as the target, as the session will have expired. In a few cases the user might have to identify the session key but many are obvious. And of course in some cases the forms may change from time to time and thus not be recordable. Handling them would require a complex UI but I think they are rare.

This would allow quick bookmarks to check balances, send paypal money and more. There is some risk to this, but in truth you’ve already taken the risk with the passwords stored in the password database, and of course these bookmarks would not work unless you have entered the master decryption password for the password database some time recently.
