Submitted by brad on Sat, 2006-12-16 03:15.
I’ve spoken before about ZUI (Zero User Interface) and how often it’s the right interface.
One important system that often has too complex a UI is backup. Because of that, backups
often don’t get done, in particular offsite backups, which are the only way to deal with
fire and similar catastrophes.
Here’s a rough design for a ZUI offsite backup. The only UI at a basic level is just
installing and enabling it — and choosing a good password (that’s not quite zero UI but
it’s pretty limited.)
Once enabled, the backup system will query a central server to start looking for backup
buddies. It will be particularly interested in buddies on your same LAN (though it will
not consider them offsite.) It will also look for buddies on the same ISP or otherwise close
by, network-topology wise. For potential buddies, it will introduce the two of you and let
your systems run tests to measure the bandwidth between them.
At night, the tool would wait for your machine and network to go quiet, and likewise the
buddy’s machines. It would then do incremental backups over the network. These would
be encrypted with secure keys. Those secure keys would in turn be stored on your own
machine (in the clear) and on a central server (encrypted by your password.)
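Here is a minimal sketch of that key-escrow scheme in Python, using the third-party cryptography package; the function names and the 16-byte salt layout are my own invention, not a spec:

    import os
    import base64
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.hashes import SHA256
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def make_escrow_blob(backup_key: bytes, password: str) -> bytes:
        """Wrap the random backup key under a password-derived key,
        suitable for storage on the central server."""
        salt = os.urandom(16)
        kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=200_000)
        wrapping_key = base64.urlsafe_b64encode(kdf.derive(password.encode()))
        return salt + Fernet(wrapping_key).encrypt(backup_key)

    def recover_backup_key(blob: bytes, password: str) -> bytes:
        """After a disaster: re-derive the wrapping key from the password
        and unwrap the backup key fetched from the central server."""
        salt, token = blob[:16], blob[16:]
        kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=200_000)
        wrapping_key = base64.urlsafe_b64encode(kdf.derive(password.encode()))
        return Fernet(wrapping_key).decrypt(token)

    backup_key = Fernet.generate_key()   # kept in the clear on your own machine
    server_copy = make_escrow_blob(backup_key, "a good passphrase")
    assert recover_backup_key(server_copy, "a good passphrase") == backup_key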
The backup would be clever. It would identify files on your system which are common
around the network — i.e. files of the OS and installed software packages — and know it
doesn’t have to back them up directly; it just has to record their presence and the
fact that they exist in many places. It only has to transfer the files you created yourself.
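A sketch of that planning step in Python; the common_hashes set stands in for a query to the hypothetical network-wide index of well-known files:

    import hashlib
    from pathlib import Path

    def plan_backup(root: str, common_hashes: set[str]):
        """Split files into those to transfer and those to merely record."""
        to_transfer, to_record = [], []
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in common_hashes:
                to_record.append((str(path), digest))  # just note its presence
            else:
                to_transfer.append(str(path))          # your own file: send it
        return to_transfer, to_record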
Your backups are sent, compressed, to two or more different buddies each. Regular checks
are done to see if each buddy is still around. If a buddy leaves the net, the system quickly
finds other buddies to store data on. Alas, some files, like video, images and
music, are already compressed, so twice as much storage is needed for their backup
as the files themselves took — though only for your own generated files. You also have to
have a very big disk, three times bigger than you need, because you must store data for
the buddies just as they are storing for you. But disk is getting very cheap.
(Another alternative is RAID-5 style. There, you distribute each
file to 3 or more buddies in a RAID-5-like parity arrangement, so that any
one buddy can vanish and you can still recover the file. This means you
may be able to get away with much less excess disk space. There are also
redundant storage algorithms that let you tolerate the loss of 2 or even 3
of a larger pool of storers, at a much more modest cost than using double storage.)
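A toy illustration of the parity option, with two data blocks and one XOR parity block spread over three buddies (real erasure coding is more elaborate):

    def make_shares(data: bytes):
        """Split data into blocks a and b plus an XOR parity block,
        one share per buddy."""
        half = (len(data) + 1) // 2
        a, b = data[:half], data[half:].ljust(half, b"\0")
        parity = bytes(x ^ y for x, y in zip(a, b))
        return a, b, parity

    def rebuild_b(a: bytes, parity: bytes) -> bytes:
        """The buddy holding b vanished; XOR of the other two recovers it."""
        return bytes(x ^ y for x, y in zip(a, parity))

    a, b, p = make_shares(b"my irreplaceable photo bytes")
    assert rebuild_b(a, p) == b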
All this is, as noted, automatic. You don’t have to do anything to make it happen,
and if it’s good at spotting quiet times on the system and network, you don’t even
notice it’s happening, except that a lot more of your disk is used up storing data for
the buddies.
It is the automated nature that is so important. There have been other proposals
along these lines, such as MNET and some commercial network backup apps, but never an app you
just install, do a quick setup on, and then forget about until you need to restore a
file. Only such an app will truly get used and work for the user.
Restore of individual files (if your system is still alive) is easy. You have
the keys on file, and can pull your files from the buddies and decrypt them with
those keys.
Loss of a local disk is more work, but if you have multiple computers in
the household, the keys could be stored on other computers on the same
LAN (alas, this does require a UI step to approve) and then you can go to
another computer to get the keys to rebuild the lost disk. Indeed, using
local computers as buddies is a good idea due to speed, but they don’t
provide offsite backup. It would make sense for the system, at the cost of
more disk space, to do both same-LAN backup and offsite. Same-LAN for
hardware failures, offsite for building-burns-down failures.
In the event of a building-burns-down failure, you would have to go
to the central server, and decrypt your keys with that password. Then you can get your
keys and find your buddies and restore your files. Restore would not
be ZUI, because we need no motivation to do a restore. It is doing regular
backups that we lack motivation for.
Of course, many people have huge files on disk. This is particularly true
if you do things like record video with MythTV or make giant photographs,
as I do. These files may be too large for backup over the internet.
In this case, the right thing to do is to back up the smaller files first,
and have some UI. This UI would warn the user about the large files, and suggest
options. One option is to not back up things like recorded video. Another
is to rely only on local backup if it’s available. Finally, the system
should offer a manual backup of the large files, where you connect a
removable disk (USB disk for example) and transfer the largest files to
it. It is up to you to take that offsite on a regular basis if you can.
However, while this has a UI and physical tasks to do, if you don’t do
it, it’s not the end of the world. Indeed, your large files may get
backed up, slowly, if there’s enough bandwidth.
Submitted by brad on Wed, 2006-12-13 00:54.
Normally I’m a general-purpose computing guy. I like that the computer that runs my TV with MythTV is a general purpose computer that does far more than a Tivo ever would. My main computer is normally on and ready for me to do a thousand things.
But there is value in specialty internet appliances, especially ones that can be very low power and small. But it doesn’t make sense to have a ton of those either.
I propose a generic internet appliance box. It would be based on the same small single-board computers which run linux that you find in the typical home router and many other small network appliances. It would ideally be so useful that it would be sold in vast quantities, either in its generic form or with minor repurposings.
Here’s what would be in level 1 of the box:
- A small, single-board linux computer with a low-power processor such as an ARM
- Similar RAM and flash to today’s small boxes, enough to run a modest linux.
- WiFi radio, usually to be a client — but presumably adaptable to make access points (in which case you need ethernet ports, so perhaps not.)
- USB port
- Infrared port for remote control or IR keyboard (optionally a USB add-on)
Optional features would include:
- Audio output with low-fi speaker
- Small LCD panel
- DVI output for flat panel display
- 3 or 4 buttons arranged next to the LCD panel
The USB port on the basic unit provides a handy way to configure the box. On a full PC, write a thumb drive with the needed configuration (in particular WiFi encryption keys) and then move the thumb drive to the unit. Thumb drives can also provide a complete filesystem or software, or can contain photo slide shows in the version with the video output. Thumb drives could in fact contain entire applications, so you insert one and it copies the app to the box’s flash to give it a personality.
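As a sketch of how that configuration step might work (the file name appliance.json and its keys are hypothetical):

    import json
    import shutil
    from pathlib import Path

    DRIVE = Path("/mnt/usb")        # where the thumb drive mounts
    FLASH = Path("/opt/appliance")  # the box's internal flash

    def apply_config():
        """Read settings from the thumb drive and install any 'personality'."""
        cfg = json.loads((DRIVE / "appliance.json").read_text())
        (FLASH / "wifi.conf").write_text(
            f"ssid={cfg['ssid']}\nwpa_key={cfg['wpa_key']}\n")
        app = cfg.get("app")        # optional application directory to copy
        if app and (DRIVE / app).is_dir():
            shutil.copytree(DRIVE / app, FLASH / app, dirs_exist_ok=True)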
Here are some useful applications:
- In many towns, you can see over the internet when a bus or train will arrive at your stop. Program the appliance with your stop and how long it takes to walk there after a warning. Press a button when you want to leave, and the box announces a countdown over the speaker so you reach the stop just as the transit arrives.
- Email notifier
- MP3 output to stereo or digital speakers
- File server (USB connect to external drives — may require full ethernet.)
- VOIP phone system speakerphone/ringer/announcer
- Printer server for USB printers
- Household controller interface (X10, thermostat control, etc.)
Slap one on the back of a cheap flat panel display mounted on the wall, connected with a video cable. Now offer a vast array of applications such as:
- Slide show
- Security video (low-res unless there is an mpeg decoder in the box.)
- Weather/News/Traffic updates
- With an infrared keyboard, be a complete terminal to other computer apps and a minimal web browser.
There are many more applications people can dream up. The idea is that one cheap box can do all these things, and since it could be made in serious quantities, it could end up cheaper than the slightly more specialized boxes, which themselves retail for well under $50 today. Indeed today’s USB printer servers turn out to be pretty close to this box.
The goal is to get these out and let people dream up the applications.
Submitted by brad on Sun, 2006-11-05 23:00.
I'm in Edmonton. Turns out to be the farthest north I've been on land (53 degrees 37 minutes at the peak) after another turn through the Icefields Parkway, surely one of the most scenic drives on the planet. My 4th time along it, though this time it was a whiteout. Speaking tomorrow at the CIPS ICE conference on privacy, nanotechnology and the future at 10:15.
Idea of the day. I joined Fairmont Hotels President's Club while at the Chateau Lake Louise because it gave me free internet. When I got to the Fairmont Jasper Lodge my laptop just worked with no login, and I was really impressed — I figured they had recorded my MAC address as belonging to a member of their club, and were going to let me use it with no login. Alas, no, the Jasper lodge internet (only in the main lobby) was free for all. But wouldn't it be great if all hotels did that? Do any of the paid wireless roaming networks do this? (I guess they might be afraid of MAC cloning.) It would also allow, with a simple interface, a way for devices like Wifi SIP phones to use networks that otherwise require a login.
Of course, as we all know, the more expensive the hotel, the more likely the internet is not only not included, it's way overpriced. At least Fairmont gave one way around this. Of course I gave them a unique E-mail address created just for them, so if they spam me I can quickly disable them. But once again I, like most of us, find myself giving up privacy for a few hotel perks.
Submitted by brad on Wed, 2006-10-25 23:13.
In thinking about how to reduce the cost of bringing fiber to everybody (particularly for block-area-networks built by neighbours) I have started wondering if we could build a robot that is able to traverse utility poles by crawling along wires — either power, phone or cable-TV wires. The robot would unspool fiber optic cable behind it and deploy wire-ties to keep it attached. Human beings would still have to eventually climb the poles and install taps or junctions and secure these items, but their job would be much easier.
Robots that can crawl along cables already exist. The hard part is traversing the poles. Now it turns out finding live electric wires is something that’s very easy for a robot to do. They stick out like a live wire in the EM spectrum. The poles of course have insulators, junctions, tie downs and other obstacles. Crossing them may be hard in certain cases (in which case a human would have to help, either by tele-operation, or by climbing the pole.)
It may be possible to have a very small robot that is able to follow the current (easy to tell the lines to the houses from the main lines) and cross a pole like a bug and then, once safely on the other side, pull the larger robot across with a small tether. Again, it won’t always work, but if you can get it to work enough of the time, you can install fiber with far less time and labour than the manual approach. Fiber of course can be tied to power lines because it is made of non-conductive material, though it’s even better if you can run it along phone or cable lines.
Not that any of these companies will want to give permission to competitors. And you want to pull multiple fibers, not so much for the bandwidth — we can do terabits in a single fiber if we want to — but for the backup when one fiber breaks.
If the robots get good enough, they could even string fiber into rural areas, following long chains of power or phone lines with just a single human assistant. Of course overhead wires are going to be more prone to breakage, but with these robots, repairs could be fast and cheap.
There are already robots out there which can crawl storm sewers to install fiber; that’s another good alternative. Indeed, a robot that can crawl real sewage lines to put in fiber which comes out your household stack is not out of the question, if it’s in a strong enough casing.
Submitted by brad on Mon, 2006-10-23 18:22.
Over 15 years ago I proposed that USENET support the concept of “replacing” an article (which would mean updating it in place, so people who had already read it would not see it again) in addition to superseding an article, which presented the article as new to those who read it before, but not in both versions to those who hadn’t. Never did get that into the standard, but now it’s time to beg for it in USENET’s successor, RSS and cousins.
I’m tired of the fact that my blog reader offers only two choices — see no updates to articles, or see the articles as new when they are updated. Often the updates are trivial — even things like fixing typos — and I should not see them again. Sometimes they are serious additions or even corrections, and people who read the old one should see them.
Because feed readers aren’t smart about this, we not only get annoying minor updates; people are also hesitant to make minor corrections because they don’t want to make everybody see the article again.
Clearly, we need a checkbox in updates to say if the update is minor or major. More than a checkbox, the composition software should be able to look at the update, and guess a good default. If you add a whole paragraph, it’s major. If you change the spelling of a word, it’s minor. In addition to providing a good guess for the author, it can also store in the RSS feed a tag attempting to quantify the change in terms of how many words were changed. This way feed readers can be told, “Show me only if the author manually marked the change as major, or if it’s more than 20 words” or whatever the user likes.
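A sketch of that guess, in Python with difflib; the 20-word cutoff echoes the example above and would of course be tunable:

    import difflib

    def classify_update(old: str, new: str, threshold: int = 20):
        """Count changed words and guess whether the edit is minor or major."""
        matcher = difflib.SequenceMatcher(None, old.split(), new.split())
        changed = sum(max(i2 - i1, j2 - j1)
                      for tag, i1, i2, j1, j2 in matcher.get_opcodes()
                      if tag != "equal")
        return ("minor" if changed < threshold else "major"), changed

    label, words = classify_update("teh quick brown fox", "the quick brown fox")
    # ("minor", 1): a feed reader could skip re-showing this post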
Wikis have had the idea of a minor change checkbox for a while, it’s time for blogs to have it too.
Of course, perhaps better would be a specific type of update or new post that preserves thread structure, so that a post with an update is a child of a parent. It would then be seen with the parent by those who have not yet seen the parent, but as an update on its own for those who did see it. For those who skipped the parent (if we know they skipped) the update also need not be shown.
Submitted by brad on Sun, 2006-09-17 10:34.
It’s common in the blogosphere for bloggers to comment on the posts of other bloggers. Sometimes blogs show trackbacks to let you see those comments with a posting. (I turned this off due to trackback spam.) In some cases we effectively get a thread, as might appear in a message board/email/USENET, but the individual components of the thread are all on the individual blogs.
So now we need an RSS aggregator to rebuild these posts into a thread one can see and navigate. It’s a little more complex than threading in USENET, because messages can have more than one parent (ie. link to more than one post) and may not link directly at all. In addition, timestamps only give partial clues as to position in a thread since many people read from aggregators and may not have read a message that was posted an hour ago in their “thread.”
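A toy sketch of the rebuilding step; each post lists the permalinks it references, giving a DAG rather than USENET’s tree (the URLs are made up):

    from collections import defaultdict

    posts = {  # permalink -> permalinks it links to
        "a.example/post1": [],
        "b.example/reply": ["a.example/post1"],
        "c.example/reply": ["a.example/post1", "b.example/reply"],
    }

    children = defaultdict(list)
    for url, parents in posts.items():
        for parent in parents:
            if parent in posts:        # only thread feeds we know about
                children[parent].append(url)

    def show(url, depth=0):
        """Print the thread; a multi-parent post appears under each parent."""
        print("  " * depth + url)
        for child in children[url]:
            show(child, depth + 1)

    for root in [u for u, ps in posts.items() if not any(p in posts for p in ps)]:
        show(root)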
At a minimum, existing aggregators (like bloglines) could spot sub-threads existing entirely among your subscribed feeds, and present those postings to you. You could also define feeds which are unsubscribed but which you wish to see or be informed of postings from in the event of a thread. (Or you might have a block-list of feeds you don’t want to see contributions from.) They could just have a little link saying, “There’s a thread including posts from other blogs on this message” which you could expand, and that would mark those items as read when you came to the other blog.
Blog search tools, like Technorati, could also spot these threads, and present a typical thread interface for perusing them. Both readers and bloggers would be interested in knowing how deep the threads go.
Submitted by brad on Wed, 2006-09-06 11:54.
I’m back from Burning Man (and Worldcon), and though we had a decently successful internet connection there this time, you don’t want to spend time at Burning Man reading the web. This presents an instance of one of the oldest problems in the “serial” part of the online world: how do you deal with the huge backlog of stuff to read from tools that expect you to read regularly?
You get backlogs of your E-mail of course, and your mailing lists. You get them for mainstream news, and for blogs. For your newsgroups and other things. I’ve faced this problem for almost 25 years as the net gave me more and more things I read on a very regular basis.
When I was running ClariNet, my long-term goal list always included a system that would attempt to judge the importance of a story as well as its topic areas. I had two goals in mind for this. First, you could tune how much news you wanted about a particular topic in ordinary reading. By setting how important each topic was to you, a dot-product of your own priorities and the importance ratings of the stories would bring to the top the news most important to you. Secondly, the system would know how long it had been since you last read news, and could dial down the volume to show you only the most important items from the time you were away. News could also simply be presented in importance order and you could read until you got bored.
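That dot-product is only a few lines of Python; the topic names and weights here are invented for illustration:

    my_priorities = {"privacy": 0.9, "space": 0.6, "sports": 0.1}

    def score(story_topics: dict) -> float:
        """Dot-product of my priorities with editor-assigned topic weights."""
        return sum(my_priorities.get(t, 0) * w for t, w in story_topics.items())

    stories = [
        ("NSA ruling", {"privacy": 1.0}),
        ("Ball game", {"sports": 0.8}),
    ]
    for title, topics in sorted(stories, key=lambda s: -score(s[1])):
        print(f"{score(topics):.2f}  {title}")   # read until you get bored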
There are options to do this for non-news, where professional editors would rank stories. One advantage you get when items (be they blog posts or news) get old is you have the chance to gather data on reading habits. You can tell which stories are most clicked on (though not as easily with full RSS feeds) and also which items get the most comments. Asking users to rate items is usually not very productive. Some of these techniques (like using web bugs to track readership) could be privacy invading, but they could be done through random sampling.
I propose, however, that one way or another popular, high-volume sites will need to find some way to prioritize their items for people who have been away a long time and regularly update these figures in their RSS feed or other database, so that readers can have something to do when they notice there are hundreds or even thousands of stories to read. This can include sorting using such data, or in the absence of it, just switching to headlines.
It’s also possible for an independent service to help here. Several toolbars, like Alexa’s and Google’s, already track net ratings, and get measurements of net traffic to help identify the most popular sites and pages on the web. They could adapt this information to give you a way to get a handle on the most important items you missed while away for a long period.
For E-mail, there is less hope. There have been efforts to prioritize non-list e-mail, mostly around spam, but people are afraid any real mail actually sent to them has to be read, even if there are 1,000 of them as there can be after two weeks away.
Submitted by brad on Wed, 2006-08-02 18:28.
There are many proposals out there for tools to stop Phishing. Web sites that display a custom photo you provide. “Pet names” given to web sites so you can confirm you’re where you were before.
I think we have a good chunk of one anti-phishing technique already in place with the browser password vaults.
Now I don’t store my most important passwords (bank, etc.) in my password vault, but I do store most
medium importance ones there (accounts at various billing entities etc.) I just use a simple common
password for web boards, blogs and other places where the damage from compromise is nil to minimal.
So when I go to such a site, I expect the password vault to fill in the password. If it doesn’t, that’s a big warning flag for me. And so I can’t easily be phished for those sites. Even skilled people can be fooled by clever phishes. For example, a test phish to bankofthevvest.com (two “v”s instead of a w, which looks identical in many fonts) fooled even skilled users who check the SSL lock icon, etc.
The browser should store passwords in the vault, and even the “don’t store this” passwords should have a hash stored in the vault unless I really want to turn that off. Then, the browser should detect if I ever type a string into any box which matches the hash of one of my passwords. If my password for bankofthewest is “secretword” and I use it on bankofthewest.com, no problem. “secretword” isn’t stored in my password vault, but the hash of it is. If I ever type in “secretword” to any other site at all, I should get an alert. If it really is another site of the bank, I will examine that and confirm to send the password. Hopefully I’ll do a good job of examining — it’s still possible I’ll be fooled by bankofthevvest.com, but other tricks won’t fool me.
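Here is a minimal sketch of that check in Python; the class and its storage layout are invented, but the point is that only salted hashes are kept:

    import hashlib
    import os

    class PasswordWatch:
        def __init__(self):
            self.salt = os.urandom(16)
            self.seen = {}                # hash -> site the password belongs to

        def _h(self, pw: str) -> bytes:
            return hashlib.pbkdf2_hmac("sha256", pw.encode(), self.salt, 100_000)

        def register(self, site: str, pw: str):
            self.seen[self._h(pw)] = site # store only the hash, never the password

        def check_typed(self, site: str, typed: str):
            home = self.seen.get(self._h(typed))
            if home and home != site:
                return f"Warning: that looks like your {home} password!"
            return None

    w = PasswordWatch()
    w.register("bankofthewest.com", "secretword")
    print(w.check_typed("bankofthevvest.com", "secretword"))  # phish warning fires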
The key needs in any system like this are that it warns you of a phish, and that it rarely gives you a false warning. The latter is hard to do, but this comes decently close. However, since I suspect most people are like me and have a common password they use again and again at “who-cares” sites, we don’t want to be warned all the time. The second time we use that password, we’ll get a warning, and we need a box to say, “Don’t warn me about re-use of this password.”
Read on for subtleties…
Submitted by brad on Mon, 2006-07-24 22:40.
Everybody in the blogosphere has heard something about Alaska’s Ted Stevens calling the internet a series of tubes.
They just heard him wrong. His porn filters got turned off and he discovered the internet was a series of pubes.
(And, BTW, I think we’ve been unfair to Stevens. While it wasn’t high traffic that delayed his E-mail — “an internet” — a
few days, his description wasn’t really that bad… for a senator.)
Submitted by brad on Thu, 2006-07-20 14:46.
Big news today. Judge Walker has denied the motions — particularly the one by the federal government — to dismiss our case against AT&T for cooperating with the NSA on warrantless surveillance of phone traffic and records.
The federal government, including the heads of the major spy agencies, had filed a brief demanding the case be dismissed on “state secrets” grounds. This common law doctrine, which is often frighteningly successful, allows cases to be dismissed, even if they are of great merit, if following through would reveal state secrets.
Here is our brief note which has a link to the decision.
This is a great step. Further application of the state secrets rule would have made legal oversight of
surveillance by spy agencies moot. We can write all the laws we want governing how spies may operate, and how surveillance is to be regulated, but if nobody can sue over violations of those laws, what purpose do they really have? Very little.
Now our allegations can be tested in court.
Submitted by brad on Fri, 2006-07-14 12:32.
Recently IEEE Spectrum published a paper refuting Metcalfe’s law — an observation (not really a law) by Bob Metcalfe that the “value” of a network increases with the square of the number of people/nodes on it. I was asked to be a referee for this paper, and while they addressed some of my comments, I don’t think they addressed the principal one, so I am posting my comments here now.
My main contention is that in many cases the value of a network actually starts declining after a while and becomes inversely proportional to the number of people on it. That’s because noise (such as flamage and spam) and unmanageable signal (too many serious messages) rise with the size and eventually reach a level where the cost of dealing with them surpasses the value of the network. I’m thinking of mailing lists in particular here.
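As a toy model of that contention (my framing, not the paper’s): give each of the $n$ members a Metcalfe-style quadratic benefit, but charge a coping cost that grows faster, since the noise each member must wade through itself grows with the size of the group:

    V(n) = \alpha n^2 - \gamma n^3

Setting $dV/dn = 0$ gives a peak at $n^* = 2\alpha/3\gamma$; past that point, every new member makes the network worth less to everyone.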
You can read my referee’s comments on Metcalfe’s law though note that these comments were written on the original article, before some corrections were made.
Submitted by brad on Thu, 2006-07-13 18:58.
Bruce Schneier today compliments Google on trying out pay-to-perform ads as a means around click-fraud, but worries that this is risky because you become a partner with the advertiser. If their product doesn’t sell, you don’t make money.
And that’s a reasonable fear for any small site accepting pay-to-perform ads. If the product isn’t very good, you aren’t going to get a cut of much. Many affiliate programs really perform poorly for the site, though a few rare ones do well.
However, Google has a way around this. While the first step on Google’s path to success was to make a search engine that gave better results, how they did advertising was just as important. At a time when everybody was desperate for web advertising, and sites were willing to accept annoying flash animations, pop-ups and pop-unders and even adware, Google introduced ads that were purely text. In addition, they had the audacity, it seemed, to insist that advertisers bidding pay-per-click provide popular ads people would actually click through. If people are not clicking on your ad, Google stops running it. They even do this if there are no other ads to place on the page. They had the guts to say, “We’ll sell pay per click, but if your ad isn’t good, we won’t run it.” Nobody was turning down business then, and few are now.
Sites of course don’t want to be paid per click, or a cut of sales. They want a CPM, and that’s about all they want, as long as the ads are otherwise a good match for the site. Per-click costs and percentages are just a means to figuring out a CPM. Advertisers don’t want to pay CPMs, they want to pay for results, like clicks or sales.
Google found a great way to combine the two. They offered pay per click, but they insisted that the clicks generate enough CPM to keep them happy.
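The arithmetic that lets the two be compared is simple; presumably the auction reduces every bid to an expected CPM. The numbers here are invented:

    def ecpm_per_click(cpc: float, ctr: float) -> float:
        """$/click * clicks per impression * 1000 impressions."""
        return cpc * ctr * 1000

    def ecpm_per_sale(payout: float, ctr: float, conversion: float) -> float:
        """Pay-for-performance: only a fraction of clicks become sales."""
        return payout * ctr * conversion * 1000

    ppc = ecpm_per_click(cpc=0.50, ctr=0.02)                     # $10.00 eCPM
    pps = ecpm_per_sale(payout=20.0, ctr=0.02, conversion=0.02)  # $8.00 eCPM
    print(max([("PPC", ppc), ("pay-per-sale", pps)], key=lambda a: a[1]))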
The same will apply here. They will offer pay for performance, but those ads will be competing with bidders who are bidding pay-per-click. Google will run, as it always has, the type of ad that gets the highest results. If you bid pay-per-performance, and the PPCs are bidding higher, your ad won’t run. And even if there are no higher PPCs, if your ad isn’t working and converting into sales and generating revenue for Google, I suspect they will just not run it. They can afford to do this; they are Google.
And so they will get the best of both worlds again. Advertisers who can come up with products that can sell through ads will pay for actual sales, and love how they can calculate how well it does for them. Google will continue to get good CPMs, which is what they care about, and what Adsense partners (including myself) care about. And they will have eliminated clickfraud at least on these types of ads. Once again they stay on top.
(Disclaimer: I am a consultant to Google, and am in their Adsense program. If you aren’t in it, there is a link in the right-hand bar you can use to join that program. I get a pay for performance credit if you do. Unlike Google’s PPC ads, where Adsense members are forbidden by contract from encouraging people to click on the ads, there is no need for such strictures against pay for performance ads; in fact there’s every reason to encourage it.)
Submitted by brad on Thu, 2006-07-06 19:19.
You’ve seen me write before of a proposal I call addresscrow to promote privacy when items are shipped to you. Today I’ll propose something more modest, with non-privacy applications.
I would like PayPal, and other payment systems (Visa/MC/Google Checkout) to partner with the shipping companies such as UPS that ship the products bought with these payment systems.
They would produce a very primitive escrow, so that payment to the seller was transferred upon delivery confirmation by the shipper. If there is no delivery, the money is not transferred, and is eventually refunded. When you sign for the package (or if you have delivery without signature, when it’s dropped off) that’s when the money would be paid to the vendor. You, on the other hand, would pay the money immediately, and the seller would be notified you had paid and the money was waiting pending receipt. The payment company would get to hold the money for a few days, and make some money on the float, if desired, to pay for this service.
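A rough sketch of that flow as a little state machine; the event names, and the idea that the shipper’s tracking feed drives it, are my own framing:

    class Escrow:
        def __init__(self, amount: float):
            self.amount = amount
            self.state = "FUNDED"            # the buyer has already paid

        def on_delivery_confirmed(self):     # driven by the shipper's tracking data
            if self.state == "FUNDED":
                self.state = "RELEASED"      # money is transferred to the seller

        def on_deadline_passed(self):        # no delivery scan within the window
            if self.state == "FUNDED":
                self.state = "REFUNDED"      # the buyer gets the money back

    order = Escrow(49.95)
    order.on_delivery_confirmed()
    assert order.state == "RELEASED"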
Of course, sellers could ship you a lump of coal and you would still pay for it by signing for it. However, this is a somewhat more overt fraud that, like all fraud, must be dealt with in other ways. This system would instead help eliminate delays in shipping, since vendors would be highly motivated to get things shipped and delivered, and it would eliminate any communications problems standing in the way of getting the order processed. There is nothing much in it for the vendor, of course, other than a means to make customers feel more comfortable about paying up front. But making customers feel more comfortable is no small thing.
Extended, the data from this could go into reputation systems like eBay’s feedback, so that it could report for buyers how promptly they paid, and for sellers how promptly they shipped or delivered. (The database would know both when an item
was shipped and when it was received.) eBay has resisted the very obvious idea of having feedback show successful PayPal payment, so I doubt they will rush to do this either.
Submitted by brad on Sat, 2006-06-10 16:37.
Ebayers are familiar with what is called bid “sniping.” That’s placing your one, real bid just a few seconds before auction close. People sometimes do it manually; more often they use auto-bidding software which performs the function. If you know your true max value, it makes sense.
However, it generates a lot of controversy and anger. This is for two reasons. First, there are many people on eBay who like to play the auction as a game over time, bidding, being outbid and rebidding. They either don’t want to enter a true-max bid, or can’t figure out what that value really is. They are often outbid by a sniper, and feel very frustrated, because given the time they feel they would have bid higher and taken the auction.
This feeling is vastly strengthened by the way eBay treats bids. The actual buyer pays not the price they entered, but the price entered by the 2nd place bidder, plus an increment. This makes the 2nd place buyer think she lost the auction by just the increment, but in fact that’s rarely likely to be true. But it still generates great frustration.
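The settlement rule in question, as a couple of lines of Python (assuming at least two bids):

    def settlement_price(bids: list, increment: float) -> float:
        """Winner pays the runner-up's bid plus one increment,
        capped at the winner's own maximum."""
        top, second = sorted(bids, reverse=True)[:2]
        return min(top, second + increment)

    print(settlement_price([100.00, 62.00, 40.00], 1.00))  # winner pays 63.00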
The only important question about bid sniping is: does it benefit the buyers who use it? If it lets them take an auction at a lower price, because a non-sniper doesn’t get in the high bid they were actually willing to make, then indeed it benefits the buyer, and earns the seller (and, interestingly, eBay) slightly less.
There are many ways to write the rules of an auction. They all tend to benefit either the buyer or the seller by some factor. A few have benefits for both, and a few benefit only the auction house. Most are a mix. In most auction houses, like eBay, the auction house takes a cut of the sale, and so anything that makes sellers get higher prices makes more money on such auctions for the auction house.
Read on…
Submitted by brad on Wed, 2006-06-07 09:12.
We often travel as a couple, and of course both have the same e-mail and web addictions that all of you probably have. Indeed, these days if you don’t get to your e-mail and other stuff for a long period, it becomes unmanageable when you return. For this reason, we bring at least one, and often two laptops on trips.
When we bring one, it becomes a time-waster. Frankly, our goal is to spend as little time in our hotel room on the net as possible, but it’s still very useful not just for e-mail but also travel bookings and research, where to eat etc. When we have only one computer — or when we have two but the hotel only provides a connection for one — it means we have to spend much more time in the hotel room.
It would be nice to see a laptop adapted for couples’ use. In many cases, this could be just a little software. Many laptops already can go “dual head”, putting out a different screen on their VGA connector than goes to the built-in panel. So a USB keyboard and a super-thin laptop-sized flat panel would be all you need, along with power for the panel. In the future, as more and more hotel rooms adopt HDTVs, one could use that instead of the display.
Of course, desktop flat panels are bigger than laptop panels, so this would need to be a modified version of the same panels put into laptops, which are readily available. A special connector for it, with power, would make this even better. The goal is something not much larger than a clipboard and mini-keyboard. It could even be put in an ultrathin laptop case (with no motherboard, drives or even battery.)
Now, as to software. In Linux, having two users on two screens is already pretty easy. It’s just a bit of configuration. I would hope the BSD-based Mac is the same. Windows is more trouble, since it really doesn’t have as much of a concept of two desktops with two users logged in. (Indeed, I have wondered why we haven’t seen a push for dual-user desktop computers, since it’s not at all uncommon to see a home office with two computers in it for two members of the family, but for which both are used together only rarely.)
On Windows, you would probably need to just have one user logged in, and both people would be that user to Windows. However, you would have different instances of Firefox/Mozilla, for example, which can use different profiles so each person has their own browser settings and bookmarks, their own e-mail settings etc. It would be harder to have both people run their own MS Word, but it might be doable.
Some variants of the idea include making a “thin client” box that plugs into the main computer via USB or even talks bluetooth to it, and has its own power supply. It might do something as simple as VNC to a virtual screen on the main box. Or of course it could plug into ethernet but that’s often taken on the main box to talk to the hotel network if the hotel has a wired connection. (More often they have wireless now.) The thin client could also act as a hub to fix this.
If you want to bring two laptops, you can make things work by using internet connection sharing over wired or wireless ad-hoc network, though it’s much more work than it should be to set up.
But my goal is to avoid the weight, size and price of a 2nd laptop, though price is not that big an issue because I am presuming one has other uses for it.
Submitted by brad on Mon, 2006-05-15 14:52.
When you set up a mail client, you have to configure mail reading servers (either IMAP or POP) and also a mail sending server (SMTP). In the old days you could just configure one SMTP server, with no userid or password. Due to spam-blocking, roaming computers have it hard, and either must change SMTP servers as they roam, or use one that has some sort of authentication scheme that opens it up to you and not everybody.
Worse, many ISPs now block outgoing SMTP traffic, insisting you use their SMTP server (usually without a password.) Sometimes your home site has to run an SMTP server at a non-standard port to get you past this.
I propose that IMAP (and possibly POP) include an extension so that the IMAP server can offer your client information on how to send mail. At the very least, it simplifies configuration for users, who now only have to provide one server identity. From there the system configures itself. (Of course, the other way to do this is to identify such servers in DHCP.)
This also simplifies the situation where you want to use a different SMTP server based on which mail account you are working on, something DHCP can't handle.
The IMAP server would offer a list of means to send mail. These could include a port number, and a protocol, which could be plain SMTP, or SMTP over SSL or TLS, or even some new protocol down the road. And it could also offer authentication, because you have already authenticated to the IMAP server with your userid and password. It could tell you a permanent userid and password you can use with the SMTP server, or it could tell you that you don't need one (because your IP address has been enabled for the duration of your IMAP session in the IMAP-before-SMTP approach.) It could also offer a temporary authentication token, which is good only for that session or some period of time after it. Ideally we would have IMAP over SSL/TLS, and so these passwords and tokens would not be sent in the clear.
With a list of possible methods, the client could choose the best one. Or, of course, it could choose one that was programmed in by a user who did custom-configure their own SMTP information.
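To make the shape of this concrete, here is what a client might do with such an extension. SENDMAIL is not a real IMAP capability; the response format below is entirely hypothetical:

    RESPONSE = (
        "* SENDMAIL (HOST mail.example.com PORT 587 PROTO smtp-tls AUTH token "
        "TOKEN d2hhdGV2ZXI=)\r\n"
        "* SENDMAIL (HOST mail.example.com PORT 25 PROTO smtp AUTH ip-enabled)\r\n"
    )

    def parse_sendmail(lines: str):
        """Turn each hypothetical SENDMAIL offer into a dict of fields."""
        offers = []
        for line in lines.splitlines():
            if line.startswith("* SENDMAIL ("):
                fields = line[len("* SENDMAIL ("):].rstrip(")").split()
                offers.append(dict(zip(fields[::2], fields[1::2])))
        return offers

    offers = parse_sendmail(RESPONSE)
    best = sorted(offers, key=lambda o: o["PROTO"] != "smtp-tls")[0]  # prefer TLS
    print(best["HOST"], best["PORT"], best["AUTH"])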
It's also worth noting that it would be possible, down the road, to use the very same IMAP port for a slightly modified SMTP session to an IMAP server set up to handle this. This could handle firewalls that block all but that port. However, the main benefit is to the user with simpler configuration.
Submitted by brad on Thu, 2006-05-11 00:46.
A lot of the time, on web forms, you will see some sort of structured field, like an IP address, or credit card number, or account number, broken up into a series of field boxes. You see this in program GUIs as well.
On the surface it makes sense. Never throw away structure information. If you’re parsing a human name, you may never parse a plain string as reliably as a set of boxes for first, last and middle names.
But think about it. The multi-box idea, taken to extremes, would have every form accept an e-mail address with a username box and a domain name box, with an @ printed between them. This would stop you from entering e-mail addresses without at-signs. But fortunately nobody does that. We can always parse an e-mail address, and we don’t want to subject people to the pains of typing it in a strange way.
Now I have to admit I’ve been tempted sometimes on international phone numbers, because parsing them is hard. The number of digits in the various components, be they area codes or exchanges, varies from region to region and I am not sure anybody has written a perfect parser. But nor do people want to enter phone numbers with tabs. And they want to cut and paste. Remember this when designing your next web form.
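For contrast, the parse-the-string approach is usually only a few lines; a simplistic sketch (real international phone parsing is far hairier, as noted):

    import re

    def parse_email(s: str):
        """Accept a pasted address and recover the structure afterward."""
        m = re.fullmatch(r"\s*([^@\s]+)@([^@\s]+)\s*", s)
        return m.groups() if m else None        # (user, domain) or None

    def normalize_phone(s: str) -> str:
        """Strip formatting, keeping digits and any + sign."""
        return re.sub(r"[^\d+]", "", s.strip())

    print(parse_email("  brad@example.com "))    # ('brad', 'example.com')
    print(normalize_phone("+1 (415) 555-0100"))  # +14155550100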
Submitted by brad on Fri, 2006-03-24 15:00.
As I’ve written before, Google’s Adsense program is for many people bringing about the dream of having a profitable web publication. I have a link on the right of the blog for those who want to try it. I’ve been particularly impressed with the CPMs this blog earns, which can be as much as $15. The blog has about 1000 pageviews/day (I don’t post every day) and doesn’t make enough to be a big difference, but a not impossible 20-fold increase could provide a living wage for blogging. Yahoo publisher’s blog ads, which some of you are seeing in the RSS feed, have been a miserable failure, and will be removed in the next software upgrade. They are poorly targeted and have earned me, literally, not even a dollar.
Recently, however, I noticed a way in which the Google targeting engine is too good, from my standpoint. From time to time my web sites or blog will get linked from a very high traffic site. This week the 4th amendment shipping tape was a popular stumble-upon, for example. I’ve also been featured from time to time in Slashdot, boingboing and various other popular sites.
When this happens, it’s not a money maker because the click-throughs and CPMs drop way down. This is not too surprising. The people following a quick link are less likely to be looking for the products Google picks to advertise. However, more recently I saw high traffic bringing down not just the CPM, but even the total dollars! I theorize that Google, seeing poor clickthrough, cycles out the normally lucrative ads to try others. So even the normal visitors, who have not gone away, are seeing more poorly chosen ads. Or it could just be randomness that I’m seeing a pattern in.
Solution: Consider the referer when placing ads. If the clickthrough is poor on a given referer (like slashdot or boingboing) then play with the ads to hunt for better clickthrough. For the more regular referers (which are typically internal, the result of searches and regular readers) stick to the ads that typically perform well with that group.
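A sketch of the bookkeeping this would take, with invented thresholds:

    from collections import defaultdict

    stats = defaultdict(lambda: {"views": 0, "clicks": 0})

    def record(referer: str, clicked: bool):
        stats[referer]["views"] += 1
        stats[referer]["clicks"] += int(clicked)

    def choose_ad_pool(referer: str) -> str:
        """Experiment only where the usual ads demonstrably underperform."""
        s = stats[referer]
        if s["views"] < 100:
            return "proven"          # too little data to justify churning ads
        ctr = s["clicks"] / s["views"]
        return "experimental" if ctr < 0.002 else "proven"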
Submitted by brad on Tue, 2006-03-14 22:18.
A buzzword in the cable/ilec world is IPTV, a plan to deliver TV over IP. Microsoft and several other companies have built IPTV offerings, to give phone and cable companies what they like to call a “triple play” (voice, video and data) and be the one-stop communications company.
IPTV offerings have you remotely control an engine at the central office of your broadband provider which generates a TV stream which is fed to your TV set. Like having the super set-top box back at the cable office instead of in your house. Of course it requires enough dedicated bandwidth to deliver good quality TV video. That’s 1.5 to 2 megabits for regular TV, 5 to 10 for HDTV with MP4.
Many of the offerings look slick. Some are a basic “network PVR” (try to look like a Tivo that’s outsourced) and Microsoft’s includes the ability to do things you can’t do at your own house, like tune 20 channels at once and have them all be live in small boxes.
I’m at the pulver.com Von conference where people are pushing this, notably the BellSouth exec who just spoke.
But they’ve got it wrong. We don’t need IPTV. We want TVoIP or perhaps more accurately Vid-o-IP.
That’s a box at your house that plays video, and uses the internet to suck it down. It may also tune and record regular TV signals (like MythTV or Windows Media Center.)
Now it turns out that’s more expensive. You have to have a box, and a hard drive and a powerful processor. The IPTV approach puts all that equipment at the central office where it’s shared, and gets economies of scale. How can that not be the winner?
Well for one, TVoIP doesn’t require quality bandwidth. You can even use it with less bandwidth than a live stream takes. That’s because after people get TVoIP/PVR, they don’t feel inclined to surf. IPTV is still too much in the “watch live TV” world with surfing. TVoIP is in the poor man’s video-on-demand world (like NetFlix and Tivo) where you pick what you might want to see in advance, and later go to the TV to pick something from the list of what’s shown up. Turns out that’s 95% as good as Video on Demand, but much cheaper.
But more importantly, it’s under your control. Time and time again, the public has picked a clunkier, more expensive, harder to maintain box that’s under their own control over a slick, cheap service that is under the control of some bureaucracy. PCs over mainframes. PCs over Network Computers and Timesharing and SunRays. Sometimes it’s hard to explain why they did this for economic reasons, or even for quality reasons.
They did it because of choice. The box in your own house is, ideally, a platform you own. One that you can add new things to because you want them, and 3rd party vendors can add things to because you demand them. Central control means central choice of what innovations are important. And that never works. Even when it’s cheaper.
If the set top box were to remain a set top box, a box you can’t control, then IPTV would make good sense. But we don’t want it to be that. It’s now time to make it more, and companies are starting to offer products to make it more. We want a platform. Few people want to program it themselves, but we all want great small companies innovating and coming up with the next new thing. Which TVoIP can give us and IPTV won’t. Of course, there are locked TVoIP boxes, like the Akimbo and others, but they won’t win. Indeed, some efforts, like the trusted computing one, seek to make the home box locked, instead of an open platform, when it comes to playing media (and thus locking linux out of the game.) A truly open platform would see the most innovation for the user.
Disclaimer, I am involved with BitTorrent, which makes the most popular software used for downloading video over the internet.
Submitted by brad on Sat, 2006-03-11 16:42.
In most browsers, the default style presents text adjacent to all sides of the browser window, with no margin. This is a throwback to the early days of screen design, when screen real estate was considered so valuable that deliberately wasting it with whitespace was sacrilege.
Of course, in centuries of design on paper, nobody ever put text right up to the margins. Everybody knows it’s ugly and not what the eye wants. Thus, when people see a web page using the default style, which I end up with myself out of laziness, they react to it as ugly.
Screens are now big enough that it’s time to change the default style to one that is easier to read. And that means margins. If a page designer wants to put stuff up against the edges, they can easily define their own stylesheets now to do this, so let them do it. I doubt they will ever put text there, though they might put graphics or their own custom margins. If text to the edges is a choice that nobody would make if given the option, it sure seems like a silly default to have. It won’t break anything; you can just make the window wider, or make it a user option (which I believe it is in some browsers, but rarely set).
And then more people could use the default for quick pages without having to think about style every time they spit out a web page.