Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.

This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see menu on right) and some are sub blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.

Math getting better? -- CitizenRe

(Note: I have posted a followup article on CitizenRe as a result of this thread. Also a solar economics spreadsheet.)

I’ve been writing about the economics of green energy and solar PV, and have been pointed to a very interesting company named CitizenRe. Their offering suggests a major cost reduction to make solar workable.

They’re selling PV solar in a new way. Once they go into operation, they install and own the PV panels on your roof, and you commit to buy their output at a rate below your current utility rate. There are few apparent catches, though there are some risks if you need to move (they try to make that easy, and will move the system once for those who sign a long-term contract). You are also responsible for damage, so you either take the risk of panel damage or insure against it. Typically they provide an underpowered system and insist you live where you can sell back excess to the utility, which makes sense.

But my main question is, how can they afford to do it? They claim to be making their own panels and electrical equipment. Perhaps they can produce these at a price low enough to make this affordable. Of course they take the rebates and tax credits, which makes a big difference. Even so, they seem to offer panels even in lower-insolation places like New England, and to beat the prices of cheaper utilities which only charge around 8 cents/kWh.

My math suggests that with typical numbers of 2 kWh/peak watt/year, to deliver 8 cents/kWh for 25 years requires an installed cost of under $2/peak watt — even less in the less sunny places. Nobody is even remotely close to this in cost, so this must require considerable reduction from rebates and tax credits.
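For the curious, here is that calculation in script form; a sketch, where the 8% discount rate is my assumption and the other numbers are as above:

```python
# Present value of the power one peak watt sells over the system's life.
# ASSUMPTION: 8%/year discount rate; yield, price and lifetime from the text.
ANNUAL_KWH_PER_PEAK_WATT = 2.0    # typical yield, kWh per peak watt per year
PRICE_PER_KWH = 0.08              # cheap-utility rate, $/kWh
YEARS = 25
DISCOUNT_RATE = 0.08

revenue = sum(
    ANNUAL_KWH_PER_PEAK_WATT * PRICE_PER_KWH / (1 + DISCOUNT_RATE) ** year
    for year in range(1, YEARS + 1)
)
print(f"Present value of revenue per peak watt: ${revenue:.2f}")
# -> $1.71: the installed system must cost under about $2/peak watt
# to break even against 8 cent grid power.
```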

A few other gotchas — if you need to re-roof, you must pay about $500 to temporarily remove up to 5kw of panels. And there is the risk that energy will get cheaper, leaving you locked in at a higher rate since you commit to buy all the power from the panels. While many people fear the reverse — grid power going up in price, where this is a win — in fact I think that energy getting cheaper is actually a significant risk as more and more money goes into cleantech and innovation in solar and other forms of generation.

It’s interesting that they are offering a price to compete with your own local utility. That makes sense in a “charge what the market will bear” style, but it would make more sense to market only to customers buying expensive grid power in states with high insolation (ie. the southwest.)

Even with the risks this seems like a deal with real potential — if it’s real — and I’ll be giving it more thought. Of course, for many, the big deal is that not only do they pay a competitive price, they are much greener, and even provide back-up power during the daytime. I would be interested if any readers know more about this company and their economics.

Update: There is a really detailed comment thread on this post. However, I must warn CitizenRe affiliates that while they must disclose their financial connection, they must also not provide affiliate URLs. Posts with affiliate URLs will be deleted. Some salient details: There is internal dissent. I and many others wonder why an offer this good sounding would want to stain itself by being an MLM-pyramid. Much stuff still undisclosed, some doubt on when installs will take place.


3-D printing is getting cheaper. This week I saw a story about a hacked-together 3-D printer, costing $2,000, that could print in unusual cheap materials like play-doh and chocolate frosting. Soon, another 3-D technology will get cheap — the 3-D body scan.

I predict soon we’ll see 3-D scanning and reproduction become a consumer medium. It might be common to be able to pop into a shop and get a quick scan and lifelike statue of yourself, a pet or any object. Professional photographers will get them — it will become common, perhaps, to have a 3-D scan done of the happy couple at the wedding, with resultant statue. Indeed, soon we’ll see this before the wedding, where the couple on the wedding cake are detailed statues of the bride and groom.

And let’s not forget baby “portraits” (though many of today’s scanning processes require the subject to be still). At least small children can be immortalized. Strictly, only the scanners need to get cheap first, because if the statue isn’t made of food it can be sent later in the mail from a central 3-D printer.

The scanners may never become easily portable, since they need to scan from all sides or rotate the subject, but they will also eventually become used by serious amateur photographers, and posing for a portrait may commonly also include a statue, or at least a 3-d model in a computer (with textures and colours added) that you can spin around.

This will create a market for software that can take 3-D scans and easily make you look better. Thinner, of course, but perhaps even more muscular or with better posture. Many of us would be a bit shocked to see ourselves in 3-D, since few of us are models. As we’ll quickly have more statues than we know what to do with, we may get more interested in the computer models, or in ephemeral materials (like frosting) for this photostatuary.

This was all possible long ago if you could hire an artist, and many a noble had a bust of himself in the drawing room. But what will happen when it gets democratized?

Virtual right-of-way alternatives for BRT

In one of my first blog posts, I wrote about virtual right-of-way, a plan to create dedicated right of way for surface rail and bus transit, but to allow cars to use the RoW as long as they stay behind, and never in front of the transit vehicle.

I proposed one simple solution, that if the driver has to step on the brakes because of a car in the way, a camera photographs the car and plate, and the driver gets a fat ticket in the mail. People would learn you dare not get into the right-of-way if you can see a bus/train in your rearview mirror.

However, one downside stuck with me, which is that people might be so afraid of the ticket that they make unsafe lane changes in a hurry to get out of the way of the bus, and cause accidents. Even a few accidents might dampen enthusiasm for the plan, which is a shame because why leave the RoW vacant so much of the time?

San Francisco is planning BRT (Bus Rapid Transit) which gives buses dedicated lanes and nice “stations” for Geary St., its busiest bus corridor. However, that’s going to cut tremendously into Geary’s car capacity, which will drive traffic onto other streets. Could my plan for V-Row (Virtual Right of Way) help?

My new thought is to make travel in the V-Row a privilege, rather than something any car can do as long as it stays out of the way of the bus. To do that, car owners would need to sign up for V-Row access, and purchase a small receiver to mount in their car. The receiver would signal when a bus is approaching with a nice wide margin, to tell the driver to leave the lane. It would get louder and more annoying the closer the bus got. The ticket would still be processed by a camera on the front of the bus triggered by the brakes.

Non-registered drivers could still enter the V-Row, whether it was technically legal or not. If they got a photo-ticket, it might be greater than the one for registered drivers who have the alerting device.

I’ve thought of a few ways to do the alert. If there are small, short range radio transmitters dotted along the route, the bus could tell them to issue the warning as they approach. They could also flash LEDs (avoiding the need for the special receiver.) Indeed, they could even broadcast on a very low power open FM channel, again obviating the need for a special device if you don’t mind not running your stereo for something else. (The broadcast would be specific, “Bus approaching on Westbound Geary at 4th ave” so you are not confused if you hear a signal from another line or another direction.) Infrared or microwave short-range transmission would also be good. The transmitters would include actual lat/long coordinates of the zones to clear, so cars with their own GPS could get an even more accurate warning.

It might even be possible to have an infrared or ultra-high-frequency radio transmitter on the front of the bus, which would naturally only transmit to people in front of the bus. IR could be blocked by fog, radio could leak to other zones, so there might be bugs to work out there. The receiver could at least know what direction it is going (compass, if not GPS) and know to ignore signals from perpendicular vehicles.
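To make the receiver logic concrete, here is a minimal sketch combining the zone broadcast and the compass check; the message format, distances and thresholds are all my inventions:

```python
import math

def should_alert(msg, my_lat, my_lon, my_heading_deg):
    """Decide whether a clear-the-lane broadcast applies to this car.

    msg is a hypothetical broadcast, e.g.:
      {"route": "Geary", "direction_deg": 270,           # westbound
       "zone": ((37.781, -122.461), (37.782, -122.455))}  # segment to clear
    """
    # Ignore buses running perpendicular or opposite to us (compass check).
    diff = abs((msg["direction_deg"] - my_heading_deg + 180) % 360 - 180)
    if diff > 45:
        return False
    # Alert only if we are inside or near the zone the bus wants cleared.
    return dist_to_segment(my_lat, my_lon, *msg["zone"]) < 0.0005  # ~50 m

def dist_to_segment(lat, lon, p1, p2):
    """Rough planar point-to-segment distance, in degrees."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(lat - x1, lon - y1)
    t = max(0.0, min(1.0, ((lat - x1) * dx + (lon - y1) * dy) / (dx * dx + dy * dy)))
    return math.hypot(lat - (x1 + t * dx), lon - (y1 + t * dy))
```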

Each bus, of course, will know via GPS where it is and where it’s going, giving it the ability to transmit by radio or even IR the warning to clear the V-Row ahead of it.

V-Row is vastly, vastly cheaper than other forms of rapid transit. In fact, my proposal here might even make it funded by the drivers who are eager to make use of the lane. Many would, because going behind the bus will be a wonderful experience compared to rush hour traffic, since the traffic lights are going to be synchronized to the bus in most BRT plans.

One could even imagine, at higher pavement cost, a lane to pass the bus when it stops. Then the cars go even faster, but the driver signals a few seconds before she’s going to pull out, and all cars in the lane would stop to wait for the bus, then follow it. Careful algorithms could plan things properly, based on bus spacing, wait times, and signals from drivers or sensors, to identify where the gaps in the V-Row that can accept cars are, and to signal cars that they can enter. (The sensor would also, when you’re not in the V-Row, tell you when it’s acceptable to enter it.)

During periods of heavy congestion, however, there may not be anywhere for cars leaving the V-Row to go in the regular lanes without congesting them more. Still, it’s not going to be worse there than it is with no cars allowed in the bus right-of-way; at most it gets that bad. It may be the case that bus drivers could command all cars out of the V-Row (even behind the bus) because congestion is too high, or because transit vehicles are getting too closely spaced due to the usual variations of transit. (In most cases, detecting transit vehicles that are very close would be automatic, and cars would be commanded not to enter those zones.)

There are many other applications for a receiver in a car to receive information on where to drive, including automatic direction around congestion, accidents and construction. I can think of many reasons to get one.

Some BRT plans call for the dedicated right-of-way to have very few physical connections with the ordinary streets. This might appear to make V-Row harder, but in fact it might make planning easier. Cars could be allowed in at controlled points, like metering lights, and commanded to leave at controlled points. In that case there would be no tickets, except for cars that pass an exit point they were commanded to leave at. If the system told you to stay in a lane and was wrong, and a bus came up behind you, it would not be your fault, but nor would it be so frequent as to slow the bus system much.

16 years of EFF next Thursday

Join me next Thursday (one-eleven) at the one-eleven Minna gallery in San Francisco to celebrate EFF’s 16th year. From 7 to 10pm. Suggested donation $20. Stop by if you’re at Macworld.

Details at

More eBay feedback

A recent Forbes item pointed to my earlier posts on eBay feedback, so I thought it was time to update them. Note also the eBay tag for all posts on eBay, including comments on the new non-feedback rules.

I originally mused about blinding feedback or detecting revenge feedback. It occurs to me there is a far, far simpler solution. If the first party leaves negative feedback, the other party can’t leave feedback at all. Instead, the negative feedback is displayed both in the target’s feedback profile and also in the commenter’s profile as a “negative feedback left.” (I don’t just mean how you can see it in the ‘feedback left for others’ display. I mean it would show up in your own feedback that you left negative feedback on a transaction as a buyer or seller. It would not count in your feedback percentage, but it would display in the list a count of negatives you left, and the text response to the negative made by the other party if any.)

Why? Well, once the first feedbacker leaves a negative, how much information is there, really, in the response feedback? It’s a pretty rare person who, having been given a negative feedback, is going to respond with a positive! Far more likely they will not leave any feedback at all if they admit the problem was their fault. Or they will leave revenge. So if there’s no information, it’s best to leave it out of the equation.

This means you can leave negatives without fear of revenge, but it will be clearly shown to people who look at your profile whether you leave a lot of negatives or not, and they can judge from comments if you are spiteful or really had some problems. This will discourage some negative feedback, since people will not want a more visible reputation of giving lots of negatives. A typical seller will expect to have given a bunch of negatives to deadbeat buyers who didn’t pay, and the comments will show that clearly. If, however, they have an above average number of disputes over little things, that might scare customers off — and perhaps deservedly.
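A tiny sketch of the proposed rule, with hypothetical field names, just to make the mechanics concrete:

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    positives: list = field(default_factory=list)
    negatives_received: list = field(default_factory=list)
    negatives_left: list = field(default_factory=list)  # displayed, but not
                                                        # counted in the percentage

@dataclass
class Feedback:
    author: str
    rating: str       # "positive" or "negative"
    text: str
    reply: str = ""   # the target's text response, shown in both profiles

def leave_feedback(tx, profiles, author, rating, text):
    """tx: {"parties": ("buyer", "seller"), "first": None-or-Feedback}."""
    first = tx["first"]
    if first and first.rating == "negative" and author != first.author:
        raise PermissionError("A negative was already left; only a text reply is allowed.")
    fb = Feedback(author, rating, text)
    if first is None:
        tx["first"] = fb
    target = next(p for p in tx["parties"] if p != author)
    if rating == "negative":
        # Shows up in BOTH profiles: against the target, and as a
        # "negative left" in the author's own profile.
        profiles[target].negatives_received.append(fb)
        profiles[author].negatives_left.append(fb)
    else:
        profiles[target].positives.append(fb)
    return fb

profiles = {"buyer": Profile(), "seller": Profile()}
tx = {"parties": ("buyer", "seller"), "first": None}
leave_feedback(tx, profiles, "seller", "negative", "Deadbeat buyer, never paid.")
try:
    leave_feedback(tx, profiles, "buyer", "positive", "No, it was your fault!")
except PermissionError as e:
    print(e)   # the response is reduced to a text reply on the negative
```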

I don’t know if eBay will do this, so I’ve been musing that it might be time for somebody to make an independent reputation database for eBay, and tie it in with a plugin like ShortShip. This database could spot revenge feedback, note the order of feedbacks, and allow more detailed commentary. Of course, if eBay tries to stop it, it has to be a piece of software that does all the eBay fetching from users’ machines rather than a central server.

Rebate experiences

I wrote earlier about the controversial topic of discriminatory pricing, where vendors try to charge different customers different prices, usually based on what they can afford or will tolerate. One particularly vexing type of such pricing is the mail-in rebate. Mail-in rebates do two things. In their pure form, they give a lower price to people willing to spend some time on the bureaucracy. As such, they work at charging richer customers more, because richer customers tend to value time more than money compared to poorer customers.

However, they are rarely that simple. Some products offer rebates so ridiculously low it’s not worth anybody’s time to process them — they are not much better than a trick. With higher rebates, often the full price is inflated to make the discount appear larger than it is. This can also be a trick. A person who has decided she will not do rebates should normally never buy such a product; however, in many cases people do buy them, and never get around to processing the rebate.

While the vendors never release figures, clearly many people never get their rebate. Companies that manage rebates can in fact make fairly reliable promises about how many of the rebates will actually be redeemed. While I suspect the largest reason for non-redemption is “not getting around to it,” in many cases rebate programs work to make it hard to redeem. They will make the redemption process as complex as possible, and not redeem on any little error. Some companies have even been found to have fraudulently failed to redeem correctly prepared rebate forms, waiting for customers to complain and paying only if they do. Of course, few customers complain, as it’s even more work, and of those who do, few retain the documentation necessary for a complaint. In many cases, customers do not even keep note of what rebate requests they sent out. Rebate companies tend to deliberately take as long as possible — usually several months — to process rebates. This is partly to keep the float on the money, but also I suspect to make people forget about what they are waiting for.

As such, I avoid most rebates, but I do do some of them. In particular, if I can do rebates in bulk, it can be worthwhile. In this case (usually around the holidays) I will gather together many rebates and fill them out all at once. I took a sheet of laser-printer address labels and printed out stickers with all the common items desired on rebate forms, including the name/address stickers I already have, and stickers with a special E-mail address and a free voicemail-only phone number, to speed up the process.

This year, several rebates “offered” online processing. This turns out to save time for the company, not for you. You fill in the information (saving them data-entry work) and it prints out your rebate form, which you must still mail in along with the original UPC and some form of original receipt. (Fry’s has automated their end of the rebate process, printing rebate receipts and rebate forms on thermal printers at the cash register.)

One of the companies seemed like an even nastier trick. On my first visit the site was incredibly slow, taking 30 seconds per page in a multi-step process. A later visit was OK. They of course do nothing to make things easier, like re-use of data on a second rebate (including some of the famous “double rebate” products). One thing they do offer which is very positive is payment of your rebate via paypal, which has two giant benefits — no need for a trip to the bank, and easy tracking of when you are repaid. In addition, it eliminates the common trick of printing rebate cheques with “not valid after…” legends set for the very near future, another way they block redemption.

Onrebate also offers quick payment, if you let them keep about 10% of your rebate. Of course this is a bad deal to just get money 2 months sooner, but we know people fall for it. As an experiment, I filed two rebates with them, one with the instant payment and one without. I got the notice of processing on the instant payment one first, saying I would be paid within a couple of weeks. On the other hand I got the money on the other one first! E-mail notification is positive for tracking, of course. Some companies go the other way. I received a $10 rebate check recently with no indication, other than the name of the general rebate processing company, of what it was for. This helps confuse people about what rebates they have received and not received.

Even with the streamlined bulk process, however, it took too much time this year. One needs to check that one has followed all the rules, which often vary. Some demand signatures, some demand emails, some demand phone numbers. Some demand copies of receipts, some demand originals. Some demand web processing. Almost all demand original UPCs, which can be hard work to cut out of products. Some demand copies. A quick and easy idea for “copies” is to use a digital camera to take pictures of the various items. This also gives a quick record you can go back and check should you have the inclination. Nothing says the copies have to be very good. Most households don’t have photocopiers any more, but almost all have digital cameras and printers, which is even easier than a scanner.

(I also have a small sheetfed scanner I use for my paperless home efforts, but it has problems with thermal paper receipts.)

We’ll never see this become easy, because of course the rebate management companies want the redemption rate to stay low. I presume some of them even market the low rates to the vendors; otherwise we would not see the “free after rebate” concept that has become more common.

I filed claims for $290 in rebates this December. So far one $60 (paypal) and one $10 have come in, and the expedited paypal rebates have not. I don’t expect to see much before late February, however.

Outside of bulk processing or very good rebate deals, the non-redemption rate seems to make it better to always check if there is a non-rebated product at a good price. Figure out your own “discount rate” for how often you personally complete rebates and how often you actually receive money. I doubt many get 100%. Then factor in a value on your time — what do you get paid per hour, figuring 2000 to 2500 hours for salaried people? Expect to spend 10 to 30 minutes on a rebate form, including post office trips, bank trips etc. (This is much lower if you regularly go to these places, or have at-home mail pickup as I do.)
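That whole expected-value test fits in a few lines; every number below is a placeholder to replace with your own:

```python
def rebate_worth(rebate_amount, completion_rate=0.8, payout_rate=0.9,
                 hourly_value=40.0, minutes_of_work=20):
    """Expected net value of chasing a mail-in rebate.

    completion_rate: how often you actually get around to filing.
    payout_rate:     how often a filed rebate actually pays out.
    hourly_value:    your time in $/hour (salary / ~2000-2500 hours).
    minutes_of_work: forms, UPC cutting, post office, bank, tracking.
    """
    expected_cash = rebate_amount * completion_rate * payout_rate
    time_cost = hourly_value * minutes_of_work / 60.0
    return expected_cash - time_cost

print(rebate_worth(30))   # ->  8.27: marginally worth it
print(rebate_worth(10))   # -> -6.13: look for a non-rebated deal instead
```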

Of course, you may not even agree with the company’s original goal — to find a way to charge more to people who value time over money, and thus less to those who value money over time. It is interesting, however, to speculate on what other systems might be devised to reach this goal that are not so random and bureaucratic as the rebate system. For example, use of the web only became practical once you could presume the “money > time” crowd had web access — a system that allows discounts only for the rich is not going to be very effective. I am interested in alternative ideas.

One might be to offer the rebates to those who agree to take a web journey that exposes them to advertising. This both assures they value money over time and actually sells that attention. A web process, at the end of which you are paid by paypal, could be highly reliable without the “lottery” factor. Vendors could even start including tokens in products with one-time-use numbers on them, which people could type in rather than having to mail physical UPC codes. (However, the mailing of the UPC code, aside from adding work and cost to the process, is also important for disallowing returns of products after rebates are filed. Stores would need to check that the token number was present, and not used, before doing a return.) Stores could also print a similar magic number on sales receipts.

The work associated with the logistics of rebates can’t be eliminated by the web, though. The goal, after all, is to make the process time consuming, so you can only shift work from one place to another. But it can be made less random, which would actually encourage more people to buy rebated products, since they could trust that offering up their time and attention will actually pay.

A linux distro for making digital picture frames

I’ve thought digital picture frames were a nice idea for a while, but have not yet bought one. The early generation was vastly overpriced, and the current cheaper generation still typically offers only 640x480 resolution. I spend a lot to produce quality, high-res photography, and while even a megapixel frame would show only a small part of my available resolution, 1/4 megapixel is just ridiculous.

I’ve written before that I think a great product would be a flat panel that comes with (or can accept) a module providing 802.11 and a simple protocol for remote computers to display stuff on it. Alternately, I have wished for a simple and cheap internet appliance featuring 802.11 and a VGA output to do the job. 1280x1024 flat panels now sell for under $150, and it would not take much in the way of added electronics to turn them into an 802.11 or even USB-stick/flash-card based digital photo frame with 4 times the resolution of the similarly priced dedicated frames.

One answer many people have tried is to convert an old laptop to a digital photo frame. 800x600 laptops are dirt cheap, and in fact I have some that are too slow to use for much else. 1024x768 laptops can also be had for very low prices on ebay, especially if you will take a “broken” one that’s not broken when it comes to being a frame — for example if it’s missing the hard disk, or the screen hinges (but not the screen) are broken. A web search will find you several tutorials on converting a laptop.

To make it really easy, what would be great is a ready to go small linux distribution aimed at this purpose. Insert a CD or flash card with the distribution on it and be ready to go as a picture frame.

Ideally, this distro would be set to run without a hard disk. You don’t want to spin the hard disk since that makes noise and generates heat. Some laptops won’t boot from USB or flash, so you might need a working hard drive to get booted, but ideally you would unmount it and spin it down after booting.

Having a flash drive is possible with just about all laptops, because PCMCIA compact flash adapters can be had for under $10. Laptops with USB can use cheaply available thumb-drives. PCMCIA USB adapters are also about $10, but beware that really old laptops won’t take the newer-generation “cardbus” models.

While some people like to put pictures into the frame using a flash card or stick, and this can be useful, I think the ideal way to do it is to use 802.11, especially for the grandmother market. One of the interesting early digital picture frames had a phone plug on it. The frame would dial out by modem to download new pictures that you uploaded to the vendor’s site. The result was that grandma could see new pictures on a regular basis without doing anything. The downside was this meant an annoying monthly fee to cover the modem costs.

But today 802.11 is getting very common. Indeed, even if grandma is completely internet-phobic, there’s probably a neighbour’s 802.11 visible in her house, and what neighbour would not be willing to give permission for a function such as this? Then the box can be programmed to download and display photos from any typical photo web site, and family members can quickly upload or email photos to that web site.

Of course, if there is no 802.11 then flash is the way to do it. USB sticks are ideal, as they are cheap and easy to insert and remove, even for the computer-phobic. I doubt you really want to just insert a card straight out of a camera; people want to prepare their slideshows. (In particular, you want to pre-scale the images down to screen size for quick display and to fit many more in the memory.) 800x600 pictures are in fact so small — 50kb can be enough — that you could even build the frame with no flash, just an all-RAM linux that loads from flash, CD or spun-down hard drive, keeps 100 photos in spare RAM, and sucks down new ones over the network as needed. This mode eliminates the need for worrying about drivers for flash or USB. The linux would run in frame-buffer mode; there would be no X server needed.
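A minimal sketch of that all-RAM, framebuffer-only main loop, assuming the `fbi` framebuffer image viewer is installed and that new photos are listed in a plain-text index at a known URL (the URL and index format are my inventions):

```python
#!/usr/bin/env python
# Sketch of the frame's fetch-and-display loop.  Assumes a console/framebuffer
# linux with `fbi` installed; INDEX_URL and its plain-text format are invented.
import os, subprocess, time, urllib.request

INDEX_URL = ""  # hypothetical
CACHE_DIR = "/tmp/frame"          # tmpfs: photos live in RAM, disk stays spun down
SECONDS_PER_PHOTO = 15

os.makedirs(CACHE_DIR, exist_ok=True)

def sync_photos():
    """Fetch any photos we haven't cached; the server should pre-scale
    them to screen size, so 800x600 images run ~50 KB each."""
    with urllib.request.urlopen(INDEX_URL) as f:
        urls = f.read().decode().split()
    for url in urls:
        dest = os.path.join(CACHE_DIR, os.path.basename(url))
        if not os.path.exists(dest):
            urllib.request.urlretrieve(url, dest)

while True:
    try:
        sync_photos()
    except OSError:
        pass   # network down: keep showing the photos we already have
    photos = sorted(os.listdir(CACHE_DIR))
    if photos:
        # -1: one pass through the list, then return here to re-sync
        subprocess.run(["fbi", "-a", "-1", "-t", str(SECONDS_PER_PHOTO), "--noverbose"]
                       + [os.path.join(CACHE_DIR, p) for p in photos])
    else:
        time.sleep(60)
```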

The key factor is that the gift giver prepares the box and mounts it on the wall, plugged in. After that the recipient need do nothing but look at it, while new photos arrive from time to time. While remote controls are nice (and can be done on the many laptops that feature infrared sensors) the zero-user-interface (ZUI) approach does wonders with certain markets.

Update: I’ve noticed that adapters for laptop mini-IDE to compact flash are under $10. So you can take any laptop that’s missing a drive and insert a flash card as the drive, with no worries about whether you can boot from a removable device. You might still want an external flash card slot if it’s not going to be wifi, but you can get a silent computer easily and cheaply this way. (Flash disk is slower than HDD to read, but has no seek time.)

Even for the builder the task could be very simple.

  • Unscrew or break the hinges to fold the screen against the bottom of the laptop (with possible spacer for heat)
  • Install, if needed, 802.11 card, USB card or flash slot and flash — or flash IDE.
  • Install linux distro onto hard disk, CD or flash
  • Configure by listing web URL where new photo information will be found, plus URL for parameters such as speed of slideshow, fade modes etc.
  • Configure 802.11 parameters
  • Put it in a deep picture frame
  • Set BIOS to auto turn-on after power failure if possible
  • Mount on wall or table and plug in.

Another war tragedy -- the solar opportunity in Iraq

While I’ve written before about the trouble in making solar competitive with grid power, this is not true when the grid is being blown up by guerrilla fighters on a regular basis. Over the past couple of years, Bechtel has been paid over 2 billion dollars, mostly to try to rebuild the Iraq electrical infrastructure. Perhaps it’s not their fault that power is only on in Baghdad for 2 hours a day after these billions have been spent — but there might have been a better way.

Imagine if those billions had been directed at building a solar power system, with a lower-power grid for night power. They would have provided major stimulus to the solar industry, of course, and helped the companies that are working at making PV cost-effective. But they also would have created a power infrastructure that was much harder to destroy in a civil war. Yes, fighters might take down sections of the grid, but these would only have been there for night and brownout power. Without them, people would still have had more power, and not just during the day. Mini “neighbourhood grid” systems could allow small areas to have backup diesel generators. Not quite as efficient as the big generators, but much more difficult to take down. The “value” targets would still see their local panels and generators under attack, but that’s the way of it.

It seems odd to think of this in a country with so much oil. But doing this would have also had a major effect on greenhouse gas emissions. Putting solar into Iraq would have made the US responsible for major emission cuts. Cutting emissions there so we don’t have to cut them here.

Something to think about next time your country goes and destroys a foreign country’s power grid and then works to rebuild it. (Of course, ideally that’s never.)

Online shopping -- set when you need to get it.

I was seduced by Google’s bribe of $20 per $50-or-greater order to try their new Checkout service, and did some Christmas shopping at an online store. Normally, being based in Southern California, that store takes only 1 or 2 days by UPS ground to get things to me. So ordering last weekend should have been low-risk for items that are “in stock and ship in 1-2 days.” Yes, they cover their asses by putting a longer upper bound on the shipping time, but generally that’s the ship time for people on the other coast.

I got a mail via Google (part of their privacy protection) that the items had been shipped on Tuesday, so all was well. Unfortunately, I didn’t go and immediately check the tracking info. The new interface with Google Checkout makes that harder to do — normally you can just go to the account page on most online stores and follow links directly to tracking. Here the interface requires you to cut and paste order numbers, and it’s buggy, reporting incorrect shipper names.

Unfortunately, it’s becoming common for online stores to keep things in different warehouses around the country. Some items I ordered, it turns out, while shipped quickly, were shipped from far away. They’ll arrive after Christmas. So now I have to go out and buy the items at stores, or different items in some cases, at higher prices, without the seductive $20 discount — and I then need to arrange return of the ordered items after they get here. And I’ll probably be out not only the money I paid for shipping (had I wanted them after Christmas I would have selected the free saver-shipping option, of course) but presumably return shipping too.

A very unsatisfactory shopping experience.

How could this have been improved (other than by getting the items to me?)

  1. When they e-mail you about shipment, throw in a tracking link and also include the shipper’s expected delivery day. UPS and Fedex both give that, and even with the USPS you can provide decent estimates.
  2. Let me specify in the order, “I need this by Dec 23.” They might be able to say right then and there that “This item is in stock far away. You need to specify air shipping to do that.”
  3. Failing that, they could, when they finally get ready to ship it, look at what the arrival date will be, and, if you’ve set a drop-dead date, cancel the shipment if it won’t get to you on time. Yes, they lose a sale but they avoid a very disappointed customer.
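The third suggestion is simple enough to sketch; the transit-time figure would come from the shipper's lookup, and everything here is illustrative:

```python
import datetime

def ok_to_ship(need_by, transit_days, ship_date):
    """Suggestion 3 above: when the warehouse finally gets ready to ship,
    compare the carrier's transit estimate against the customer's
    drop-dead date, and cancel rather than ship too late.
    (transit_days would come from the UPS/FedEx transit-time lookup;
    here it is simply passed in.)"""
    eta = ship_date + datetime.timedelta(days=transit_days)
    return eta <= need_by

# "I need this by Dec 23"; the far-away warehouse gets to it on Dec 20.
print(ok_to_ship(, 12, 23),
                 transit_days=5,
                 ship_date=, 12, 20)))   # False -> cancel or upgrade to air
```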

This does not just apply around Christmas. I often go on trips, and know I won’t be home on certain days. I may want to delay delivery of items around such days.

As I blogged earlier, it also would simplify things a lot if you could use the tracking interface of UPS, Fedex and the rest to reject or divert shipments in transit. If I could say “Return to sender” via the web on a shipment I know is a waste of time, the vendor wins, I win, and even the shipping company can probably set a price for this where they win too. The recipient saves a lot of hassle, and the vendor can also be assured the item has not been opened and quickly restock it as new merchandise. If you do a manual return they have to inspect, and even worry about people who re-shrinkwrap returns to cheat them.

Another issue that will no doubt come up — the Google discount was $20 off orders of $50 or more. If I return only some of the items, will they want to charge me the $20? In that case, you might find yourself in a situation where returning an item below $20 would cost you money! In this case I need to return the entire order except one $5 item I tossed on the order, so it won’t be an issue.

Jolly December to all. (Jolly December is my proposal for the Pastafarian year-end holiday greeting, a good salvo in the war on Christmas. If they’re going to invent a war on Christmas, might as well have one.)

More on finding the lost

Last week, I wrote about new ideas for finding the lost. One I’ve done some follow-up on is the cell phone approach. While it’s not hard to design a good emergency rescue radio if you are going to explicitly carry a rescue device when you get lost, the key to cell phones is that people are already carrying them without thinking about it — even when going places with no cell reception since they want the phone with them when they return to reception.

Earlier I proposed a picocell to be mounted in a light plane (or even drone) that would fly over the search area and try to ping the phone and determine where it is. That would work with today’s phones. It might have found the 3 climbers, now presumed dead, on Mt. Hood because one of them definitely had a cell phone. It would also have found James Kim because they had a car battery, on which a cell phone can run for a long time.

My expanded proposal is for a deliberate emergency rescue mode on cell phones. It’s mostly software (and thus not expensive to add) but people would even pay for it. You could explicitly put your phone into emergency rescue mode, or have it automatically enter it if it’s out of range for a long time. (For privacy reasons you would want to be able to disable any automatic entry into such a mode, or at least be warned about it.)

What you do in this mode depends on how accurate a clock you have. Many modern phones have a very accurate clock, either from the last time they saw the cell network, or from GPS receivers inside the phone. If you have an accurate clock, then you can arrange to wake up and listen for signals from rescue planes at very precise times, and the planes will know those times exactly as well. So you can be off most of the time and thus do this with very low power consumption. It need not be a plane — it’s not out of the question to have a system with a highly directional antenna at some high point that can scan the area.

If you don’t know the exact time, you can still listen at intervals while you have power. As your battery dies, the intervals between wakeups have to get longer. Once they get down to long periods like hours, the rescue crews can’t tell exactly when you will transmit and just have to run all the time.
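A sketch of that power budgeting; all the energy figures are made-up placeholders, since a real phone would know its own numbers:

```python
def listen_interval(battery_mah, days_to_survive, wake_cost_mah=0.5,
                    min_interval_s=60, max_interval_s=4 * 3600):
    """Seconds between listen windows, spreading a fixed energy budget
    over the days the owner says they can hold out.  The energy costs
    here are placeholders, not real phone figures."""
    wakes_affordable = battery_mah / wake_cost_mah
    total_seconds = days_to_survive * 24 * 3600
    interval = total_seconds / max(wakes_affordable, 1)
    return min(max(interval, min_interval_s), max_interval_s)

print(listen_interval(900, 3))   # fresh battery: listen every 144 seconds
print(listen_interval(25, 5))    # nearly dead, 5-day budget: every 8640 s (~2.4 h)
```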

If you know the exact time a phone will be on, you can even pull tricks like have other transmitters cut out briefly at that time (most protocols can tolerate sub-second outages) to make the radio spectrum quieter.

At first, you can actually listen quite often. The owner of the phone, if conscious, might even make the grim evaluation of how long they can hold out and tell the phone to budget power for that many days.

When the phone hears the emergency ping (which quite possibly will be at above-normal power) it can also respond at above normal power, if it feels it has the power budget for it. It can also beep to the owner to get input on that question. (Making the searcher’s ping more powerful can actually be counterproductive as it could make the phone respond when it can’t possibly be received. The ping could indicate what its transmit power was, allowing the phone to judge whether its signal could possibly make it back to a good receiver.)

Of course, if the phone has a GPS, once it does sync up with the picocell it could provide its exact location. Otherwise it could do a series of blips to allow direction finding or fly-over signal-strength location of the phone.

In most cases, if we know who the missing person is we’ll know their cell phone number, and thus their phone carrier and in most cases the model of phone they have. So searchers would know exactly what to look for, and whether the phone supports any emergency protocol or just has to be searched for with standard tech.

I’ve brought some of these ideas up with friends at Qualcomm. We’ll see if something can come of it.

Update: Lucent does have a picocell that was deployed in some rescue operations in New Orleans. Here’s a message discussing it.

A real life Newcomb's Paradox

This week I participated in this thread on Newcomb’s Paradox, which was noted on BoingBoing.

The paradox:

A highly superior being from another part of the galaxy presents you with two boxes, one open and one closed. In the open box there is a thousand-dollar bill. In the closed box there is either one million dollars or there is nothing. You are to choose between taking both boxes or taking the closed box only. But there’s a catch.

The being claims that he is able to predict what any human being will decide to do. If he predicted you would take only the closed box, then he placed a million dollars in it. But if he predicted you would take both boxes, he left the closed box empty. Furthermore, he has run this experiment with 999 people before, and has been right every time.

What do you do?

A short version of my answer: The paradox confuses people because it stipulates you are a highly predictable being to the alien, then asks you to make a choice. But in fact you don’t make a choice, you are a choice. Your choice derives from who you are, not the logic you go through before the alien. The alien’s power dictates that you already either are or aren’t the sort of person who picks one box or two; in fact the alien is the one who made the choice based on that — you just imagine you could do differently than predicted.

Those who argue that since the money is already in the boxes, you should always take both miss the point of the paradox. That view is logically correct, but those who hold that view will not become millionaires, and this was set by the fact they hold the view. It isn’t that there’s no way the contents of the boxes can change because of your choice, it’s that there isn’t a million there if you’re going to think that way.

Of course people don’t like that premise of predictability and thus, as you will see in the thread, get very involved in the problem.

In thinking about this, it came to me that the alien is not so hypothetical. As you may know from reading this blog, I was once administered Versed, a sedative that also blocks your ability to form long term memories. I remember the injection, but not the things I said and did afterwards.

In my experiment we recruit subjects to test the paradox. They come in and an IV drip is installed, though they are not told about Versed. (Some people are not completely affected by Versed but assume our subjects are.) We ask subjects to give a deliberated answer, not to just try to be random, flip a coin or whatever.

So we administer the drug and present the problem, and see what you do. The boxes are both empty — you won’t remember that we cheated you. We do it a few times if necessary to see how consistent you are. I expect that most people would be highly consistent, but I think it would be a very interesting thing to research! If a few are not consistent, I suspect they may be deliberately being random, but again it would be interesting to find out why.

We videotape the final session, where there is money in the boxes. (Probably not a million, we can’t quite afford that.) Hypothetically, it would be even better to find another drug that has the same sedative effects of Versed so you can’t tell it apart and don’t reason differently under it, but which allows you to remember the final session — the one where, I suspect, we almost invariably get it right.

Each time you do it, however, you think you’re doing it for the first time. And at first you probably (and correctly) won’t want to believe in our amazing predictive powers. There is no such alien, after all. That’s where it becomes important to videotape the last session or, even better, have a way to let you remember it. Then we can have auditors you trust completely audit the experimenter’s remarkable accuracy (on the final round). We don’t really have to lie to the auditors; they can know how we do it. We just need a way for them to swear truthfully that on the final round we are very, very accurate, without conveying to the subject that there are early, unremembered rounds where we are not accurate. Alas, we can’t do that for the initial subjects — another reason we can’t put a million in.

Still, I suspect that most people would be fairly predictable and that many would find this extremely disturbing. We don’t like determinism in any form. Certainly there are many choices that we imagine as choices but which are very predictable. Unless you are bi, you might imagine you are choosing the sex of your sexual partners — that you could, if it were important, choose differently — but in fact you always choose the same.

What I think is that having your choices be inherent in your makeup is not necessarily a contradiction to the concept of free will. You have a will, and you are free to exercise it, but in many cases that will is more a statement about who you are than what you’re thinking at the time. The will was exercised in the past, in making you the sort of mind you are. It’s still your will, your choices. In the same way I think that entirely deterministic computers can also make choices and have free will. Yes, their choices are entirely the result of their makeup. But if they rate being an “actor” then the choices are theirs, even if the makeup’s initial conditions came from a creator. We are created by our parents and environment (and some think by a deity) but that’s just the initial conditions. Quickly we become something unto ourselves, even if there is only one way we could have done that. We are not un-free, we just are what we are.

Fixing upgrades -- a database recording ease-of-upgrade

I’ve been writing recently about the linux upgrade nightmares that continue to trouble the world. The next in my series of ideas is a suggestion that we try to measure how well upgrades go, and make a database of results available.

Millions of people are upgrading packages every day. And it usually goes smoothly. However, when it doesn’t, it would be nice if that were recorded and shared. Over time, one could develop an idea of which upgrades are safer than others. Thus, when it’s time to upgrade many packages, the system could know which ones always go well, and which ones might deserve a warning, or should only be done if you don’t have something critical coming up that day.

We already know some of these. Major packages like Apache are often a chore, though they’ve gotten a lot better by adopting a philosophy of configuration files I heartily approve of — dividing up configuration so that config by different people goes in different files.

Some detection is automated. For example, the package tools detect if a configuration file is being upgraded after it’s been changed and offer the user a chance to keep the new one, their old one, or hand-mix them. What choice the user makes could be noted to measure how well the upgrades go. Frankly, any upgrade that even presents the user with questions should get some minor points against it, but if a user has to do a hand merge it should get lots of negative points.

Upgrades that got no complaint should be recorded, and upgrades that get an explicit positive comment (i.e. the user actively says it went great) should also be noted. Of course, any time a user leaves an explicit negative comment, that’s the most useful info of all. Users should be able to browse a nice GUI of all their recent upgrades — even months later — and make notes on how well things are going. If you discover something broken, it should be easy to make the report.

Then, when it comes time to do a big upgrade, such as a distribution upgrade, certain of the upgrades can be branded as very, very safe, and others as more risky. In fact, users could elect to just do only the safe ones. Or they could even elect to automatically do safe upgrades, particularly if there are lots of safety reports on their exact conditions (former and current version, dependencies in place.) Automatic upgrading is normally a risky thing, it can generate the risk of a problem accidentally spreading like wildfire, but once you have lots of reports about how safe it is, you can make it more and more automatic.

Thus the process might start with upgrading the 80% of packages that are safe, and then the 15% that are mostly safe. Then allocate some time and get ready for the ones that probably will involve some risk or work. Of course, if everything depends on a risky change (such as a new libc) you can’t get that order, but you can still improve things.
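A sketch of how such a database might be aggregated and used to plan an upgrade; the outcome names, thresholds and sample data are all invented:

```python
from collections import Counter

def safety(reports):
    """Score one upgrade path (package, from-version, to-version) from
    its Counter of outcomes, crudely Laplace-smoothed into [0, 1]."""
    good = reports["silent"] + reports["explicit_ok"]
    bad = reports["asked_question"] + reports["hand_merge"] + reports["explicit_broken"]
    return (good + 1) / (good + bad + 2)

def plan(pending, db, safe=0.95, risky=0.6):
    """Split pending upgrades into the tiers described above."""
    tiers = {"safe": [], "mostly_safe": [], "needs_attention": []}
    for pkg in pending:
        s = safety(db.get(pkg, Counter()))
        tier = "safe" if s >= safe else "mostly_safe" if s >= risky else "needs_attention"
        tiers[tier].append((pkg, round(s, 2)))
    return tiers

db = {("apache2", "2.0", "2.2"): Counter(silent=60, hand_merge=40),
      ("less", "381", "382"): Counter(silent=5000)}
print(plan(db.keys(), db))
# {'safe': [(('less', '381', '382'), 1.0)], 'mostly_safe': [],
#  'needs_attention': [(('apache2', '2.0', '2.2'), 0.6)]}
```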

There is a risk of people gaming the database, though in non-commercial environments that is hopefully small. It may be necessary to have reporters use IDs that get reputations. For privacy reasons, however, you want to anonymize data after verifying it.

Towards a Zero User Interface backup system

I’ve spoken before about ZUI (Zero User Interface) and how often it’s the right interface.

One important system that often has too complex a UI is backup. Because of that, backups often don’t get done. In particular offsite backups, which are the only way to deal with fire and similar catastrophe.

Here’s a rough design for a ZUI offsite backup. The only UI at a basic level is just installing and enabling it — and choosing a good password (that’s not quite zero UI but it’s pretty limited.)

Once enabled, the backup system will query a central server to start looking for backup buddies. It will be particularly interested in buddies on your same LAN (though it will not consider them offsite.) It will also look for buddies on the same ISP or otherwise close by, network-topology wise. For potential buddies, it will introduce the two of you and let you do bandwidth tests to measure your bandwidth.

At night, the tool would wait for your machine and network to go quiet, and likewise the buddy’s machines. It would then do incremental backups over the network. These would be encrypted with secure keys. Those secure keys would in turn be stored on your own machine (in the clear) and on a central server (encrypted by your password.)

The backup would be clever. It would identify files on your system which are common around the network — ie. files of the OS and installed software packages — and know it doesn’t have to back them up directly, it just has to record their presence and the fact that they exist in many places. It only has to transfer your own created files.

Your backups are sent, compressed, to two or more different buddies each. Regular checks are done to see if a buddy is still around; if a buddy leaves the net, the system quickly finds other buddies to store data on. Alas, some files, like video, images and music, are already compressed, so this means twice as much storage is needed for backup as the files took — though only for your own generated files. And you need a disk about three times bigger than your data, because you must store data for the buddies just as they are storing for you. But disk is getting very cheap.

(Another alternative is RAID-5 style. Here you distribute each file across 3 or more buddies using a RAID-5-style parity scheme, so that any one buddy can vanish and you can still recover the file. This means you may be able to get away with much less excess disk space. There are also redundant storage algorithms that let you tolerate the loss of 2 or even 3 of a larger pool of storers, at a much more modest cost than using double the space.)
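For the three-buddy case, the parity idea is only a few lines; this is a sketch of the concept, not a real erasure-coding library:

```python
def split_with_parity(data: bytes):
    """RAID-5-style split for three buddies: two data halves plus an
    XOR parity block.  Any single buddy can vanish and the file is
    still recoverable, at 1.5x storage instead of 2x."""
    if len(data) % 2:
        data += b"\x00"                      # pad to even length
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity

def recover(a, b, parity, original_len):
    """Rebuild the file from any two of the three pieces."""
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, parity))
    if b is None:
        b = bytes(x ^ y for x, y in zip(a, parity))
    return (a + b)[:original_len]

data = b"back me up please"
a, b, p = split_with_parity(data)
assert recover(None, b, p, len(data)) == data   # buddy holding `a` vanished
assert recover(a, None, p, len(data)) == data   # buddy holding `b` vanished
```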

All this is, as noted, automatic. You don’t have to do anything to make it happen, and if it’s good at spotting quiet times on the system and network, you don’t even notice it’s happening, except a lot more of your disk is used up storing data for others.

It is the automated nature that is so important. There have been other proposals along these lines, such as MNET and some commercial network backup apps, but never an app you just install, do quick setup and then forget about until you need to restore a file. Only such an app will truly get used and work for the user.

Restore of individual files (if your system is still alive) is easy. You have the keys on file, and can pull your file from the buddies and decrypt it with the keys.

Loss of a local disk is more work, but if you have multiple computers in the household, the keys could be stored on other computers on the same LAN (alas, this does require UI to approve) and then you can go to another computer to get the keys to rebuild the lost disk. Indeed, using local computers as buddies is a good idea due to speed, but they don’t provide offsite backup. It would make sense for the system, at the cost of more disk space, to do both same-LAN backup and offsite: same-LAN for hardware failures, offsite for building-burns-down failures.

In the event of a building-burns-down failure, you would have to go to the central server and decrypt your keys with that password. Then you can get your keys, find your buddies and restore your files. Restore would not be ZUI, because we need no motivation to restore. It is doing regular backups we lack motivation for.

Of course, many people have huge files on disk. This is particularly true if you do things like record video with MythTV or make giant photographs, as I do. This may be too large for backup over the internet.

In this case, the right thing to do is to backup the smaller files first, and have some UI. This UI would warn the user about this, and suggest options. One option is to not back up things like recorded video. Another is to rely only on local backup if it’s available. Finally, the system should offer a manual backup of the large files, where you connect a removable disk (USB disk for example) and transfer the largest files to it. It is up to you to take that offsite on a regular basis if you can.

However, while this has a UI and physical tasks to do, if you don’t do it it’s not the end of the world. Indeed, your large files may get backed up, slowly, if there’s enough bandwidth.

Something isn't CLEAR about airport line-jumping program

A new program has appeared at San Jose Airport, and a few other airports like Orlando. It’s called “Clear” and is largely the product of a private company of the same name. But something smells very wrong.

To get the Clear card, you hand over $99/year. The private company keeps 90% and the TSA gets the small remainder. You then have to provide a fingerprint, an iris scan and your SSN, among other things.

What do you get for this? You get to go to the front of the security line, past all the hoi polloi. But that’s it. Once at the front of the line, you still go through the security scan the same as anybody else. Which is, actually, the right thing to do since “trusted traveller” programs which actually let you bypass the security procedure are in fact bad for security compared to random screening.

But what doesn’t make sense is — why all the background checks and biometrics just to go to the head of the line? Why wouldn’t an ordinary photo ID card work? It doesn’t matter who you are. You could be Usama bin Ladin because all you did was not wait in line.

So what gives? Is this just an end run to get people more used to handing over fingerprints and other information as a natural consequence of flying? Is it a plan to change the program later to one that lets the “clear” people actually avoid being x-rayed? As it stands, it certainly makes no sense.

Note that it’s not paying to get to the front of the line that makes no sense, though it’s debatable why the government should be selling such privileges. It’s the pointless security check and privacy invasion. For some time United Airlines at their terminal in SFO has had a shorter security line for their frequent flyers. But it doesn’t require any special check on who you are. If you have status or a 1st class ticket, you’re in the short line.

Generic internet appliances

Normally I’m a general-purpose computing guy. I like that the computer that runs my TV with MythTV is a general purpose computer that does far more than a Tivo ever would. My main computer is normally on and ready for me to do a thousand things.

But there is value in specialty internet appliances, especially ones that can be very low power and small. But it doesn’t make sense to have a ton of those either.

I propose a generic internet appliance box. It would be based on the same small single-board computers which run linux that you find in the typical home router and many other small network appliances. It would ideally be so useful that it would be sold in vast quantities, either in its generic form or with minor repurposings.

Here’s what would be in level 1 of the box:

  • A small, single-board linux computer with low power processor such as the ARM
  • Similar RAM and flash to today’s small boxes, enough to run a modest linux.
  • WiFi radio, usually to be a client — but presumably adaptable to make access points (in which case you need ethernet ports, so perhaps not.)
  • USB port
  • Infrared port for remote control or IR keyboard (optionally a USB add-on)

Optional features would include:

  • Audio output with low-fi speaker
  • Small LCD panel
  • DVI output for flat panel display
  • 3 or 4 buttons arranged next to the LCD panel

The USB port on the basic unit provides a handy way to configure the box. On a full PC, write a thumb-drive with the needed configuration (in particular WiFi encryption keys) and then move the thumb drive to the unit. Thumb drives can also provide a complete filesystem, software or can contain photo slide shows in the version with the video output. Thumb drives could in fact contain entire applications, so you insert one and it copies the app to the box’s flash to give it a personality.
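A sketch of how the box might pick up that configuration on boot; the file name, format and paths are all my inventions:

```python
import configparser, os, shutil

USB_MOUNT = "/mnt/usb"          # where the box auto-mounts a thumb drive
CONF_NAME = "appliance.conf"    # name and format are inventions for this sketch

# A drive might carry something like:
#   [wifi]
#   ssid = HomeNet
#   psk  = secret-passphrase
#   [app]
#   personality = picture_frame
#   url =

def apply_usb_config():
    """If a thumb drive with a config file is present, persist its
    settings (and any bundled application "personality") to flash."""
    src = os.path.join(USB_MOUNT, CONF_NAME)
    if not os.path.exists(src):
        return None
    conf = configparser.ConfigParser()   # parse it so the box can act on it now
    shutil.copy(src, "/etc/appliance.conf")            # persist to flash
    app_dir = os.path.join(USB_MOUNT, "personality")
    if os.path.isdir(app_dir):                         # whole app on the stick
        shutil.copytree(app_dir, "/opt/personality", dirs_exist_ok=True)
    return conf
```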

Here are some useful applications:

  • In many towns, you can see over the internet when a bus or train will arrive at your stop. Program the appliance with your stop and how long it takes to walk there after a warning. Press a button when you want to leave, and the box announces over the speaker a countdown of when to go to meet the transit perfectly (sketched just after this list).
  • Email notifier
  • MP3 output to stereo or digital speakers
  • File server (USB connect to external drives — may require full ethernet.)
  • VOIP phone system speakerphone/ringer/announcer
  • Printer server for USB printers
  • Household controller interface (X10, thermostat control, etc.)
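As an example, the transit notifier in the first item above might be little more than this; the prediction feed URL and its JSON format are invented for illustration:

```python
import json, time, urllib.request

FEED = ""  # hypothetical
WALK_MINUTES = 6    # how long the walk to the stop takes

def announce(text):
    print(text)     # on the real box: text-to-speech out the low-fi speaker

def wait_for_bus():
    while True:
        with urllib.request.urlopen(FEED) as f:
            eta = json.load(f)["minutes_until_arrival"]
        if eta - WALK_MINUTES <= 0:
            announce("Leave now to catch the bus!")
            return
        announce(f"Bus in {eta} minutes; leave in {eta - WALK_MINUTES}.")
        time.sleep(30)

wait_for_bus()
```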

Slap one on the back of a cheap flat panel display mounted on the wall, connected with a video cable. Now offer a vast array of applications, such as:

  • Slide show
  • Security video (low-res unless there is an mpeg decoder in the box.)
  • Weather/News/Traffic updates
  • With an infrared keyboard, be a complete terminal to other computer apps and a minimal web browser.

There are many more applications people can dream up. The idea is that one cheap box can do all these things, and since it could be made in serious quantities, it could end up cheaper than the slightly more specialized boxes, which themselves retail for well under $50 today. Indeed today’s USB printer servers turn out to be pretty close to this box.

The goal is to get these out and let people dream up the applications.

A first solution to linux dependencies part 2 -- yes, service packs

Last week I wrote about linux’s problems with dependencies and upgrades and promised some suggestions this week.

There are a couple of ideas to be stolen (sacrilege!) from Windows which could be a start, though they aren’t my long-term solution.

Microsoft takes a different approach to updates, which consists of little patches and big service packs. The service packs integrate a lot of changes, including major changes, into one upgrade. They are not very frequent, and in some ways akin to the major distribution releases of systems like Ubuntu (but not its parent Debian), Fedora Core and SuSE.

Installing a service pack is certainly not without risks, but the very particular combination of new libraries and changed apps in a service pack is extensively tested together, as is also the case for a major revision of a linux distribution. Generally, installing one of these packs has been a safe procedure. Most windows programs also do not use hand-edited configuration files for local changes, and so suffer far less from the upgrade problems associated with this particular technique.

Let the world search for the lost

There is a story that the Ikonos satellite is being redirected to take a high-res shot of the area in Oregon where CNet editor James Kim is missing. That’s good, though sadly too late, but they also report not knowing what to do with the data.

I frankly think that while satellite imagery is good, traditional aerial photography is far better for something like this: it’s higher resolution, higher contrast, can be done under clouds, can be shot at angles other than directly overhead, is generally cheaper, and on top of all this can possibly be done from existing searchplanes.

But what to do with such hi-res data? Load it into a geo-browsing system like Google Earth or Google Maps or Microsoft Live. Let volunteers anywhere in the world comb through the images and look for clues about the missing person or people. Ideally, allow the map to be annotated so that people don’t keep reporting the same clues or get tricked by the same mistakes. (In addition to annotation, you would want to track which areas had been searched the most, and offer people suggested search patterns that cover unsearched territory or special territory of interest.)
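
A minimal sketch of the coverage-tracking half of this: divide the imagery into tiles, count how many volunteers have examined each one, and hand each new volunteer the least-searched tile. The grid size and tile scheme are assumptions for illustration:

```python
import heapq

class CoverageTracker:
    """Track how often each image tile has been examined by volunteers."""

    def __init__(self, rows: int, cols: int):
        # A heap of (view_count, row, col) lets us always pull the least-seen tile.
        self.tiles = [(0, r, c) for r in range(rows) for c in range(cols)]
        heapq.heapify(self.tiles)

    def next_tile(self):
        """Assign the least-viewed tile to the next volunteer."""
        views, r, c = heapq.heappop(self.tiles)
        heapq.heappush(self.tiles, (views + 1, r, c))
        return (r, c)

tracker = CoverageTracker(rows=40, cols=40)  # a 40x40 grid over the search area
print(tracker.next_tile())  # unseen tiles come first, so coverage spreads evenly
```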

These techniques are too late for Kim, but the tools could be ready for the next missing person, so that a plane could be overflying an area on short notice, and the data processed and up within just minutes of upload and stitching.

Right now Google’s tools don’t have any facility for looking at shots from an angle, while Microsoft’s do but without the lovely interface of Keyhole/Google Earth. Angle shots can do things like see under some trees, which could be important. This would be a great public service for some company to do, and might actually make searches far faster and cheaper. Indeed, in time, people who are lost might learn that, if they can’t flash a mirror at a searchplane, they should find a spot with a view of the sky and build some sort of artificial glyph on the ground. If there were a standard glyph, algorithms could even be written to search for it in pictures. With high-res aerial photography the glyph need not be super large.
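
For the glyph idea, a first cut at the automated search could be plain template matching. A toy sketch, assuming the OpenCV library is available and that `aerial.png` and `glyph.png` are stand-in file names (a real searcher would also need to handle rotation and scale):

```python
import cv2  # OpenCV, assumed installed

# Load an aerial frame and the standard glyph template in grayscale.
aerial = cv2.imread("aerial.png", cv2.IMREAD_GRAYSCALE)
glyph = cv2.imread("glyph.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation is fairly robust to overall brightness changes.
scores = cv2.matchTemplate(aerial, glyph, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_location = cv2.minMaxLoc(scores)

if best_score > 0.8:  # threshold is a guess; tune against real imagery
    print("Possible glyph near pixel", best_location, "score", best_score)
```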

Update: It’s also been noted that the Kims had a cell phone, and were found because their phone briefly synced with a remote tower. They could have been found immediately if rescue crews had a small mini-cell base station (for all cell technologies) that could be mounted in a regular airplane and flown over the area. People conserving power might even learn to turn on their cell phone when they hear a plane. (In a car with a car charger, you can leave the phone on.) As soon as the plane gets within a few miles (range is very good for a sky-based antenna) you could just call and ask “where are you?” or, in the sad case where they can’t answer, find the phone with signal strength or direction finding. There are plans to build cell stations to be flown over disaster areas, but this would be just a simple unit able to handle a single call. It could be a good application for software radio, which is able to receive on all bands at once with simple equipment, at a high cost in power. Power is no problem on a plane.
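
The “find it with signal strength” step could be as simple as fitting a path-loss model to readings taken at several points along the flight path. A toy sketch with made-up numbers (real propagation in canyon terrain is far messier):

```python
import math

# (plane_x_km, plane_y_km, received_dBm) along the flight path -- made-up readings.
readings = [(0.0, 0.0, -95.0), (2.0, 0.0, -88.0), (4.0, 0.0, -91.0), (2.0, 2.0, -97.0)]

def predicted_dbm(d_km: float, tx_power=-70.0, path_loss_exp=2.5) -> float:
    """Log-distance path-loss model: signal falls off with the log of distance."""
    return tx_power - 10 * path_loss_exp * math.log10(max(d_km, 0.01))

def locate(readings, grid_km=10.0, step_km=0.1):
    """Grid-search for the phone position that best explains the readings."""
    best, best_err = None, float("inf")
    steps = int(grid_km / step_km)
    for i in range(steps):
        for j in range(steps):
            x, y = i * step_km, j * step_km
            err = sum((dbm - predicted_dbm(math.hypot(x - px, y - py))) ** 2
                      for px, py, dbm in readings)
            if err < best_err:
                best, best_err = (x, y), err
    return best

print("Best guess (km):", locate(readings))
```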

Speaking of rescue, I should describe one of my father’s inventions from the 70s. He designed a very simple “sight” to be placed on a mirror. First you got a mirror (or piece of foil) and punched a hole in it that you could look through. In his fancy version, he had a tube connected to the mirror with wires, but it could be handheld. The tube itself had a smaller exit hole (like a washer glued to the end of a toilet-paper cardboard tube.)

Anyway, you could look through the hole in your mirror, sight the searchplane through the washer in the cardboard tube, and adjust the mirror so the back of the washer is illuminated by the sunlight from the mirror. Thus you could be sure you were flashing sunlight at the plane on a regular basis. He tried to sell the military on putting a folded mirror and sighting tube in soldiers’ rescue kits. You could probably do something with your finger in a pinch: just put your finger next to the plane and move the mirror so your finger lights up. Kim didn’t think of it, but taking one of the mirrors off his car would have been a good idea as he left on his trek.

Avoid thermal printers for long-term uses

We still see a lot of thermal printers out there, particularly for printing labels, receipts and the like. They are cheap, of course, though the paper costs extra, so it's not always a long-term win.

However, I am seeing them used for receipts that people may need some time later, and the problem is that they fade. They definitely fade if you put them in a wallet or anywhere else kept on your body. For my prepaid cell phone in Canada, for example, I need to buy the vouchers in advance so I can refill over the web before I travel back to Canada, and the most recent purchase came on thermal paper that has already partly faded and will soon be unreadable, just 3 weeks after purchase. Fortunately I wrote down the number for protection.

So let's see a move away from thermal printers for receipts. They are OK for mailing labels, which are very short-lived, or for uses that will never see heat or an accidental stint in the sun, but inkjets are so cheap now that there's not much excuse. (Though I realize inkjets have more moving parts.)

I also find for some reason that the thin thermal paper they use at Fry's for their receipts confuses the sheetfed scanner I use to scan receipts. It's not always sure there is paper in the scanner. I suppose that's mostly the scanner's fault, but it wouldn't happen if Fry's used a better paper or process.

The linux package upgrade nightmare, part 1

We all spend far too much of our time doing sysadmin. I’m upgrading and it’s as usual far more work than it should be. I have a long term plan for this but right now I want to talk about one of Linux’s greatest flaws — the dependencies in the major distributions.

When Unix/Linux began, installing free software consisted of downloading it, getting it to compile on your machine, and then installing it, hopefully with its install scripts. This can almost always be made to work, but much can go wrong, and it’s a lot of work and too disconnected a process. Linuxes, starting with Red Hat, moved to the idea of precompiled binary packages and a package manager. That was later developed into an automated system where you can just say “I want package X,” and it downloads and installs that program and everything else it needs to run, all with a single command. When it works, it “just works,” which is great.

When you have a fresh, recent OS, that is. Because when packagers build packages, they usually do so on a recent machine, typically fully updated. And the package tools then decide the new package “depends” on the latest version of all the libraries and other tools it uses. You can’t install it without upgrading all the other tools, if you can do this at all.
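
The effect is easy to illustrate with a toy version check (this is not any real package manager’s code). The packager built on an up-to-date box, so the tools recorded the newest library as the minimum, and a slightly older system is refused even if the program would actually run there:

```python
def parse(version: str):
    """Turn '2.8' into (2, 8) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def satisfies(installed: str, minimum: str) -> bool:
    """True if the installed library meets the package's declared minimum."""
    return parse(installed) >= parse(minimum)

declared_minimum = "2.8"   # recorded at build time on a fully updated machine
your_system_has = "2.5"    # your stable, working library

print(satisfies(your_system_has, declared_minimum))  # False -> forced upgrade
```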

This would make sense if the packages really depended on the very latest libraries. Sometimes they do, but more often they don’t. However, nobody wants to test extensively with old libraries, and serious developers don’t want to run old distributions, so this is what you get.

So as your system ages, if you don’t keep it fully up to date, you run into a serious problem. At first you will find that if you want to install some new software, or upgrade to the latest version to get a fix, you also have to upgrade a lot of other stuff that you don’t know much about. Most of the time, this works. But sometimes the other upgrades are hard, or hit a problem you don’t have time to deal with.

However, as your system ages more, it gets worse. Once you are no longer running the most recent distribution release, nobody is even compiling for your old release any more. If you need the latest release of a program you care about, in order to fix a bug or get a new feature, the package system will no longer help you. Running that new release or program requires a much more serious update of your computer, with major libraries and more — in many ways the entire system. And so you do that, but you need to be careful. This often goes wrong in one way or another, so you must only do it at a time when you would be OK not having your system for a day, and taking a day or more to work on things. No, it doesn’t usually take a day — but it might. And you have to be ready for that rare contingency. Just to get the latest version of a program you care about.

Compare this to Windows. By and large, most binary software packages for windows will install on very old versions of Windows. Quite often they will still run on Windows 95, long ago abandoned by Microsoft. Win98 is still supported. Of late, it has been more common to get packages that insist on 7 year old Windows 2000. It’s fairly rare to get something that insists on 5-year-old Windows XP, except from Microsoft itself, which wants everybody to need to buy upgrades.

Getting a new program for your 5 year old Linux is very unlikely. This is tolerated because Linux is free: there is no financial reason not to have the latest version of any package. Windows coders won’t make their program demand Windows XP because they don’t want to force you to buy a whole new OS just to run their program. Linux coders forget that the price of the OS is often a fairly small part of the cost of an upgrade.

Systems have gotten better at automatic upgrades over time, but still most people I know don’t trust them. Actively used systems acquire bit-rot over time, things start going wrong. If they’re really wrong you fix them, but after a while the legacy problems pile up. In many cases a fresh install is the best solution. Even though a fresh install means a lot of work recreating your old environment. Windows fresh installs are terrible, and only recently got better.

Linux has been much better at the incremental upgrade, but even there fresh installs are called for from time to time. Debian and its children, in theory, should be able to just upgrade forever, but in practice only a few people are that lucky.

One of the big curses (one I hope to have a fix for) is the configuration file. Programs all have their configuration files. However, most software authors pre-load the configuration file with helpful comments and default configurations. The user, after installing, edits the configuration file to get things as they like, either by hand or with a GUI in the program. When a new version of the program comes along, there is a new version of the “default” configuration file, with new comments and new default configuration. Often it’s wrong to keep running with your old file, or doing so will slowly build more bit-rot, so your installation doesn’t operate as nicely as a fresh one. You have to go in and manually merge the two files.

Some of the better software packages have realized they must divide the configuration — and even the comments — made by the package author or the OS distribution editor from the local changes made by the user. Better programs have their configuration file “include” a normally empty local file, or better yet, all files in a local directory. This does not carry over the helpful comments, but it’s a start.
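
In code, the pattern looks something like this: read the package’s shipped defaults first, then overlay every file in a local directory, so the user’s edits live apart from anything the package manager will replace. A minimal sketch using Python’s configparser; the paths are hypothetical:

```python
import glob
from configparser import ConfigParser

config = ConfigParser()

# Shipped by the package; replaced wholesale on upgrade.
config.read("/etc/myapp/myapp.conf")

# Local overrides, never touched by the package manager.
# Later files win, so an upgrade can't clobber the user's settings.
config.read(sorted(glob.glob("/etc/myapp/conf.d/*.conf")))
```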

Unfortunately the programs that do this are few, and so any major upgrade can be scary. And unfortunately, the more you hold off on upgrading the scarier it will be. Most individual package upgrades go smoothly, most of the time. But if you leave it so you need to upgrade 200 packages at once, the odds of some problem that diverts you increase, and eventually they become close to 100%.

Ubuntu, which is probably my favourite distribution, has announced that their “Dapper Drake” distribution, from mid 2006, will be supported for desktop use for 3 years, and 5 years for server use. I presume that means they will keep compiling new packages to run on the older Dapper base, and test all upgrades. This is great, but it’s thanks to the generosity of Mark Shuttleworth, who uses his internet wealth to be a fabulous sugar daddy to the Linux and Ubuntu movements. Already the next release, “Edgy,” is out, and it’s newer and better than Dapper, but with half the support promise. It will be interesting to see what people choose.

When it comes to hardware, Linux is even worse. Each driver works with precisely the one kernel it is compiled for. Woe unto you once you decide to support some non-standard hardware in your Linux box that needs a special driver. Compiling a new driver isn’t hard once, until you realize you must do it all over again any time you would like to slightly upgrade your kernel. Most users simply don’t upgrade their kernels unless they face a screaming need, like fixing a major bug, or buying some new hardware. Linux kernels come out every couple of weeks for the eager, but few are so eager.

As I get older, I find I don’t have the time to compile everything from source, or to sysadmin every piece of software I want to use. I think there are solutions to some of these problems, and a simple first one, an analog of Service Packs, will be talked about in the next installment.

Flying Cars -- Airport Carshare system

Parking at airports seems a terrible waste — expensive parking while your car sits doing nothing. I first started thinking about the various Car Share companies (City CarShare, ZipCar, FlexCar — effectively membership-based hourly car rentals which include gas/insurance and need no human staff) and why one can’t use them from the airport. Of course, airports are full of rental car companies, which is a competitive problem, and parking space there is at a premium.

Right now the CarShare services tend to require round-trip rentals, but for airports the right idea would be one-way rentals: one member drives the car to the airport, and ideally very shortly afterward another member drives it out. In the ideal situation, coordinated by cell phone, the 2nd member is waiting at the curb, and you would just hand off the car once the service confirms their membership for you. (Members use a code or carry a key fob.) Since you would know whether somebody is ready before you entered the airport, you would know whether to go to short-term parking or the curb — or, with a bit more advance notice, to a planned long-term parking lot, allocating the extra time for that.
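
A sketch of that coordination step: members at the airport queue up for cars, and an inbound driver learns, while still on the road, whether to head for the curb or for parking. The class and method names are hypothetical stand-ins for whatever a real service would run:

```python
from collections import deque
from typing import Optional

class AirportHandoff:
    """Pair members dropping cars at the airport with members wanting one."""

    def __init__(self):
        self.waiting_pickups = deque()  # member ids waiting at the curb

    def request_car(self, member_id: str) -> None:
        """A member at the airport asks for the next incoming car."""
        self.waiting_pickups.append(member_id)

    def inbound_car(self, driver_id: str) -> Optional[str]:
        """Called when a driver is minutes out. Returns the member to meet
        at the curb, or None, meaning the driver should park instead."""
        if self.waiting_pickups:
            return self.waiting_pickups.popleft()
        return None

service = AirportHandoff()
service.request_car("member-42")
print(service.inbound_car("member-7"))  # -> member-42 is waiting at the curb
print(service.inbound_car("member-9"))  # -> None: go to short-term parking
```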

Of course the 2nd member might not want to go to the location you got the car from, which creates the one-way rental problem that carshares seem to need to avoid. Perhaps better balancing algorithms could work, or at worst case, the car might have to wait until somebody from your local depot wants to go there. That’s wasteful, though. However, I think this could be made to work as long as the member base is big enough that some member is going in and out of the airport.

I started thinking about something grander, though: being willing to rent your own private car out to bonded members of a true car-sharing service. This is tougher to do but easier to make efficient. The hard part is bonding reliability on the part of all concerned.

Read on for more thinking on it…
