Submitted by brad on Sun, 2007-03-25 13:56.
One of my current peeves is just how much time we spend maintaining and upgrading computer operating systems, even as ordinary users. The workload for this is unacceptably high, though it’s not as though people are unaware of the problem.
Right now I’m updating one system to the beta of the new Ubuntu Feisty Fawn. (Ubuntu is the Linux distro I currently recommend.) They have done some work on building a single upgrader, which is good, but I was shocked to see an old problem resurface. In a 2 hour upgrade process, it asked me questions it didn’t need to ask me, and worse, it asked them at different times in the process.
Submitted by brad on Mon, 2007-03-12 19:08.
I've ranted before about just how hard it has become to configure and administer computers. And there are services where you can hire sysadmins to help you, primarily aimed at novice users.
But we advanced users often need help today, too. Mostly when we run into problems we go to message boards, or do web searches and find advice on what to do. And once we get good on a package we can generally fix problems with it in no time.
I would love a service where I can trade my skill with some packages for help from others on other packages. There are some packages I know well, and could probably install for you or fix for you in a jiffy. Somebody else can do the same favour for me. In both cases we would explain what we did so the other person learned.
All of this would take place remotely, with VNC or ssh. Of course, this opens up a big question about trust. A reputation system would be a big start, but might not be enough. Of course you would want a complete log of all files changed, and how they were changed -- this service might lend itself more to editing scripts than to compiling new binaries. Best of all, you could arrange to have a virtualized version of your machine around for the helper to use. After examining the differences, you could apply them to your real machine. Though in the end, you still need reputations so that people wanting to hack machines would not get into the system. They might have to be vetted as much as any outside consultant you would hire for money.
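For the log of changed files, even a crude before-and-after hash snapshot would go a long way. A rough sketch (the choice of /etc and the wrap-the-session-in-a-prompt approach are just for illustration):

```python
#!/usr/bin/env python3
"""Sketch: record which files a remote helper changed during a session."""
import hashlib, os, sys

def snapshot(root):
    """Map every file under root to a hash of its contents."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    state[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                pass  # unreadable or vanished; skip it
    return state

root = sys.argv[1] if len(sys.argv) > 1 else "/etc"
before = snapshot(root)
input("Snapshot taken -- let the helper work, then press Enter... ")
after = snapshot(root)

for path in sorted(set(before) | set(after)):
    if path not in before:
        print("ADDED   ", path)
    elif path not in after:
        print("REMOVED ", path)
    elif before[path] != after[path]:
        print("CHANGED ", path)
```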
There seems a real efficiency to be had if this could be made to work. How often have you pounded for hours on something that a person skilled with the particular software could fix in minutes? How often could you do the same for others? Indeed, in many cases the person helping you might well be one of the developers of a system, who also would be learning about user problems. (Admittedly those developers would quickly earn enough credit that they would never have to maintain any other part of their own systems themselves.)
The real tool would be truly secure operating systems where you can trust a stranger to work on one component.
Submitted by brad on Sun, 2007-03-04 19:50.
Most of us, when we travel, put appointments we will have while on the road into our calendars. And we usually enter them in local time. ie. if I have a 1pm appointment in New York, I set it for 1pm not 10am in my Pacific home time zone. While some calendar programs let you specify the time zone for an event, most people don't, and many people also don't change the time zone when they cross a border, at least not right away. (I presume that some cell phone PDAs pick up the new time from the cell network and import it into the PDA, if the network provides that.) Many PDAs don't really even let you set the time zone, just the time.
Here's an idea that's simple for the user. Most people put their flights into their calendars. In fact, most of the airline web sites now let you download your flight details right into your calendar. Those flight details include flight times and the airport codes.
So the calendar software should notice the flight, look up the destination airport code, and trigger a time zone change during the flight. This would also let the flight duration look correct in the calendar view window, though it would mean some "days" would be longer than others, and hours would repeat or be missing in the display.
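The lookup itself is easy. Here is a sketch of what the calendar could do; the airport table and the event fields are invented for illustration (a real program would ship a full IATA database):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Tiny illustrative table; a real calendar would carry a full airport database.
AIRPORT_TZ = {
    "SFO": "America/Los_Angeles",
    "JFK": "America/New_York",
    "LHR": "Europe/London",
}

def timezone_switch(flight_event):
    """Return (when, new_zone): at arrival time, the display zone changes."""
    new_zone = ZoneInfo(AIRPORT_TZ[flight_event["arrival_airport"]])
    return flight_event["arrival_time"], new_zone

flight = {
    "departure_airport": "SFO",
    "arrival_airport": "JFK",
    "arrival_time": datetime(2007, 3, 30, 16, 30, tzinfo=ZoneInfo("America/New_York")),
}
when, zone = timezone_switch(flight)
print(f"At {when}, switch displayed times to {zone}")
```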
You could also manually enter magic entries like "TZ to PST" or similar which the calendar could understand as a command to change the zone at that time.
Of course, I could go on many long rants about the things lacking from current calendar software, and perhaps at some point I will, but this one struck me as interesting because, in the downloaded case, the UI for the user is close to invisible, and I always like that.
It becomes important when we start importing our “presence” from our calendar, or getting alerts from our devices about events: we don’t want these things to trigger in the wrong time zone.
Submitted by brad on Wed, 2007-02-14 20:12.
Virtualizer technology, which lets you create a virtual machine in which to run another “guest” operating system on top of your own, seems to have arrived. It’s common for servers (for security) and for testing, as well as things like running Windows on linux or a Mac. There are several good free ones. One, kvm, is built into the latest Linux kernel (2.6.20). Microsoft offers their own.
I propose that when an OS distribution does a major upgrade, it encapsulate your old environment as much as possible in a compressed virtualizer disk image. Then it should allow you to bring up your old environment on demand in a virtual machine. This way you can be confident that you can always get back to programs and files from your old machine — in effect, you are keeping it around, virtualized. If things break, you can see how they broke. In an emergency, you can go back and do things within your old machine. It can also allow you to migrate functions from your old machine to your new one more gradually. Virtual machines can have their own IP address (or even keep the original one). While they can’t access all the hardware, they can do quite a bit.
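As a sketch of what the upgrader might do before replacing the old root (assuming qemu-img and kvm are on hand; device and file names are placeholders):

```python
import subprocess

OLD_ROOT = "/dev/sda1"                     # placeholder: the partition being retired
RAW = "/var/backups/old-root.raw"
IMAGE = "/var/backups/old-root.qcow2"

# 1. Copy the old root filesystem block for block.
subprocess.run(["dd", f"if={OLD_ROOT}", f"of={RAW}", "bs=4M"], check=True)

# 2. Convert it to a compressed qcow2 image the virtualizer can boot later.
subprocess.run(["qemu-img", "convert", "-c", "-O", "qcow2", RAW, IMAGE], check=True)

# 3. On demand, bring the old environment back up in a VM, e.g.:
#    kvm -m 512 -hda /var/backups/old-root.qcow2
```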
Of course this takes lots of disk space, but disk space is cheap, and the core of an OS (ie. not including personal user files like photo archives and videos) is usually only a few gigabytes — peanuts by today’s standards. There is a risk here, that if you run the old system and give it access to those personal files (for example run your photo organizer) you could do some damage. OSs don’t do a great job of dividing “your” files for OS and program config from “your” large data repositories. One could imagine an overlay filesystem which can only read the real files, and puts any writes into an overlay only seen by the virtual mount.
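Something like an overlay mount could give that read-only protection. A sketch using the overlayfs found in newer Linux kernels (directories are placeholders; any union filesystem the virtualizer supports would do):

```python
import subprocess

# Reads fall through to the real photo archive; writes land in a scratch overlay
# that only the virtual machine ever sees.
subprocess.run([
    "mount", "-t", "overlay", "overlay",
    "-o", "lowerdir=/home/brad/photos,upperdir=/vm/overlay/upper,workdir=/vm/overlay/work",
    "/vm/export/photos",   # this merged view is what gets shared into the VM
], check=True)
```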
One can also do it the other way — run the new OS in the virtual machine until you have it tested and working, and then “flip the switch” to make the new OS be native and the old OS be virtual at the next boot. However, that means the new OS won’t get native hardware access, which you usually want when installing and configuring an OS upgrade or update.
All this would be particularly handy if doing an “upgrade” that moves from, say, Fedora to Ubuntu, or more extreme, Windows to Linux. In such cases it is common to just leave the old hard disk partition alone and make a new one, but one must dual boot. Having the automatic ability to virtualize the old OS would be very useful for doing the transition. Microsoft could do the same trick for upgrades from old versions to Vista.
Of course, one must be careful the two machines don’t look too alike. They must not use the same MAC address or IP if they run internet services. They must, temporarily at least, have a different hostname. And they must not make incompatible changes, as I noted, to the same files if they’re going to share any.
Since hard disks keep getting bigger with every upgrade, it’s not out of the question that you might keep your entire machine history around as a series of virtual machine images. You could imagine going back to the computer environment you had 20 years ago, on demand, just for fun, or to recover old data formats — you name it. With disks growing as they are, we should not throw anything away, even entire computer environments.
Submitted by brad on Mon, 2007-01-22 15:02.
Radio technology has advanced greatly in the last several years, and will advance more. When the FCC opened up the small “useless” band where microwave ovens operate to unlicensed use, it generated the greatest period of innovation in the history of radio. As my friend David Reed often points out, radio waves don’t interfere with one another out in the ether. Interference only happens at a receiver, usually due to bad design. I’m going to steal several of David’s ideas here and agree with him that a powerful agency founded on the idea that we absolutely must prevent interference is a bad idea.
My overly simple summary of a replacement regime is just this, “Don’t be selfish.” More broadly, this means, “don’t use more spectrum than you need,” both at the transmitting and receiving end. I think we could replace the FCC with a court that adjudicates problems of alleged interference. This special court would decide which party was being more selfish, and tell them to mend their ways. Unlike past regimes, the part 15 lesson suggests that sometimes it is the receiver who is being more spectrum selfish.
Here are some examples of using more spectrum than you need:
- Using radio when you could have readily used wires, particularly the internet. This includes mixed mode operations where you need radio at the endpoints, but could have used it just to reach wired nodes that did the long haul over wires.
- Using any more power than you need to reliably reach your receiver. Endpoints should talk back if they can, over wires or radio, so you know how much power you need to reach them.
- Using an omni antenna when you could have used a directional one.
- Using the wrong band — for example using a band that bounces and goes long distance when you had only short-distance, line of sight needs.
- Using old technology — for example not frequency hopping to share spectrum when you could have.
- Not being dynamic — if two transmitters who can’t otherwise avoid interfering exist, they should figure out how one of them will fairly switch to a different frequency (if hopping isn’t enough.)
As noted, some of these rules apply to the receiver, not just the transmitter. If a receiver uses an omni antenna when they could be directional, they will lose a claim of interference unless the transmitter is also being very selfish. If a receiver isn’t smart enough to frequency hop, or tell its transmitter what band or power to use, it could lose.
Since some noise is expected not just from smart transmitters, but from the real world and its ancient devices (microwave ovens included), receivers should be expected to tolerate a little interference. If they’re hypersensitive to interference and don’t have a good reason for it, it’s their fault, not necessarily the source’s.
Submitted by brad on Thu, 2007-01-04 14:21.
A recent Forbes item pointed to my earlier posts on eBay Feedback so I thought it was time to update them. Note also the eBay tag for all posts on eBay including comments on the new non-feedback rules.
I originally mused about blinding feedback or detecting revenge feedback. It occurs to me there is a far, far simpler solution. If the first party leaves negative feedback, the other party can’t leave feedback at all. Instead, the negative feedback is displayed both in the target’s feedback profile and also in the commenter’s profile as a “negative feedback left.” (I don’t just mean how you can see it in the ‘feedback left for others’ display. I mean it would show up in your own feedback that you left negative feedback on a transaction as a buyer or seller. It would not count in your feedback percentage, but it would display in the list a count of negatives you left, and the text response to the negative made by the other party if any.)
Why? Well, once the first feedbacker leaves a negative, how much information is there, really, in the response feedback? It’s a pretty rare person who, having been given a negative feedback, is going to respond with a positive! Far more likely they will not leave any feedback at all (if they admit the problem was their fault), or they will leave revenge. So if there’s no information, it’s best to leave it out of the equation.
This means you can leave negatives without fear of revenge, but it will be clearly shown to people who look at your profile whether you leave a lot of negatives or not, and they can judge from comments if you are spiteful or really had some problems. This will discourage some negative feedback, since people will not want a more visible reputation of giving lots of negatives. A typical seller will expect to have given a bunch of negatives to deadbeat buyers who didn’t pay, and the comments will show that clearly. If, however, they have an above average number of disputes over little things, that might scare customers off — and perhaps deservedly.
I don’t know if eBay will do this so I’ve been musing that it might be time for somebody to make an independent reputation database for eBay, and tie it in with a plugin like ShortShip. This database could spot revenge feedbacks, note the order of feedbacks, and allow more detailed commentary. Of course if eBay tries to stop it, it has to be a piece of software that does all the eBay fetching from users’ machines rather than a central server.
Submitted by brad on Sat, 2006-12-30 13:29.
I’ve thought digital picture frames were a nice idea for a while, but have not yet bought one. The early generation were vastly overpriced, and the current cheaper generation still typically only offer 640x480 resolution. I spend a lot to produce quality, high-res photography, and while even a megapixel frame would be showing only a small part of my available resolution, 1/4 megapixel is just ridiculous.
I’ve written before that I think a great product would be either flat panels that come with (or can take) a module providing 802.11 and a simple protocol for remote computers to display on them, or a simple and cheap internet appliance featuring 802.11 and a VGA output to do the job. 1280x1024 flat panels now sell for under $150, and it would not take much in the way of added electronics to turn them into an 802.11 or even USB-stick/flashcard based digital photo frame with 4 times the resolution of the similarly priced dedicated frames.
One answer many people have tried is to convert an old laptop to a digital photo frame. 800x600 laptops are dirt cheap, and in fact I have some that are too slow to use for much else. 1024x768 laptops can also be had for very low prices on ebay, especially if you will take a “broken” one that’s not broken when it comes to being a frame — for example if it’s missing the hard disk, or the screen hinges (but not the screen) are broken. A web search will find you several tutorials on converting a laptop.
To make it really easy, what would be great is a ready to go small linux distribution aimed at this purpose. Insert a CD or flash card with the distribution on it and be ready to go as a picture frame.
Ideally, this distro would be set to run without a hard disk. You don’t want to spin the hard disk since that makes noise and generates heat. Some laptops won’t boot from USB or flash, so you might need a working hard drive to get booted, but ideally you would unmount it and spin it down after booting.
Having a flash drive is possible with just about all laptops, because PCMCIA compact flash adapters can be had for under $10. Laptops with USB can use cheaply available thumb-drives. PCMCIA USB adapters are also about $10, but beware that really old laptops won’t take the newer-generation “cardbus” models.
While some people like to put pictures into the frame using a flash card or stick, and this can be useful, I think the ideal way to do it is to use 802.11. And this is aimed at the grandmother market. One of the interesting early digital picture frames had a phone plug on it. The frame would dial out by modem to download new pictures that you uploaded to the vendor’s site. The result was that grandma could see new pictures on a regular basis without doing anything. The downside was this meant an annoying monthly fee to cover the modem costs.
But today 802.11 is getting very common. Indeed, even if grandma is completely internet-phobic, there’s probably a neighbour’s 802.11 visible in her house, and what neighbour would not be willing to give permission for a function such as this. Then the box can be programmed to download and display photos from any typical photo web site, and family members can quickly upload or email photos to that web site.
Of course if there is no 802.11 then flash is the way to do it. USB sticks are ideal as they are cheap and easy to insert and remove, even for the computer-phobic. I doubt you really want to just stick in a card straight out of a camera; people want to prepare their slideshows. (In particular, you want to pre-scale the images down to screen size for quick display and to get many more in the memory.) 800x600 pictures are in fact so small — 50kb can be enough — that you could even build the frame with no flash, just an all-RAM linux that loads from flash, CD or spun-down hard drive, keeps 100 photos in spare RAM, and sucks down new ones over the network as needed. This mode eliminates the need for worrying about drivers for flash or USB. The Linux would run in frame-buffer mode; no X server would be needed.
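Here is a sketch of the fetch, pre-scale and display loop such a distro could run, assuming the Pillow imaging library and the fbi framebuffer viewer are available (the feed URL and paths are invented):

```python
import subprocess, time, urllib.request
from PIL import Image  # Pillow does the pre-scaling to screen size

FEED = "http://example.com/frame/photo-list.txt"   # placeholder: one image URL per line
SCREEN = (800, 600)

def refresh():
    """Download the current photo list into RAM-backed storage, scaled to fit."""
    urls = urllib.request.urlopen(FEED).read().decode().split()
    paths = []
    for i, url in enumerate(urls[:100]):            # keep roughly 100 photos around
        path = f"/dev/shm/frame-{i:03d}.jpg"
        with open(path, "wb") as f:
            f.write(urllib.request.urlopen(url).read())
        img = Image.open(path)
        img.thumbnail(SCREEN)                       # pre-scale for quick display
        img.convert("RGB").save(path, "JPEG", quality=85)
        paths.append(path)
    return paths

while True:
    photos = refresh()
    # fbi draws straight to the framebuffer -- no X server needed.
    subprocess.run(["fbi", "-a", "-t", "10"] + photos)
    time.sleep(60)
```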
The key factor is that the gift giver prepares the box and mounts it on the wall, plugged in. After that the recipient need do nothing but look at it, while new photos arrive from time to time. While remote controls are nice (and can be done on the many laptops that feature infrared sensors) the zero-user-interface (ZUI) approach does wonders with certain markets.
Update: I’ve noticed that adapters for laptop mini-IDE to compact flash are under $10. So you can take any laptop that’s missing a drive and insert a flash card as the drive, with no worries about whether you can boot from a removable device. You might still want an external flash card slot if it’s not going to be wifi, but you can get a silent computer easily and cheaply this way. (Flash disk is slower than HDD to read but has no seek time.)
Even for the builder the task could be very simple.
- Unscrew or break the hinges to fold the screen against the bottom of the laptop (with possible spacer for heat)
- Install, if needed, 802.11 card, USB card or flash slot and flash — or flash IDE.
- Install linux distro onto hard disk, CD or flash
- Configure by listing web URL where new photo information will be found, plus URL for parameters such as speed of slideshow, fade modes etc.
- Configure 802.11 parameters
- Put it in a deep picture frame
- Set bios to auto turn-on after power failure if possible
- Mount on wall or table and plug in.
Submitted by brad on Mon, 2006-12-18 02:57.
I’ve been writing recently about the linux upgrade nightmares that continue to trouble the world. The next in my series of ideas is a suggestion that we try to measure how well upgrades go, and make a database of results available.
Millions of people are upgrading packages every day. And it usually goes smoothly. However, when it doesn’t, it would be nice if that were recorded and shared. Over time, one could develop an idea of which upgrades are safer than others. Thus, when it’s time to upgrade many packages, the system could know which ones always go well, and which ones might deserve a warning, or should only be done if you don’t have something critical coming up that day.
We already know some of these. Major packages like Apache are often a chore, though they’ve improved a lot by using a philosophy of configuration files I heartily approve of — dividing up configuration to put config by different people in different files.
Some detection is automated. For example, the package tools detect if a configuration file is being upgraded after it’s been changed and offer the user a chance to keep the new one, their old one, or hand-mix them. What choice the user makes could be noted to measure how well the upgrades go. Frankly, any upgrade that even presents the user with questions should get some minor points against it, but if a user has to do a hand merge it should get lots of negative points.
Upgrades that got no complaint should be recorded, and upgrades that get an explicit positive comment (ie. the user actively says it went great) should also be noted. Of course, any time a user does an explicit negative comment that’s the most useful info of all. Users should be able to browse a nice GUI of all their recent upgrades — even months later — and make notes on how well things are going. If you discover something broken, it should be easy to make the report.
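The report itself can be tiny. A sketch of the kind of record a reporting tool might keep and submit (the field names and the outcome scale are invented for illustration):

```python
import json, time

def report(package, old_version, new_version, outcome, note=""):
    """One anonymous upgrade report for the shared database.

    outcome is on the rough scale discussed above: "clean", "asked-questions",
    "hand-merge" or "broke".
    """
    return {
        "package": package,
        "from": old_version,
        "to": new_version,
        "outcome": outcome,
        "note": note,
        "reported_at": int(time.time()),
    }

# Example: the user flags a painful config merge a week after the fact.
print(json.dumps(report("apache2", "2.0.55-4", "2.2.3-3", "hand-merge",
                        "had to re-merge httpd.conf by hand"), indent=2))
```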
Then, when it comes time to do a big upgrade, such as a distribution upgrade, certain of the upgrades can be branded as very, very safe, and others as more risky. In fact, users could elect to just do only the safe ones. Or they could even elect to automatically do safe upgrades, particularly if there are lots of safety reports on their exact conditions (former and current version, dependencies in place.) Automatic upgrading is normally a risky thing, it can generate the risk of a problem accidentally spreading like wildfire, but once you have lots of reports about how safe it is, you can make it more and more automatic.
Thus the process might start with upgrading the 80% of packages that are safe, and then the 15% that are mostly safe. Then allocate some time and get ready for the ones that probably will involve some risk or work. Of course, if everything depends on a risky change (such as a new libc) you can’t get that order, but you can still improve things.
There is a risk of people gaming the database, though in non-commercial environments that is hopefully small. It may be necessary to have reporters use IDs that get reputations. For privacy reasons, however, you want to anonymize data after verifying it.
Submitted by brad on Sat, 2006-12-16 03:15.
I’ve spoken before about ZUI (Zero User Interface) and how often it’s the right interface.
One important system that often has too complex a UI is backup. Because of that, backups often don’t get done. In particular offsite backups, which are the only way to deal with fire and similar catastrophe.
Here’s a rough design for a ZUI offsite backup. The only UI at a basic level is just installing and enabling it — and choosing a good password (that’s not quite zero UI but it’s pretty limited.)
Once enabled, the backup system will query a central server to start looking for backup buddies. It will be particularly interested in buddies on your same LAN (though it will not consider them offsite.) It will also look for buddies on the same ISP or otherwise close by, network-topology wise. For potential buddies, it will introduce the two of you and let you do bandwidth tests to measure your bandwidth.
At night, the tool would wait for your machine and network to go quiet, and likewise the buddy’s machines. It would then do incremental backups over the network. These would be encrypted with secure keys. Those secure keys would in turn be stored on your own machine (in the clear) and on a central server (encrypted by your password.)
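A sketch of that key handling, using the Python cryptography package (paths and parameters are illustrative): the working key stays in the clear on your machine, and only a password-wrapped copy ever leaves it.

```python
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

# The real backup key lives in the clear on your own machine...
backup_key = Fernet.generate_key()
with open("/var/lib/buddybackup/key", "wb") as f:    # placeholder path
    f.write(backup_key)

# ...and only a password-wrapped copy goes to the central server.
password = b"the one password the user chose at install time"
salt = os.urandom(16)
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=200_000)
wrapping_key = base64.urlsafe_b64encode(kdf.derive(password))
wrapped_key = Fernet(wrapping_key).encrypt(backup_key)
# Upload (salt, wrapped_key) to the central server; it cannot read your backups.
```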
The backup would be clever. It would identify files on your system which are common around the network — ie. files of the OS and installed software packages — and know it doesn’t have to back them up directly, it just has to record their presence and the fact that they exist in many places. It only has to transfer your own created files.
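A sketch of that split, assuming some shared catalog of known package-file hashes exists (the catalog itself is the invented part):

```python
import hashlib, os

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def plan_backup(root, common_hashes):
    """Split files into 'just record that it exists' vs 'must actually upload'.

    common_hashes would come from a shared catalog of OS and package files;
    everything else is your own data and gets shipped to the buddies.
    """
    record_only, upload = [], []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            (record_only if sha256(path) in common_hashes else upload).append(path)
    return record_only, upload
```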
Your backups are sent to two or more different buddies each, compressed. Regular checks are done to see if the buddy is still around. If a buddy leaves the net, it quickly will find other buddies to store data on. Alas, some files, like video, images and music are already compressed, so this means twice as much storage is needed for backup as the files took — though only for your own generated files. So you do have to have a very big disk, 3 times bigger than you need, because you must store data for the buddies just as they are storing for you. But disk is getting very cheap.
(Another alternative is RAID-5 style. In RAID-5 style, you distribute each file across 3 or more buddies using the RAID-5 parity system, so that any one buddy can vanish and you can still recover the file. This means you may be able to get away with much less excess disk space. There are also redundant storage algorithms that let you tolerate the loss of 2 or even 3 of a larger pool of storers, at a much more modest cost than using double storage.)
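For the curious, a toy sketch of the parity idea (a real system would use proper erasure codes and would record the original file length to strip the padding):

```python
def split_with_parity(data: bytes, buddies: int = 3):
    """Split data into 'buddies' chunks plus one XOR parity chunk.

    Any single lost piece can be rebuilt by XORing the rest, so one buddy
    can vanish at a cost of ~1/buddies extra space instead of 2x.
    """
    chunk_len = -(-len(data) // buddies)        # ceiling division
    chunks = [data[i * chunk_len:(i + 1) * chunk_len].ljust(chunk_len, b"\0")
              for i in range(buddies)]
    parity = bytes(chunk_len)
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return chunks + [parity]                    # one piece per buddy

def rebuild(pieces, missing_index):
    """Recover the one piece lost when a buddy disappears (its slot is None)."""
    length = next(len(p) for p in pieces if p is not None)
    out = bytes(length)
    for i, piece in enumerate(pieces):
        if i != missing_index:
            out = bytes(a ^ b for a, b in zip(out, piece))
    return out
```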
All this is, as noted, automatic. You don’t have to do anything to make it happen, and if it’s good at spotting quiet times on the system and network, you don’t even notice it’s happening, except a lot more of your disk is used up storing data for your buddies.
It is the automated nature that is so important. There have been other proposals along these lines, such as MNET and some commercial network backup apps, but never an app you just install, do quick setup and then forget about until you need to restore a file. Only such an app will truly get used and work for the user.
Restore of individual files (if your system is still alive) is easy. You have the keys on file, and can pull your file from the buddies and decrypt it with those keys.
Loss of a local disk is more work, but if you have multiple computers in the household, the keys could be stored on other computers on the same LAN (alas this does require UI to approve this) and then you can go to another computer to get the keys to rebuild the lost disk. Indeed, using local computers as buddies is a good idea due to speed, but they don’t provide offsite backup. It would make sense for the system, at the cost of more disk space, to do both same-LAN backup and offsite. Same-LAN for hardware failures, offsite for building-burns-down failures.
In the event of a building-burns-down failure, you would have to go to the central server, and decrypt your keys with that password. Then you can get your keys and find your buddies and restore your files. Restore would not be ZUI, because we need no motivation to do restore. It is doing regular backups we lack motivation for.
Of course, many people have huge files on disk. This is particularly true if you do things like record video with MythTV or make giant photographs, as I do. This may be too large for backup over the internet.
In this case, the right thing to do is to backup the smaller files first, and have some UI. This UI would warn the user about this, and suggest options. One option is to not back up things like recorded video. Another is to rely only on local backup if it’s available. Finally, the system should offer a manual backup of the large files, where you connect a removable disk (USB disk for example) and transfer the largest files to it. It is up to you to take that offsite on a regular basis if you can. However, while this has a UI and physical tasks to do, if you don’t do it, it’s not the end of the world. Indeed, your large files may get backed up, slowly, if there’s enough bandwidth.
Submitted by brad on Fri, 2006-12-08 23:34.
Last week I wrote about linux’s problems with dependencies and upgrades and promised some suggestions this week.
There are a couple of ideas to be stolen (sacrilege!) from Windows which could be a start here, though they aren’t my long term solution.
Microsoft takes a different approach to updates, which consists of little patches and big service packs. The service packs integrate a lot of changes, including major changes, into one upgrade. They are not very frequent, and in some ways akin to the major distribution releases of systems like Ubuntu (but not its parent Debian), Fedora Core and others.
Installing a service pack is certainly not without risks, but the very particular combination of new libraries and changed apps in a service pack is extensively tested together, as is also the case for a major revision of a linux distribution. Generally installing one of these packs has been a safe procedure. Most Windows programs also do not use hand-edited configuration files for local changes, and so don’t suffer from the upgrade problems associated with this particular technique nearly as much.
Submitted by brad on Sat, 2006-12-02 01:13.
We all spend far too much of our time doing sysadmin. I’m upgrading and it’s as usual far more work than it should be. I have a long term plan for this but right now I want to talk about one of Linux’s greatest flaws — the dependencies in the major distributions.
When Unix/Linux began, installing free software consisted of downloading it, getting it to compile on your machine, and then installing it, hopefully with its install scripts. This always works but much can go wrong. It’s also lots of work and it’s too disconnected a process. Linuxes, starting with Red Hat, moved to the idea of precompiled binary packages and a package manager. That was later developed into an automated system where you can just say, “I want package X” and it downloads and installs that program and everything else it needs to run with a single command. When it works, it “just works”, which is great.
When you have a fresh, recent OS, that is. Because when packagers build packages, they usually do so on a recent machine, typically fully updated. And the package tools then decide the new package “depends” on the latest version of all the libraries and other tools it uses. You can’t install it without upgrading all the other tools, if you can do this at all.
This would make sense if the packages really depended on the very latest libraries. Sometimes they do, but more often they don’t. However, nobody wants to test extensively with old libraries, and serious developers don’t want to run old distributions, so this is what you get.
So as your system ages, if you don’t keep it fully up to date, you run into a serious problem. At first you will find that if you want to install some new software, or upgrade to the latest version to get a fix, you also have to upgrade a lot of other stuff that you don’t know much about. Most of the time, this works. But sometimes the other upgrades are hard, or face a problem, one you don’t have time to deal with.
However, as your system ages more, it gets worse. Once you are no longer running the most recent distribution release, nobody is even compiling for your old release any more. If you need the latest release of a program you care about, in order to fix a bug or get a new feature, the package system will no longer help you. Running that new release or program requires a much more serious update of your computer, with major libraries and more — in many ways the entire system. And so you do that, but you need to be careful. This often goes wrong in one way or another, so you must only do it at a time when you would be OK not having your system for a day, and taking a day or more to work on things. No, it doesn’t usually take a day — but it might. And you have to be ready for that rare contingency. Just to get the latest version of a program you care about.
Compare this to Windows. By and large, most binary software packages for windows will install on very old versions of Windows. Quite often they will still run on Windows 95, long ago abandoned by Microsoft. Win98 is still supported. Of late, it has been more common to get packages that insist on 7 year old Windows 2000. It’s fairly rare to get something that insists on 5-year-old Windows XP, except from Microsoft itself, which wants everybody to need to buy upgrades.
Getting a new program for your 5 year old Linux is very unlikely. This is tolerated because Linux is free. There is no financial reason not to have the latest version of any package. Windows coders won’t make their program demand Windows XP because they don’t want to force you to buy a whole new OS just to run their program. Linux coders forget that the price of the OS is often a fairly small part of the cost of an upgrade.
Systems have gotten better at automatic upgrades over time, but still most people I know don’t trust them. Actively used systems acquire bit-rot over time, things start going wrong. If they’re really wrong you fix them, but after a while the legacy problems pile up. In many cases a fresh install is the best solution. Even though a fresh install means a lot of work recreating your old environment. Windows fresh installs are terrible, and only recently got better.
Linux has been much better at the incremental upgrade, but even there fresh installs are called for from time to time. Debian and its children, in theory, should be able to just upgrade forever, but in practice only a few people are that lucky.
One of the big curses (one I hope to have a fix for) is the configuration file. Programs all have their configuration files. However, most software authors pre-load the configuration file with helpful comments and default configurations. The user, after installing, edits the configuration file to get things as they like, either by hand, or with a GUI in the program. When a new version of the program comes along, there is a new version of the “default” configuration file, with new comments, and new default configuration. Often it’s wrong to run your old version, or doing so will slowly build more bit-rot, so your version doesn’t operate as nicely as a fresh one. You have to go in and manually merge the two files.
Some of the better software packages have realized they must divide the configuration — and even the comments — made by the package author or the OS distribution editor from the local changes made by the user. Better programs have their configuration file “include” a normally empty local file, or even better all files in a local directory. This does not allow comments but it’s a start.
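From the program's side the pattern is simple. A sketch of "read the packaged config, then overlay everything in a local directory" (the file names and the key=value format are just for illustration):

```python
import glob, os

def load_config(base="/etc/myapp/myapp.conf", local_dir="/etc/myapp/conf.d"):
    """Read the distributor's config, then apply every file in the local
    directory in sorted order. Upgrades can replace the base file freely;
    local edits live only in conf.d and never need hand-merging."""
    settings = {}
    paths = [base] + sorted(glob.glob(os.path.join(local_dir, "*.conf")))
    for path in paths:
        if not os.path.exists(path):
            continue
        with open(path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()   # drop comments
                if "=" in line:
                    key, value = line.split("=", 1)
                    settings[key.strip()] = value.strip()
    return settings
```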
Unfortunately the programs that do this are few, and so any major upgrade can be scary. And unfortunately, the more you hold off on upgrading the scarier it will be. Most individual package upgrades go smoothly, most of the time. But if you leave it so you need to upgrade 200 packages at once, the odds of some problem that diverts you increase, and eventually they become close to 100%.
Ubuntu, which is probably my favourite distribution, has announced that their “Dapper Drake” distribution, from mid 2006, will be supported for desktop use for 3 years, and 5 years for server use. I presume that means they will keep compiling new packages to run on the older base of Dapper, and test all upgrades. This is great, but it’s thanks to the generosity of Mark Shuttleworth, who uses his internet wealth to be a fabulous sugar daddy to the Linux and Ubuntu movements. Already the next release is out, “Edgy”, and it’s newer and better than Dapper, but with half the support promise. It will be interesting to see what people choose.
When it comes to hardware, Linux is even worse. Each driver works with precisely one kernel it is compiled for. Woe unto you once you decide to support some non-standard hardware in your Linux box that needs a special driver. Compiling a new driver isn’t hard once, until you realize you must do it all again any time you would like to slightly upgrade your kernel. Most users simply don’t upgrade their kernels unless they face a screaming need, like fixing a major bug, or buying some new hardware. Linux kernels come out every couple of weeks for the eager, but few are so eager.
As I get older, I find I don’t have the time to compile everything from source, or to sysadmin every piece of software I want to use. I think there are solutions to some of these problems, and a simple first one will be talked about in the next installment, namely an analog of Service Packs.
Submitted by brad on Fri, 2006-08-11 17:06.
Everybody’s pulling out IBM PC stories on the 25th anniversary so I thought I would relate mine. I had been an active developer as a teen for the 6502 world — Commodore Pet, Apple ][, Atari 800 and the like, and sold my first game to Personal Software Inc. back in 1979. PSI was just starting out, but the founders hired me on as their first employee to do more programming. The company became famous shortly thereafter by publishing VisiCalc, which was the first serious PC application, and the program that helped make Apple as a computer company outside the hobby market.
In 1981, I came back for a summer job from school. Mitch Kapor, who had worked for Personal Software in 1980 (and had been my manager at the time) had written a companion for VisiCalc, called VisiPlot. VisiPlot did graphs and charts, and a module in it (VisiTrend) did statistical analysis. Mitch had since left, and was on his way to founding Lotus. Mitch had written VisiPlot in Apple ][ Basic, and he won’t mind if I say it wasn’t a masterwork of code readability, and indeed I never gave it more than a glance. Personal Software, soon to be renamed VisiCorp, asked me to write VisiPlot from scratch, in C, for an un-named soon to be released computer.
I didn’t mention this, but I had never coded in C before. I picked up a copy of the Kernighan and Ritchie C manual, and read it as my girlfriend drove us over the plains on my trip from Toronto to California.
I wasn’t told much about the computer I would be coding for. Instead, I defined an API for doing I/O and graphics, and wrote to a generalized machine. Bizarrely (for 1981) I did all this by dialing up by modem to a unix computer time sharing service called CCA on the east coast. I wrote and compiled in C on unix, and defined a serial protocol to send graphics back to, IIRC, an Apple computer acting as a terminal. And, in 3 months, I made it happen.
(Very important side note: CCA-Unix was on the arpanet. While I had been given some access to an Arpanet computer in 1979 by Bob Frankston, the author of VisiCalc, this was my first day to day access. That access turned out to be the real life-changing event in this story.)
There was a locked room at the back of the office. It contained the computer my code would eventually run on. I was not allowed in the room. Only a very small number of outside companies were allowed to have an IBM PC — Microsoft, UCSD, Digital Research, VisiCorp/Software Arts and a couple of other applications companies.
On this day, 25 years ago, IBM announced their PC. In those days, “PC” meant any kind of personal computer. People look at me strangely when I call an Apple computer a PC. But not long after that, most people took “PC” to mean IBM. Finally I could see what I was coding for. Not that the C compilers were all that good for the 8088 at the time. However, 2 weeks later I would leave to return to school. Somebody else would write the library for my API so that the program would run on the IBM PC, and they released the product. The contract with Mitch required they pay royalties to him for any version of VisiPlot, including mine, so they bought out that contract for a total value close to a million — that helped Mitch create Lotus, which would, with assistance from the inside, outcompete and destroy VisiCorp.
(Important side note #2: Mitch would use the money from Lotus to found the E.F.F. — of which I am now chairman.)
The IBM PC was itself less exciting than people had hoped. The 8088 tried to be a 16 bit processor but it was really 8 bit when it came to performance. PC-DOS (later MS-DOS) was pretty minimal. But it had an IBM name on it, so everybody paid attention. Apple bought full page ads in the major papers saying, “Welcome IBM, Seriously.” Later they would buy ads with lines like Steve Jobs saying, “When I invented the personal computer…” and most of us laughed but some of the press bought it. And of course there is a lot more to this story.
And I was paid about $7,000 for the just under 4 months of work, building almost all of an entire software package. I wish I could program like that today, though I’m glad I’m not paid that way today.
So while most people today will have known the IBM PC for 25 years, I was programming for it before it was released. I just didn’t know it!
Submitted by brad on Fri, 2006-07-28 13:47.
Yesterday I received a Dell 3007WFP panel display. The price hurt ($1600 on eBay, $2200 from Dell but sometimes there are coupons) and you need a new video card (and to top it off, 90% of the capable video cards are PCI-e and may mean a new motherboard) but there is quite a jump by moving to this 2560 x 1600 (4.1 megapixel) display if you are a digital photographer. This is a very similar panel to Apple's Cinema, but a fair bit cheaper.
It's great for ordinary windowing and text of course, which is most of what I do, but it's a great deal cheaper just to get multiple displays. In fact, up to now I've been using CRTs since I have a desk designed to hold 21" CRTs and they are cheaper and blacker to boot. You can have two 1600x1200 21" CRTs for probably $400 today and get the same screen real estate as this Dell.
But that really doesn't do for photos. If you are serious about photography, you almost surely have a digital camera with more than 4MP, and probably way more. If it's a cheap-ass camera it may not be sharp if viewed at 1:1 zoom, but if it's a good one, with good lenses, it will be.
If you're also like me you probably never see 99% of your digital photos except on screen, which means you never truly see them. I print a few, mostly my panoramics and finally see all their resolution, but not their vibrance. A monitor shows the photos with backlight, which provides a contrast ratio paper can't deliver.
At 4MP, this monitor is only showing half the resolution of my 8MP 20D photos. And when I move to a 12MP camera it will only be a third, but it's still a dramatic step up from a 2MP display. It's a touch more than twice as good because the widescreen aspect ratio is a little closer to the 3:2 of my photos than the 4:3 of 1600x1200. Of course if you shoot with a 4:3 camera, here you'll be wasting pixels. In both cases, of course, you can crop a little so you are using all the pixels. (In fact, a slideshow mode that zoom/crops to fully use the display would be a handy mode. Most slideshows offer 1:1 and zoom to fit based on no cropping.)
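The zoom-and-crop arithmetic is simple enough to sketch (3504x2336 is the 20D's pixel count; the function itself is illustrative):

```python
def zoom_crop(img_w, img_h, scr_w, scr_h):
    """Scale the image to cover the whole screen, then crop the overflow.

    Returns the scale factor and how many source pixels get cropped per axis."""
    scale = max(scr_w / img_w, scr_h / img_h)
    crop_w = img_w - scr_w / scale
    crop_h = img_h - scr_h / scale
    return scale, round(crop_w), round(crop_h)

# An 8MP 3:2 shot from a 20D on the 2560x1600 panel:
print(zoom_crop(3504, 2336, 2560, 1600))   # crops about 146 of 2336 rows, no columns
```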
There are many reasons for having lots of pixels aside from printing and cropping. Manipulations are easier and look better. But let's face it, actually seeing those pixels is still the biggest reason for having them. So I came to the conclusion that I just haven't been seeing my photos, and now I am seeing them much better with a screen like this. Truth is, looking at pictures on it is better than any 35mm print, though not quite at the quality of a 35mm slide.
Dell should give me a cut for saying this.
Long ago I told people not to shoot on 1MP and 2MP digital cameras instead of film, because in the future, displays would get so good the photos will look obviously old and flawed. That day is now well here. Even my 3MP D30 pictures don't fill the screen. I wonder when I'll get a display that makes my 8MP pictures small.
Submitted by brad on Sun, 2006-05-21 18:01.
It can be very frustrating when a PC decides to send a signal to a monitor that is outside its scan range. Yes, the systems try hard to avoid it, via things like plug and play EDID information on monitor specs, and reverting changes to monitor settings if you don’t confirm them after a few seconds, but sometimes it still happens. It happens after a monitor swap, it happens if you don’t have a monitor turned on when you boot, or if you have a KVM switch that doesn’t pass along the monitor’s information.
The result can be frustrating. If you know how to reboot your PC without seeing the screen you can try that but even that can fail.
So I suggest that monitors be a bit better about signals that are outside of their range. If the dot clock is too fast, for example, consider dividing it by two if the electronics can handle that, showing half the pixels. If there are too many scan lines, just show as many as you can. The bottom of the screen will be missing, but that’s better than no view at all. If the refresh frequency is too high (though usually that’s because the dot clock is too fast) you can skip every other frame, for a very flickering display, but at least not a blank one. Whatever you can do, you can save people from hitting the reset button.
Submitted by brad on Tue, 2006-05-02 20:33.
Ok, so there's a million things to fix about eBay, and as I noted before my top beef is the now-common practice of immense shipping charges and below-cost prices for products -- making it now impossible to search by price because the listed price is getting less relevant.
Here's one possible fix. Just as you can have a list of favourite sellers, allow me to add a seller to my list of blocked sellers. I would no longer see listings from them. Once I scan a seller's reputation and see that I don't trust them, I don't want to be confused by their listings. Likewise if I want to block the sellers who use the fat shipping, I could do that, so I could unclutter my listings. That second kind of block might be something to make a bit more temporary.
Ideally let sellers know they are getting on these lists, too. They should know that their practices are costing them bidders.
Submitted by brad on Mon, 2006-05-01 15:03.
I’ve been playing with various calendar systems, such as Mozilla calendar, Korganizer, Google Calendar, Chandler and a few others, and I’m finding them wanting. I have not used iCal or Outlook so perhaps they solve all my problems, but I doubt they do.
I see two ways to want to merge in additional calendars, neither of which is supported very well.
The first type of merger is an intimate one, for calendars in which I will attend most or all events. Effectively they are like extensions of my own calendar, in that I should be considered busy for any event in these calendars, unless I explicitly say otherwise. One example would be a couple’s calendar, for social events attended as a couple — parties, weddings etc. Family calendars and workgroup calendars could also qualify.
The other class of calendar is a suggested calendar. These calendars are imported, but I will be attending relatively few events from them. It’s more that I want to browse them. There are many such calendars now available on the calendar sharing sites.
In a few of the tools you can copy an event from an imported calendar into your personal calendar, but after you do you now see two of the event. What you really want is a pointer to the imported event. Minor changes in the imported event should flow through into your final personal calendar. Changes in date or changes that cause a conflict should also flow through but be flagged as needing attention.
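What I mean by a pointer is roughly this kind of record (a sketch; the field names are invented): keep the source feed and the event's UID, re-resolve it when displaying, and flag it whenever the upstream copy moves or disappears.

```python
from dataclasses import dataclass

@dataclass
class LinkedEvent:
    """My copy of an event from a subscribed calendar: a reference, not a clone."""
    source_calendar: str      # e.g. the subscribed feed's URL
    source_uid: str           # the event's UID within that feed
    accepted_start: str       # the start time I originally agreed to

    def resolve(self, feeds):
        """Return the current upstream version, plus a warning if it needs attention."""
        event = feeds[self.source_calendar].get(self.source_uid)
        if event is None:
            return None, "event was removed upstream -- needs attention"
        if event["start"] != self.accepted_start:
            return event, "date changed upstream -- needs attention"
        return event, ""
```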
Tools like Google calendar, which allow you to access your calendar from remote locations (and easily publish public calendars) are good but they have privacy problems. As you may know if you read this blog, information on your own computer is protected by the 4th amendment. Information on somebody else’s computer (like Google’s) is not. As such, you would like to have any export of your personal calendar be encrypted, and accessible only while you are logged on with the password. Distilled, “free/busy” information may remain unencrypted for access even when you’re not online. However, this is a hard engineering problem to get right — in the long run we need the scope of the 4th amendment re-expanded so that “your papers” include not just your records stored at home, but your records stored on external servers.
Have I just not used enough tools? Do some calendars work this way that I haven’t seen?
Submitted by brad on Thu, 2006-04-13 23:43.
I get a lot of party invites by Evite, and it’s very frustrating. I’ve missed some events because they refuse to improve their interface.
When I get event invites, I save them to a mail folder. Then I can browse the mail folder later to check dates. If I am not in front of my calendar (which alas is not available everywhere) I can go back and enter items I save.
When I am on the road, sometimes my connectivity is bursty. That means I download mail and read it offline. But this is useless with Evites, as they don’t tell you anything about the event except a usually vague title if you are offline. After that it’s easy to forget you needed to go back and re-read the thing while online. Almost all other invites I get put the party date into the subject line, as it should be.
I’ve complained to Evite several times about this. So have many other people. They say they “are taking it under advisement.” One friend pushed Evite (using the threat of spam complaint, which is not really valid here) to put in a block so she doesn’t get evites. Her friends get told to send her a direct invitation. I’ve concluded that since this change is pretty easy to do, Evite has decided deliberately to be user-unfriendly here, in order to get more people to click on the links to see the ads.
While Google gets a lot of ribbing over the “don’t be evil” mantra, the truth is it started out with a simple principle like this one. Don’t do things deliberately against user interest because it seems they might generate a bit more advertising revenue. Examples of this sort of “evil” include pop-up ads, animated ads and paying for search results, which are all things other sites did. I would have hoped more companies would have learned the lesson of that, and try to emulate the successful strategy of Google. No luck, at least with Evite.
Do don’t be Evite. If you use their product, stuff at least the date, and if necessary the place, into what they consider the short title of the party, even if you must shorted the title. Yes, you will then enter it again, but your guests will thank you.
Submitted by brad on Mon, 2006-04-10 10:47.
Most people use wireless access points to provide access to the internet, of course, but often there are situations where you can’t get access, or access fast enough to be meaningful. (ie. a dialup connection quickly gets overloaded with all but the lightest activity.)
I suggest that AP firmwares be equipped with local services that can be used even with no internet connection. In particular, collaboration tools such as a simple IRC server, and a web server with tiny wiki or web chat application. Of course, there are limitations on flash size, so it might be you would make a firmware for some APs which rips out the external connection stuff to make room for collaboration.
There are a variety of open source firmwares out there, particularly for the Linksys WRT54 line of APs, where these features could be added. There are a few APs that have USB ports where you can add USB or flash drives so that you have a serious amount of memory and could have lots of collaborative features.
Then, at conferences, these collaboration APs could be put up, whether or not there is a connection. Indeed, some conferences might decide to deliberately not have an outside connection but allow collaboration.
Submitted by brad on Sun, 2006-04-02 22:47.
I’ve blogged several times before about my desire for universal DC power — ideally with smart power, but even standardized power supplies would be a start.
However, here’s a way to get partway there, cheap. PC power supplies are really cheap, fairly good, and very, very powerful. They put out lots of voltages. Most of the power is at +5v, +12v and now +3.3v. Some of the power is also available at -5v and -12v in many of them. The positive voltages above can be available at as much as 30 to 40 amps! The -5 and -12 are typically lower power, 300 to 500ma, but sometimes more.
So what I want somebody to build is a cheap adapter kit (or a series of them) that plugs into the standard molex of PC power supplies, and then splits out into banks at various voltages, using the simple dual-pin plugs found in Radio Shack’s universal power supplies with changeable tips. USB jacks at +5 volts, with power but no data, would also be available because that’s becoming the closest thing we have to a universal power plug.
There would be two forms of this kit. One form would be meant to be plugged into a running PC, and have a thick wire running out a hole or slot to a power console. This would allow powering devices that you don’t mind (or even desire) turning off when the PC is off. Network hubs, USB hubs, perhaps even phones and battery chargers etc. It would not have access to the +3.3v directly, as the hard drive molex connector normally just gives the +5 and 12 with plenty of power.
A second form of the kit would be intended to get its own power supply. It might have a box. These supplies are cheap, and anybody with an old PC has one lying around free, too. Ideally one with a variable speed fan since you’re not going to use even a fraction of the capacity of this supply and so won’t get it that hot. You might even be able to kill the fan to keep it quiet with low use. This kit would have a switch to turn the PS on, of course, as modern ones only go on under simple motherboard control.
Now with the full set of voltages, it should be noted you can also get +7v (from 5 to 12), 8.7v (call it 9) from 3.3 to 12, 1.7v (probably not that useful), and at lower currents, 10v (-5 to +5), 17v (too bad that’s low current as a lot of laptops like this), 24v, 8.3v, and 15.3v.
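All of these are just differences between pairs of rails, which a few lines can verify:

```python
from itertools import combinations

rails = {"+12": 12.0, "+5": 5.0, "+3.3": 3.3, "GND": 0.0, "-5": -5.0, "-12": -12.0}

# Every pair of rails gives a potential difference you can tap.
for (name_lo, v_lo), (name_hi, v_hi) in combinations(
        sorted(rails.items(), key=lambda r: r[1]), 2):
    print(f"{name_hi} to {name_lo}: {v_hi - v_lo:.1f} V")
```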
On top of that, you can use voltage regulators to produce the other popular voltages, in particular 6v from 7, and 9v from 12 and so on. Special tips would be sold to do this. This is a little bit wasteful but super-cheap and quite common.
Anyway, point is, you would get a single box and you could plug almost all your DC devices into it, and it would be cheap-cheap-cheap, because of the low price of PC supplies. About the only popular thing you can’t plug in are the 16v and 22v laptops which require 4 amps or so. 12v laptops of course would do fine. At the main popular voltages you would have more current than you could ever use, in fact fuses might be in order. Ideally you could have splitters, so if you have a small array of boxes close together you can get simple wiring.
Finally, somebody should just sell nice boxes with all this together, since the parts for PC power supplies are dirt cheap, the boxes would be easy to make, and replace almost all your power supplies. Get tips for common cell phone chargers (voltage regulators can do the job here as currents are so small) as well as battery chargers available with the kit. (These are already commonly available, in many cases from the USB jack which should be provided.) And throw in special plugs for external USB hard drives (which want 12v and 5v just like the internal drives.)
There is a downside. If the power supply fails, everything is off. You may want to keep the old supplies in storage. Some day I envision that devices just don’t come with power supplies, you are expected to have a box like this unless the power need is very odd. If you start drawing serious amperage the fan will need to go on and you might hear it, but it should be pretty quiet in the better power supplies.
Submitted by brad on Sun, 2006-04-02 11:17.
GPS receivers with bluetooth are growing in popularity, and it makes sense. I want my digital camera to have bluetooth as well so it can record where each picture is taken.
But as I was driving from the airport last night, I realized that my cell phone has location awareness in it (for dialing 911 and location aware apps) and my laptop has bluetooth in it, and mapping software if connected to a GPS — so why couldn’t my cell phone be talking to my laptop to give it my location for the mapping software? Or indeed, why won’t it tell a digital camera that info as well?
Are people making cell phones that can be told to transmit their position to a local device that wants such data?
Update: My Sprint Mogul, whose GPS is enabled by the latest firmware update, is able to act as a bluetooth GPS using a free GPS2Blue program.