Brad Templeton is an EFF
director, Singularity U
faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.
This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see menu on right) and some are sub blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.
Submitted by brad on Thu, 2006-12-21 15:33.
I was seduced by Google’s bribe of $20 per $50 or greater order to try their new Checkout service, and did some Christmas shopping on buy.com. Normally buy.com, being based in Southern California, takes only 1 or 2 days by UPS ground to get things to me. So ordering last weekend should have been low risk for items that are “in stock and ship in 1-2 days.” Yes, they cover their asses by putting a longer upper bound on the shipping time, but generally that’s the ship time for people on the other coast.
I got a mail via Google (part of their privacy protection) that the items had been shipped on Tuesday, so all was well. Unfortunately, I didn’t immediately check the tracking info. The new Google Checkout interface makes that harder to do: normally you can just go to the account page on most online stores and follow links directly to tracking. Here the interface requires you to cut and paste order numbers, and it’s buggy, reporting incorrect shipper names.
Unfortunately, it’s becoming common for online stores to keep stock in different warehouses around the country. Some items I ordered, it turns out, while shipped quickly, were shipped from far away. They’ll arrive after Christmas. So now I have to go out and buy the items (or different items in some cases) at stores, at higher prices, without the seductive $20 discount, and then arrange to return the ordered items after they get here. And I’ll probably be out not only the money I paid for shipping (had I wanted them after Christmas I would of course have selected the free saver shipping option) but presumably return shipping as well.
A very unsatisfactory shopping experience.
How could this have been improved (other than by getting the items to me?)
- When they e-mail you about shipment, throw in a tracking link and also include the shipper’s expected delivery day. UPS and Fedex both give that, and even with the USPS you can provide decent estimates.
- Let me specify in the order, “I need this by Dec 23.” They might be able to say right then and there that “This item is in stock far away. You need to specify air shipping to do that.”
- Failing that, they could, when they finally get ready to ship it, look at what the arrival date will be, and, if you’ve set a drop-dead date, cancel the shipment if it won’t get to you on time. Yes, they lose a sale but they avoid a very disappointed customer.
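The drop-dead-date check in that last suggestion amounts to a few lines of logic. Here is a minimal sketch; the function name, the order dictionary and its `need_by` field are all hypothetical, since no store exposes such an interface today:

```python
from datetime import date, timedelta

def should_ship(order, carrier_estimate_days, today=None):
    """Decide whether to ship or cancel based on the customer's
    drop-dead date. carrier_estimate_days is the shipper's own
    transit estimate, which UPS and FedEx publish per route."""
    today = today or date.today()
    eta = today + timedelta(days=carrier_estimate_days)
    if order.get("need_by") is None or eta <= order["need_by"]:
        return "ship"
    # Past the drop-dead date: lose the sale, avoid a very disappointed customer.
    return "cancel"
```

So an order placed December 20 with a 5-day ground estimate and a December 23 deadline would be cancelled rather than shipped.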
This does not just apply around Christmas. I often go on trips, and know I won’t be home on certain days. I may want to delay delivery of items around such days.
As I blogged earlier, it also would simplify things a lot if you could use the tracking interface of UPS, Fedex and the rest to reject or divert shipments in transit. If I could say “Return to sender” via the web on a shipment I know is a waste of time, the vendor wins, I win, and even the shipping company can probably set a price for this where they win too. The recipient saves a lot of hassle, and the vendor can also be assured the item has not been opened and quickly restock it as new merchandise. If you do a manual return they have to inspect, and even worry about people who re-shrinkwrap returns to cheat them.
Another issue that will no doubt come up — the Google discount was $20 off orders of $50 or more. If I return only some of the items, will they want to charge me the $20? In that case, you might find yourself in a situation where returning an item below $20 would cost you money! In this case I need to return the entire order except one $5 item I tossed on the order, so it won’t be an issue.
Jolly December to all. (Jolly December is my proposal for the Pastafarian year-end holiday greeting, a good salvo in the war on Christmas. If they’re going to invent a war on Christmas, might as well have one.)
Submitted by brad on Thu, 2006-12-21 13:59.
Last week, I wrote about new ideas for finding the lost. One I’ve done some follow-up on is the cell phone approach. While it’s not hard to design a good emergency rescue radio if you are going to explicitly carry a rescue device when you get lost, the key to cell phones is that people are already carrying them without thinking about it — even when going places with no cell reception since they want the phone with them when they return to reception.
Earlier I proposed a picocell to be mounted in a light plane (or even drone) that would fly over the search area and try to ping the phone and determine where it is. That would work with today’s phones. It might have found the 3 climbers, now presumed dead, on Mt. Hood because one of them definitely had a cell phone. It would also have found James Kim because they had a car battery, on which a cell phone can run for a long time.
My expanded proposal is for a deliberate emergency rescue mode on cell phones. It’s mostly software (and thus not expensive to add) but people would even pay for it. You could explicitly put your phone into emergency rescue mode, or have it automatically enter it if it’s out of range for a long time. (For privacy reasons you would want to be able to disable any automatic entry into such a mode, or at least be warned about it.)
What you do in this mode depends on how accurate a clock you have. Many modern phones have a very accurate clock, either from the last time they saw the cell network, or from GPS receivers inside the phone. If you have an accurate clock, then you can arrange to wake up and listen for signals from rescue planes at very precise times, and the planes will know those times exactly as well. So you can be off most of the time and thus do this with very low power consumption. It need not be a plane: it’s not out of the question to have a system with a highly directional antenna at some fixed point that can scan the area.
If you don’t know the exact time, you can still listen at intervals while you have power. As your battery dies, the intervals between wakeups have to get longer. Once they get down to long periods like hours, the rescue crews can’t tell exactly when you will transmit and just have to run all the time.
If you know the exact time a phone will be on, you can even pull tricks like have other transmitters cut out briefly at that time (most protocols can tolerate sub-second outages) to make the radio spectrum quieter.
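The scheduling arithmetic above can be sketched in a few lines. The function names and the one-minute/one-hour bounds are illustrative assumptions, not part of any real protocol:

```python
def listen_interval(battery_fraction, base_s=60, max_s=3600):
    """Stretch the gap between wakeups as the battery dies: a full
    battery listens every minute, a nearly dead one only hourly."""
    interval = base_s / max(battery_fraction, base_s / max_s)
    return min(int(interval), max_s)

def next_wake(now, epoch, interval_s):
    """With an accurate shared clock, phone and search plane compute
    the same schedule: wake at epoch, epoch + interval, and so on."""
    if now <= epoch:
        return epoch
    cycles = -((epoch - now) // interval_s)  # ceiling of (now - epoch) / interval
    return epoch + cycles * interval_s
```

Because both sides derive the same `next_wake` times, the searcher knows exactly when to transmit, and other transmitters could be told exactly when to go quiet.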
At first, you can actually listen quite often. The owner of the phone, if conscious might even make the grim evaluation of how long they can hold out and tell the phone to budget power for that many days.
When the phone hears the emergency ping (which quite possibly will be at above-normal power) it can also respond at above normal power, if it feels it has the power budget for it. It can also beep to the owner to get input on that question. (Making the searcher’s ping more powerful can actually be counterproductive as it could make the phone respond when it can’t possibly be received. The ping could indicate what its transmit power was, allowing the phone to judge whether its signal could possibly make it back to a good receiver.)
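The phone’s judgment call about whether a reply can make it back can be sketched with the standard free-space path loss formula. Free space is the optimistic case (real terrain attenuates much more), and the names and thresholds here are assumptions for illustration:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

def reply_worth_the_power(searcher_tx_dbm, distance_m, freq_hz, rx_sensitivity_dbm):
    """The ping includes the searcher's transmit power; the phone runs the
    same arithmetic in reverse to decide whether replying can possibly be
    heard, or would just waste its power budget."""
    received_dbm = searcher_tx_dbm - fspl_db(distance_m, freq_hz)
    return received_dbm >= rx_sensitivity_dbm
```

A 1 W (30 dBm) reply at 900 MHz comfortably covers a kilometer in free space, but not 100 km against a typical receiver.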
Of course if the phone has a GPS, once it does sync up with the picocell, it could provide its exact location. Otherwise it could do a series of blips to allow direction finding or fly-over signal strength location of the phone.
In most cases, if we know who the missing person is we’ll know their cell phone number, and thus their phone carrier and in most cases the model of phone they have. So searchers would know exactly what to look for, and whether the phone supports any emergency protocol or just has to be searched for with standard tech.
I’ve brought some of these ideas up with friends at Qualcomm. We’ll see if something can come of it.
Update: Lucent does have a picocell that was deployed in some rescue operations in New Orleans. Here’s a message discussing it.
Submitted by brad on Tue, 2006-12-19 19:49.
This week I participated in this thread on Newcomb’s Paradox which was noted on BoingBoing.
A highly superior being from another part of the galaxy presents you with two boxes, one open and one closed. In the open box there is a thousand-dollar bill. In the closed box there is either one million dollars or there is nothing. You are to choose between taking both boxes or taking the closed box only. But there’s a catch.
The being claims that he is able to predict what any human being will decide to do. If he predicted you would take only the closed box, then he placed a million dollars in it. But if he predicted you would take both boxes, he left the closed box empty. Furthermore, he has run this experiment with 999 people before, and has been right every time.
What do you do?
A short version of my answer: The paradox confuses people because it stipulates you are a highly predictable being to the alien, then asks you to make a choice. But in fact you don’t make a choice, you are a choice. Your choice derives from who you are, not the logic you go through before the alien. The alien’s power dictates you already either are or aren’t the sort of person who picks one box or two, and in fact the alien is the one who made the choice based on that — you just imagine you could do differently than predicted.
Those who argue that since the money is already in the boxes, you should always take both miss the point of the paradox. That view is logically correct, but those who hold that view will not become millionaires, and this was set by the fact they hold the view. It isn’t that there’s no way the contents of the boxes can change because of your choice, it’s that there isn’t a million there if you’re going to think that way.
Of course people don’t like that premise of predictability and thus, as you will see in the thread, get very involved in the problem.
In thinking about this, it came to me that the alien is not so hypothetical. As you may know from reading this blog, I was once administered Versed, a sedative that also blocks your ability to form long term memories. I remember the injection, but not the things I said and did afterwards.
In my experiment we recruit subjects to test the paradox. They come in and an IV drip is installed, though they are not told about Versed. (Some people are not completely affected by Versed but assume our subjects are.) We ask subjects to give a deliberated answer, not to just try to be random, flip a coin or whatever.
So we administer the drug and present the problem, and see what you do. The boxes are both empty — you won’t remember that we cheated you. We do it a few times if necessary to see how consistent you are. I expect that most people would be highly consistent, but I think it would be a very interesting thing to research! If a few are not consistent, I suspect they may be deliberately being random, but again it would be interesting to find out why.
We videotape the final session, where there is money in the boxes. (Probably not a million, we can’t quite afford that.) Hypothetically, it would be even better to find another drug that has the same sedative effects of Versed so you can’t tell it apart and don’t reason differently under it, but which allows you to remember the final session — the one where, I suspect, we almost invariably get it right.
Each time you do it, however, you think you’re doing it for the first time. However, at first you probably (and correctly) won’t want to believe in our amazing predictive powers. There is no such alien, after all. That’s where it becomes important to videotape the last session or even better, have a way to let you remember it. Then we can have auditors you trust completely audit the experimenter’s remarkable accuracy (on the final round.) We don’t really have to lie to the auditors, they can know how we do it. We just need a way for them to swear truthfully that on the final round, we are very, very accurate, without conveying to the subject that there are early, unremembered rounds where we are not accurate. Alas, we can’t do that for the initial subjects — another reason we can’t put a million in.
Still, I suspect that most people would be fairly predictable and that many would find this extremely disturbing. We don’t like determinism in any form. Certainly there are many choices that we imagine as choices but which are very predictable. Unless you are bi, you might imagine you are choosing the sex of your sexual partners — that you could, if it were important, choose differently — but in fact you always choose the same.
What I think is that having your choices be inherent in your makeup is not necessarily a contradiction to the concept of free will. You have a will, and you are free to exercise it, but in many cases that will is more a statement about who you are than what you’re thinking at the time. The will was exercised in the past, in making you the sort of mind you are. It’s still your will, your choices. In the same way I think that entirely deterministic computers can also make choices and have free will. Yes, their choices are entirely the result of their makeup. But if they rate being an “actor” then the choices are theirs, even if the makeup’s initial conditions came from a creator. We are created by our parents and environment (and some think by a deity) but that’s just the initial conditions. Quickly we become something unto ourselves, even if there is only one way we could have done that. We are not un-free, we just are what we are.
Submitted by brad on Mon, 2006-12-18 02:57.
I’ve been writing recently about the linux upgrade nightmares that continue to trouble the world. The next in my series of ideas is a suggestion that we try to measure how well upgrades go, and make a database of results available.
Millions of people are upgrading packages every day. And it usually goes smoothly. However, when it doesn’t, it would be nice if that were recorded and shared. Over time, one could develop an idea of which upgrades are safer than others. Thus, when it’s time to upgrade many packages, the system could know which ones always go well, and which ones might deserve a warning, or should only be done if you don’t have something critical coming up that day.
We already know some of these. Major packages like Apache are often a chore, though they’ve improved things considerably with a configuration philosophy I heartily approve of: dividing up configuration so that changes made by different people go in different files.
Some detection is automated. For example, the package tools detect if a configuration file is being upgraded after it’s been changed and offer the user a chance to keep the new one, their old one, or hand-mix them. What choice the user makes could be noted to measure how well the upgrades go. Frankly, any upgrade that even presents the user with questions should get some minor points against it, but if a user has to do a hand merge it should get lots of negative points.
Upgrades that got no complaint should be recorded, and upgrades that get an explicit positive comment (ie. the user actively says it went great) should also be noted. Of course, any time a user does an explicit negative comment that’s the most useful info of all. Users should be able to browse a nice GUI of all their recent upgrades — even months later — and make notes on how well things are going. If you discover something broken, it should be easy to make the report.
Then, when it comes time to do a big upgrade, such as a distribution upgrade, certain of the upgrades can be branded as very, very safe, and others as more risky. In fact, users could elect to just do only the safe ones. Or they could even elect to automatically do safe upgrades, particularly if there are lots of safety reports on their exact conditions (former and current version, dependencies in place.) Automatic upgrading is normally a risky thing, it can generate the risk of a problem accidentally spreading like wildfire, but once you have lots of reports about how safe it is, you can make it more and more automatic.
Thus the process might start with upgrading the 80% of packages that are safe, and then the 15% that are mostly safe. Then allocate some time and get ready for the ones that probably will involve some risk or work. Of course, if everything depends on a risky change (such as a new libc) you can’t get that order, but you can still improve things.
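A sketch of how such a database might grade one (package, old version, new version) combination. The weights are invented, but they reflect the signals described above: silent success counts a little, explicit praise more, any question the upgrade asked counts against it, a hand merge or breakage counts a lot:

```python
def upgrade_score(reports):
    """Aggregate user reports for one upgrade path into a rough
    safety grade for ordering a big distribution upgrade."""
    weights = {
        "silent_ok": 1,        # upgrade completed, no complaint recorded
        "explicit_ok": 3,      # user actively said it went well
        "asked_question": -2,  # upgrade interrupted the user with a prompt
        "hand_merge": -8,      # user had to hand-merge a config file
        "broken": -20,         # explicit negative report
    }
    score = sum(weights[r] for r in reports)
    if score >= len(reports):
        return "safe"
    if score >= 0:
        return "mostly-safe"
    return "risky"
```

The "safe" bucket could then be upgraded first, or even automatically, while "risky" packages wait for a day with time to spare.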
There is a risk of people gaming the database, though in non-commercial environments that is hopefully small. It may be necessary to have reporters use IDs that get reputations. For privacy reasons, however, you want to anonymize data after verifying it.
Submitted by brad on Sat, 2006-12-16 03:15.
I’ve spoken before about ZUI (Zero User Interface) and how often it’s the right interface.
One important system that often has too complex a UI is backup. Because of that, backups
often don’t get done. In particular offsite backups, which are the only way to deal with
fire and similar catastrophe.
Here’s a rough design for a ZUI offsite backup. The only UI at a basic level is just
installing and enabling it — and choosing a good password (that’s not quite zero UI but
it’s pretty limited.)
Once enabled, the backup system will query a central server to start looking for backup
buddies. It will be particularly interested in buddies on your same LAN (though it will
not consider them offsite.) It will also look for buddies on the same ISP or otherwise close
by, network-topology wise. For potential buddies, it will introduce the two of you and let
you do bandwidth tests to measure your bandwidth.
At night, the tool would wait for your machine and network to go quiet, and likewise the
buddy’s machines. It would then do incremental backups over the network. These would
be encrypted with secure keys. Those secure keys would in turn be stored on your own
machine (in the clear) and on a central server (encrypted by your password.)
The backup would be clever. It would identify files on your system which are common
around the network — ie. files of the OS and installed software packages — and know it
doesn’t have to back them up directly, it just has to record their presence and the
fact that they exist in many places. It only has to transfer your own created files.
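The common-file detection could be a simple hash lookup, assuming a precomputed set of hashes of files shipped in distribution packages (how that set is built and distributed is left open here):

```python
import hashlib
import os

def plan_backup(root, known_package_hashes):
    """Split a directory tree into files that exist in common software
    packages (record only the hash) versus user-created files that must
    actually be transferred to backup buddies."""
    record_only, must_transfer = [], []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha1(f.read()).hexdigest()
            if digest in known_package_hashes:
                record_only.append((path, digest))
            else:
                must_transfer.append(path)
    return record_only, must_transfer
```

Only the `must_transfer` list consumes buddy disk and bandwidth; the rest is a few bytes of bookkeeping per file.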
Your backups are sent to two or more different buddies each, compressed. Regular checks
are done to see if the buddy is still around. If a buddy leaves the net, it quickly
will find other buddies to store data on. Alas, some files, like video, images and
music are already compressed, so this means twice as much storage is needed for backup
as the files took — though only for your own generated files. So you do need a disk roughly 3 times bigger than your own data requires, because you must store data for the buddies just as they are storing for you. But disk is getting very cheap.
(Another alternative is RAID-5 style. In RAID-5 style, you distribute each file to 3 or more buddies, encoded with the RAID-5 parity scheme, so that any one buddy can vanish and you can still recover the file. This means you may be able to get away with much less excess disk space. There are also redundant storage algorithms that let you tolerate the loss of 2 or even 3 of a larger pool of storers, at a much more modest cost than doubling your storage.)
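A toy illustration of the XOR parity trick RAID-5 uses, applied per shard rather than to any real buddy protocol:

```python
def parity_shard(shards):
    """XOR parity across equal-length shards: store each data shard on a
    different buddy plus this parity shard on one more, and any single
    buddy can vanish without losing the file."""
    out = bytearray(len(shards[0]))
    for shard in shards:
        for i, byte in enumerate(shard):
            out[i] ^= byte
    return bytes(out)

def recover_missing(surviving_shards, parity):
    """Rebuild the one missing data shard from the survivors plus parity
    (XOR is its own inverse, so recovery is the same operation)."""
    return parity_shard(surviving_shards + [parity])
```

With N data shards plus one parity shard, the overhead is 1/N instead of a full second copy.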
All this is, as noted, automatic. You don’t have to do anything to make it happen,
and if it’s good at spotting quiet times on the system and network, you don’t even
notice it’s happening, except a lot more of your disk is used up storing data for your buddies.
It is the automated nature that is so important. There have been other proposals
along these lines, such as MNET and some commercial network backup apps, but never an app you
just install, do quick setup and then forget about until you need to restore a
file. Only such an app will truly get used and work for the user.
Restore of individual files (if your system is still alive) is easy. You have
the keys on file, and can pull your file from the buddies and decrypt it with them.
Loss of a local disk is more work, but if you have multiple computers in
the household, the keys could be stored on other computers on the same
LAN (alas this does require UI to approve this) and then you can go to
another computer to get the keys to rebuild the lost disk. Indeed, using
local computers as buddies is a good idea due to speed, but they don’t
provide offsite backup. It would make sense for the system, at the cost of
more disk space, to do both same-LAN backup and offsite. Same-LAN for
hardware failures, offsite for building-burns-down failures.
In the event of a building-burns-down failure, you would have to go
to the central server, and decrypt your keys with that password. Then you can get your
keys and find your buddies and restore your files. Restore would not
be ZUI, because we need no motivation to do restore. It is doing regular
backups we lack motivation for.
Of course, many people have huge files on disk. This is particularly true
if you do things like record video with MythTV or make giant photographs,
as I do. This may be too large for backup over the internet.
In this case, the right thing to do is to backup the smaller files first,
and have some UI. This UI would warn the user about this, and suggest
options. One option is to not back up things like recorded video. Another
is to rely only on local backup if it’s available. Finally, the system
should offer a manual backup of the large files, where you connect a
removable disk (USB disk for example) and transfer the largest files to
it. It is up to you to take that offsite on a regular basis if you can.
However, while this has a UI and physical tasks to do, if you don’t do
it it’s not the end of the world. Indeed, your large files may get
backed up, slowly, if there’s enough bandwidth.
Submitted by brad on Wed, 2006-12-13 23:17.
A new program has appeared at San Jose Airport, and a few other airports like Orlando. It’s called “Clear” and is largely the product of the private company Clear at flyclear.com. But something smells very wrong.
To get the Clear card, you hand over $99/year. The private company keeps 90% and the TSA gets the small remainder. You then have to provide a fingerprint, an iris scan and your SSN, among other things.
What do you get for this? You get to go to the front of the security line, past all the hoi polloi. But that’s it. Once at the front of the line, you still go through the security scan the same as anybody else. Which is, actually, the right thing to do since “trusted traveller” programs which actually let you bypass the security procedure are in fact bad for security compared to random screening.
But what doesn’t make sense is this: why all the background checks and biometrics just to go to the head of the line? Why wouldn’t an ordinary photo ID card work? It doesn’t matter who you are. You could be Usama bin Ladin, because all the card does is let you skip the wait in line.
So what gives? Is this just an end run to get people more used to handing over fingerprints and other information as a natural consequence of flying? Is it a plan to change the program into one that lets the “clear” people actually avoid being x-rayed? As it stands, it certainly makes no sense.
Note that it’s not paying to get to the front of the line that makes no sense, though it’s debatable why the government should be selling such privileges. It’s the pointless security check and privacy invasion. For some time United Airlines at their terminal in SFO has had a shorter security line for their frequent flyers. But it doesn’t require any special check on who you are. If you have status or a 1st class ticket, you’re in the short line.
Submitted by brad on Wed, 2006-12-13 00:54.
Normally I’m a general-purpose computing guy. I like that the computer that runs my TV with MythTV is a general purpose computer that does far more than a Tivo ever would. My main computer is normally on and ready for me to do a thousand things.
But there is value in specialty internet appliances, especially ones that can be very low power and small. But it doesn’t make sense to have a ton of those either.
I propose a generic internet appliance box. It would be based on the same small single-board computers which run linux that you find in the typical home router and many other small network appliances. It would ideally be so useful that it would be sold in vast quantities, either in its generic form or with minor repurposings.
Here’s what would be in level 1 of the box:
- A small, single-board linux computer with low power processor such as the ARM
- Similar RAM and flash to today’s small boxes, enough to run a modest linux.
- WiFi radio, usually to be a client — but presumably adaptable to make access points (in which case you need ethernet ports, so perhaps not.)
- USB port
- Infrared port for remote control or IR keyboard (optionally a USB add-on)
Optional features would include:
- Audio output with low-fi speaker
- Small LCD panel
- DVI output for flat panel display
- 3 or 4 buttons arranged next to the LCD panel
The USB port on the basic unit provides a handy way to configure the box. On a full PC, write a thumb-drive with the needed configuration (in particular WiFi encryption keys) and then move the thumb drive to the unit. Thumb drives can also provide a complete filesystem, software or can contain photo slide shows in the version with the video output. Thumb drives could in fact contain entire applications, so you insert one and it copies the app to the box’s flash to give it a personality.
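The thumb-drive configuration step could amount to no more than this sketch. The `appliance.conf` filename and section layout are invented for illustration; any agreed-on format would do:

```python
import configparser

def load_appliance_config(mount_point):
    """Read the setup file the user wrote on a full PC and carried over
    on a thumb drive, including the WiFi encryption keys and which
    'personality' (application) the box should take on."""
    cfg = configparser.ConfigParser()
    cfg.read(f"{mount_point}/appliance.conf")
    return {
        "ssid": cfg.get("wifi", "ssid"),
        "psk": cfg.get("wifi", "psk"),
        "personality": cfg.get("appliance", "personality", fallback="generic"),
    }
```

The box would poll for a newly inserted drive, load this file, join the network, and optionally copy an application from the drive into its flash.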
Here are some useful applications:
- In many towns, you can see when a bus or train will arrive at your stop over the internet. Program the appliance with your stop and how long it takes to walk there after a warning. Press a button when you want to leave, and the box announces over the speaker a countdown of when to go to meet the transit perfectly.
- Email notifier
- MP3 output to stereo or digital speakers
- File server (USB connect to external drives — may require full ethernet.)
- VOIP phone system speakerphone/ringer/announcer
- Printer server for USB printers
- Household controller interface (X10, thermostat control, etc.)
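The transit-countdown application from the list above could be little more than this sketch, where `announce` stands in for whatever drives the box’s speaker and the predicted arrival time comes from the transit agency’s feed (both hypothetical):

```python
import time

def leave_countdown(bus_arrival_epoch, walk_seconds, announce):
    """Sleep until each warning point before you must leave to meet the
    bus, then speak through the box's speaker via announce()."""
    leave_at = bus_arrival_epoch - walk_seconds
    for warn in (300, 120, 60, 0):  # warn at 5 min, 2 min, 1 min, then go
        wait = leave_at - warn - time.time()
        if wait > 0:
            time.sleep(wait)
        if warn:
            announce(f"Leave in {warn // 60} minute(s)")
        else:
            announce("Leave now to meet your bus")
```

Press the button, and the box paces you out the door to meet the transit perfectly.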
Slap on the back of cheap flat panel display mounted on the wall, connected with video cable. Now offer a vast array of applications such as:
- Slide show
- Security video (low-res unless there is an mpeg decoder in the box.)
- Weather/News/Traffic updates
- With an infrared keyboard, be a complete terminal to other computer apps and a minimal web browser.
There are many more applications people can dream up. The idea is that one cheap box can do all these things, and since it could be made in serious quantities, it could end up cheaper than the slightly more specialized boxes, which themselves retail for well under $50 today. Indeed today’s USB printer servers turn out to be pretty close to this box.
The goal is to get these out and let people dream up the applications.
Submitted by brad on Fri, 2006-12-08 23:34.
Last week I wrote about linux’s problems with dependencies and upgrades and promised some suggestions this week.
There are a couple of ideas here to be stolen (sacrilege!) from windows which could be a start, though they aren’t my long term solution.
Microsoft takes a different approach to updates, which consists of
little patches and big service packs. The service packs integrate a lot
of changes, including major changes, into one upgrade. They are not
very frequent, and in some ways akin to the major distribution releases
of systems like Ubuntu (but not its parent Debian), Fedora Core and others.
Installing a service pack is certainly not without risks, but
the very particular combination of new libraries and changed apps in
a service pack is extensively tested together, as is also the case for
a major revision of a linux distribution. Generally installing one of
these packs has been a safe procedure. Most windows programs also do not
use hand-edited configuration files for local changes, and so don’t suffer
from the upgrade problems associated with this particular technique nearly
as much.
Submitted by brad on Wed, 2006-12-06 13:13.
There is a story that Ikonos is going to redirect a satellite to do a high-res shot of the area where CNet editor James Kim is missing in Oregon. That’s good, though sadly, too late, but they also report not knowing what to do with the data.
I frankly think that while satellite is good, for something like this, traditional aerial photography is far better, because it’s higher resolution, higher contrast, can be done under clouds, can be done at other than a directly overhead angle, is generally cheaper and on top of all this can possibly be done from existing searchplanes.
But what to do with such hi-res data? Load it into a geo-browsing system like Google Earth or Google Maps or Microsoft Live. Let volunteers anywhere in the world comb through the images and look for clues about the missing person or people. Ideally, allow the map to be annotated so that people don’t keep reporting the same clues or get tricked by the same mistakes. (In addition to annotation, you would want to track which areas had been searched the most, and offer people suggested search patterns that cover unsearched territory or special territory of interest.)
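The coverage tracking could start as simply as tallying volunteer views per image tile and suggesting the least-searched tiles first. Everything in this sketch is an illustrative assumption:

```python
def coverage_report(views, grid_w, grid_h):
    """Given a log of (x, y) tile views by volunteers, return all tiles
    ordered least-viewed first, so new searchers can be steered toward
    unsearched territory."""
    counts = {(x, y): 0 for x in range(grid_w) for y in range(grid_h)}
    for tile in views:
        counts[tile] += 1
    # Least-viewed tiles first; ties broken by position for a stable order.
    return sorted(counts, key=lambda t: (counts[t], t))
```

Annotations would layer on top of the same grid, so one volunteer’s flagged clue (or debunked false alarm) is visible to everyone who looks at that tile afterward.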
These techniques are too late for Kim, but the tools could be ready for the next missing person, so that a plane could be overflying an area on short notice, and the data processed and up within just minutes of upload and stitching.
Right now Google’s tools don’t have any facility for looking at shots from an angle, while Microsoft’s do but without the lovely interface of Keyhole/Google Earth. Angle shots can do things like see under some trees, which could be important. This would be a great public service for some company to do, and might actually make searches far faster and cheaper. Indeed, in time, people who are lost might learn that, if they can’t flash a mirror at a searchplane, they should find a spot with a view of the sky and build some sort of artificial glyph on the ground. If there were a standard glyph, algorithms could even be written to search for it in pictures. With high-res aerial photography the glyph need not be super large.
Update: It’s also noted the Kims had a cell phone, and were found because their phone briefly synced with a remote tower. They could have been found immediately if rescue crews had a small mini-cell base station (for all cell technologies) that could be mounted in a regular airplane and flown over the area. People conserving power might even know to turn on their cell phone when they hear a plane. (In a car with a car charger, you can leave the phone on.) As soon as the plane gets within a few miles (range is very good for a sky-based antenna) you could just call and ask “where are you?” or, in the sad case where they can’t answer, find the phone with signal strength or direction finding. There are plans to build cell stations to be flown over disaster areas, but this would be just a simple unit able to handle just one call. It could be a good application for software radio, which is able to receive on all bands at once with simple equipment, at a high cost in power. No problem on a plane.
Speaking of rescue, I should describe one of my father’s inventions from the 70s. He designed a very simple “sight” to be placed on a mirror. First you got a mirror (or piece of foil) and punched a hole in it you could look through. In his fancy version, he had a tube connected to the mirror with wires, but it could be handheld. The tube itself had a smaller exit hole (like a washer glued to the end of a toilet paper cardboard tube.)
Anyway, you could look through the hole in your mirror, sight the searchplane through the washer in the cardboard tube and adjust the mirror so the back of the washer is illuminated by the sunlight from the mirror. Thus you could be sure you were flashing sunlight at the plane on a regular basis. He tried to sell the military on putting a folded mirror and sighting tube in soldiers’ rescue kits. You could probably do something with your finger in a pinch: just put your finger next to the plane in your view and move the mirror so your finger lights up. Kim didn’t think of it, but taking one of the mirrors off his car would have been a good idea as he left on his trek.
Submitted by brad on Sat, 2006-12-02 16:17.
We still see a lot of thermal printers out there, particularly for printing labels, receipts and the like. They are cheap, of course, though the paper costs extra so it's not always a long term win.
However, I am seeing them used for receipts that people may need some time later, and the problem is they fade. They definitely fade if you put them in a wallet or anywhere else that will be kept close to your body. For my prepaid cell phone in Canada, for example, I need to buy the vouchers in advance so I can refill over the web before I travel back to Canada, and the most recent purchase came on thermal paper that has already partly faded and will soon be gone. I wrote down the number as a precaution, but it's just 3 weeks later.
So let's see a move away from thermal printers for receipts. They are OK for mailing labels which are very short lived, or places that will never see exposure to heat, or accidentally being left in the sun, but inkjets are so cheap now that there's not much excuse. (Though I realize inkjets have more moving parts.)
I also find for some reason that the thin thermal paper they use at Fry's for their receipts confuses the sheetfed scanner I use to scan receipts. It's not always sure there is paper in the scanner. I suppose that's mostly the scanner's fault, but it wouldn't happen if Fry's used a better paper or process.
Submitted by brad on Sat, 2006-12-02 01:13.
We all spend far too much of our time doing sysadmin. I’m upgrading and it’s as usual far more work than it should be. I have a long term plan for this but right now I want to talk about one of Linux’s greatest flaws — the dependencies in the major distributions.
When Unix/Linux began, installing free software consisted of downloading it, getting it to compile on your machine, and then installing it, hopefully with its install scripts. This generally works, but much can go wrong; it’s also a lot of work and too disconnected a process. Linuxes, starting with Red Hat, moved to the idea of precompiled binary packages and a package manager. That was later developed into an automated system where you can just say, “I want package X,” and it downloads and installs that program and everything else it needs to run, all with a single command. When it works, it “just works,” which is great.
When you have a fresh, recent OS, that is. Because when packagers build packages, they usually do so on a recent machine, typically fully updated. And the package tools then decide the new package “depends” on the latest version of all the libraries and other tools it uses. You can’t install it without upgrading all the other tools, if you can do this at all.
This would make sense if the packages really depended on the very latest libraries. Sometimes they do, but more often they don’t. However, nobody wants to test extensively with old libraries, and serious developers don’t want to run old distributions, so this is what you get.
So as your system ages, if you don’t keep it fully up to date, you run into a serious problem. At first you will find that if you want to install some new software, or upgrade to the latest version to get a fix, you also have to upgrade a lot of other stuff that you don’t know much about. Most of the time, this works. But sometimes the other upgrades are hard, or hit a problem you don’t have time to deal with.
However, as your system ages more, it gets worse. Once you are no longer running the most recent distribution release, nobody is even compiling for your old release any more. If you need the latest release of a program you care about, in order to fix a bug or get a new feature, the package system will no longer help you. Running that new release or program requires a much more serious update of your computer, with major libraries and more — in many ways the entire system. And so you do that, but you need to be careful. This often goes wrong in one way or another, so you must only do it at a time when you would be OK not having your system for a day, and taking a day or more to work on things. No, it doesn’t usually take a day — but it might. And you have to be ready for that rare contingency. Just to get the latest version of a program you care about.
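The dependency cascade described above can be modeled in a few lines: a package built on a fresh system records minimum versions for everything it links against, so installing it on an aging system reveals the pile of forced upgrades. The package names and version numbers here are made up for illustration, not drawn from any real distribution:

```python
# Sketch of the dependency problem described above: a package built on a
# fresh system records minimum versions of everything it uses, so
# installing it on an older system demands a cascade of upgrades.
# Names and versions are illustrative, not a real distro's metadata.

installed = {"libc": (2, 3), "libssl": (0, 9), "gtk": (2, 8)}

new_package_deps = {"libc": (2, 5), "libssl": (0, 9), "gtk": (2, 10)}

def upgrades_needed(installed, deps):
    """Return the packages whose installed version falls below the
    minimum version the new package was built against."""
    return sorted(
        name for name, minimum in deps.items()
        if installed.get(name, (0, 0)) < minimum
    )
```

Run against the hypothetical system above, the new package drags in upgrades of libc and gtk even though it may never exercise anything new in either library.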
Compare this to Windows. By and large, most binary software packages for Windows will install on very old versions of Windows. Quite often they will still run on Windows 95, long ago abandoned by Microsoft. Win98 is still supported. Of late, it has been more common to see packages that insist on 7-year-old Windows 2000. It’s fairly rare to find something that insists on 5-year-old Windows XP, except from Microsoft itself, which wants everybody to need to buy upgrades.
Getting a new program to run on your 5-year-old Linux is very unlikely. This is tolerated because Linux is free: there is no financial reason not to have the latest version of any package. Windows coders won’t make their program demand Windows XP because they don’t want to force you to buy a whole new OS just to run their program. Linux coders forget that the price of the OS is often a fairly small part of the cost of an upgrade.
Systems have gotten better at automatic upgrades over time, but most people I know still don’t trust them. Actively used systems acquire bit-rot over time; things start going wrong. If they’re really wrong you fix them, but after a while the legacy problems pile up. In many cases a fresh install is the best solution, even though a fresh install means a lot of work recreating your old environment. Windows fresh installs are terrible, and only recently got better.
Linux has been much better at the incremental upgrade, but even there fresh installs are called for from time to time. Debian and its children, in theory, should be able to just upgrade forever, but in practice only a few people are that lucky.
One of the big curses (one I hope to have a fix for) is the configuration file. Every program has its configuration files. However, most software authors pre-load the configuration file with helpful comments and default settings. The user, after installing, edits the configuration file to get things as they like, either by hand or with a GUI in the program. When a new version of the program comes along, there is a new version of the “default” configuration file, with new comments and new default settings. Often it’s wrong to keep running with your old file, or doing so will slowly build more bit-rot, so your installation doesn’t operate as nicely as a fresh one. You have to go in and manually merge the two files.
Some of the better software packages have realized they must divide the configuration — and even the comments — made by the package author or the OS distribution editor from the local changes made by the user. Better programs have their configuration file “include” a normally empty local file, or even better all files in a local directory. This does not allow comments but it’s a start.
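A minimal sketch of that include-a-local-directory pattern, assuming a simple key=value format (real programs each have their own syntax): the shipped defaults parse first, then each local snippet overrides them, so an upgrade can replace the default file without touching local changes.

```python
# Sketch of the "include a local directory" pattern praised above: the
# distribution ships a default config, and local overrides live in
# separate snippet files, so upgrades can replace the default untouched.
# The key=value format here is an illustrative stand-in.

def parse(text):
    """Parse simple key=value lines, ignoring blanks and # comments."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf

def load_config(default_text, snippet_texts):
    """Defaults first, then each local snippet overrides in order."""
    conf = parse(default_text)
    for snippet in snippet_texts:
        conf.update(parse(snippet))
    return conf
```

The upgrade story falls out of the structure: the packager rewrites the default text freely, and the local snippets survive unmerged.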
Unfortunately the programs that do this are few, and so any major upgrade can be scary. And unfortunately, the more you hold off on upgrading the scarier it will be. Most individual package upgrades go smoothly, most of the time. But if you leave it so you need to upgrade 200 packages at once, the odds of some problem that diverts you increase, and eventually they become close to 100%.
Ubuntu, which is probably my favourite distribution, has announced that their “Dapper Drake” release, from mid-2006, will be supported for 3 years for desktop use, and 5 years for server use. I presume that means they will keep compiling new packages to run on the older Dapper base, and test all upgrades. This is great, but it’s thanks to the generosity of Mark Shuttleworth, who uses his internet wealth to be a fabulous sugar daddy to the Linux and Ubuntu movements. The next release, “Edgy,” is already out; it’s newer and better than Dapper, but comes with half the support promise. It will be interesting to see what people choose.
When it comes to hardware, Linux is even worse. Each driver works with precisely the one kernel it is compiled for. Woe unto you once you decide to support some non-standard hardware in your Linux box that needs a special driver. Compiling a new driver isn’t hard the first time, until you realize you must do it all again any time you would like to slightly upgrade your kernel. Most users simply don’t upgrade their kernels unless they face a screaming need, like fixing a major bug or supporting some new hardware. Linux kernels come out every couple of weeks for the eager, but few are so eager.
As I get older, I find I don’t have the time to compile everything from source, or to sysadmin every piece of software I want to use. I think there are solutions to some of these problems, and a simple first one, namely an analog of Service Packs, will be discussed in the next installment.
Submitted by brad on Thu, 2006-11-30 20:56.
Parking at airports seems a terrible waste — expensive parking and your car sits doing nothing. I first started thinking about the various Car Share companies (City CarShare, ZipCar, FlexCar — effectively membership based hourly car rentals which include gas/insurance and need no human staff) and why one can’t use them from the airport. Of course, airports are full of rental car companies, which is a competitive problem, and parking space there is at a premium.
Right now the CarShare services tend to require round-trip rentals, but for airports the right idea would be one-way rentals: one member drives the car to the airport, and ideally very shortly afterward another member drives it out. In the ideal situation, coordinated by cell phone, the second member is waiting at the curb, and you just hand off the car once the service confirms their membership for you. (Members use a code or carry a key fob.) Since you would know before you entered the airport whether somebody is ready, you would know whether to go to short-term parking or the curb — or, with a bit more advance notice, a planned long-term parking lot, allocating the extra time that requires.
Of course the second member might not want to go to the location you got the car from, which creates the one-way rental problem that carshares seem to need to avoid. Perhaps better balancing algorithms could work, or in the worst case, the car might have to wait until somebody from your local depot wants to go there. That’s wasteful, though. However, I think this could be made to work as long as the member base is big enough that some member is always going in or out of the airport.
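The curbside handoff could be coordinated with something as simple as a queue: members wanting a car register when they land, and an inbound driver learns, before reaching the airport, whether to head for the curb or for long-term parking. The data model here is a hypothetical sketch, not any carshare's real system:

```python
# Sketch of the curbside handoff matching described above: arriving
# members (dropping a car) are paired with waiting members (needing one)
# and told, before entering the airport, whether to head to the curb or
# to a parking lot. The data model is an illustrative assumption.

from collections import deque

waiting_pickups = deque()   # member ids waiting at the curb for a car

def request_car(member_id):
    """Called when a member lands and wants a car."""
    waiting_pickups.append(member_id)

def arriving_driver(member_id):
    """Called when a member announces they are inbound with a car.
    Returns the handoff instruction."""
    if waiting_pickups:
        taker = waiting_pickups.popleft()
        return ("curb", taker)          # hand the car straight over
    return ("long_term_parking", None)  # no taker yet: park the car
```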
I started thinking about something grander though, namely being willing to rent your own private car out to bonded members of a true car sharing service. This is tougher to do but easier to make efficient. The hard part is bonding reliability on the part of all concerned.
Read on for more thinking on it…
Submitted by brad on Tue, 2006-11-28 14:27.
There’s a great tragedy going on in the Sudan, and not much is being done about it. Among the people trying to get out the message are hollywood celebrities. I am not faulting them for doing that, but I have a suggestion that is right up their alley.
Which is to make a movie to tell the story, a true movie that is, hopefully as moving as Schindler’s List or The Pianist. Put the story in front of the first-world audience.
And, I suggest with a sad dose of cynicism, do it with white-bread American actors. Not that African actors can’t do a great job and make a moving film like Hotel Rwanda. I just have a feeling that first-world audiences would be more affected if they saw it happening to people like them, rather than to people who live in tiny, poor Muslim villages in a remote desert. The skin colour is only part of what seems to have distanced this story to the point that little is being done. We may have to never again believe that people will keep the vow of “never again.”
So change the setting a bit, and the people, but keep the story and the atrocities, and perhaps it can have the same effect that seeing Schindler’s List has on white, Euro-descended Jews and non-Jews. And the Hollywood folks would be doing exactly what they are best at.
Submitted by brad on Fri, 2006-11-24 20:06.
I’m pleased to see that more of my photography is getting licensed for ads and web sites these days. I like the job that this PDA ad does with my 360-degree view of Shanghai’s People’s Square. Of course I can’t read the text very well.
By the way, while on my trip this month to Edmonton, and to one of my favourite spots on the planet, the Rocky Mountains in Banff and Jasper, I learned the hard way how valuable the feature I proposed earlier for digital cameras (where they would notice if they’ve been left in an unusual state after a long gap between sessions) would be. Just before the trip I had put the camera into the “small” image size mode because I was shooting some items for eBay, and you really don’t need 8-megapixel shots for that. Alas, I left it there, and this is one of those mode switches that is not at all obvious. You won’t notice it unless you pay careful attention to the tiny “s” on the LCD panel, or until you download the photos. Alas, on my 4GB card I can go a long way without downloading, so a full day’s shots, including a lovely snow-dusted Lake Louise, were shot at small size and high compression.
The other way you would spot this is that the camera shows how many shots you have left. My 4GB card shows 999 when it starts, even in large mode. But after shooting for a short while it eventually starts counting down. I only noticed I was in small mode when the 999 still hadn’t started counting down after hundreds of shots.
So this is definitely a case where the camera should notice it’s been days since I last shot, and warn me that I’m shooting with an unusual setting. I will still get quite serviceable web photos from that day, but not the wall-sized prints I love.
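The warning feature proposed above amounts to a small check at power-on: if days have passed since the last shot and a setting differs from its default, say so. The settings model and the 3-day threshold are illustrative assumptions:

```python
# Sketch of the camera feature proposed above: warn at power-on if the
# camera has sat unused for days while left in a non-default mode. The
# settings model and the 3-day gap are illustrative assumptions.

DEFAULTS = {"image_size": "large", "compression": "fine"}

def startup_warnings(settings, days_since_last_shot, gap_days=3):
    """Return the unusual settings worth flagging after a long gap."""
    if days_since_last_shot < gap_days:
        return []
    return sorted(
        f"{k} is set to {settings[k]!r} (default {DEFAULTS[k]!r})"
        for k in DEFAULTS
        if settings.get(k) != DEFAULTS[k]
    )
```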
Submitted by brad on Mon, 2006-11-20 01:27.
It’s always reported how low US voter turnout is in midterm elections. 2006, at about 40%, seems pretty poor, though it was higher than 2002.
However the statistic I would like to see is “Voter turnout in districts where there is an important, hotly contested race.” This is the number we might want to monitor from year to year.
Virginia, it turns out, which had the Webb-Allen “Macaca” race, had the highest voter turnout in its history. You wouldn’t think that after hearing about the low turnout of a typical mid-term. Of course it will also go down as the first time a major U.S. politician was taken down due to blogs, the web and YouTube. Since it was so close, almost any factor can be given credit for Allen’s loss.
It is not surprising that turnout is low when there is no contested race. The U.S., for various bizarre reasons, has most incumbents perpetually safe in their seats. This switch of 30 or so seats in the House and 6 in the Senate is considered a major upheaval, nigh a revolution, by Americans. With seats so safe, it’s no surprise there is little incentive to vote. U.S. ballots are very complex compared to those of many countries, there are often long voting lines, and you don’t get official time off to vote.
Contrast that to Canada, where a public upset with the Conservative party’s introduction of the visible Goods and Services Tax (a 7% VAT) took the party from having a majority of parliament to having TWO seats. 2, as in 1 plus 1. There’s no such safety zone for incumbents, no cry for term limits in much of the rest of the world. There, if the public gets upset it throws the bums out, or drops them back to a minority position due to the fact that there are more than 2 parties.
I hope one of the major statistical agencies starts tracking voter turnout modulated by how motivated the voters are in particular districts. Of course voter turnout is the final metric of how motivated they were, but there are other, earlier indicators in most cases.
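One way the proposed statistic might be computed, assuming a (hypothetical) competitiveness measure such as the winning margin in the prior cycle:

```python
# Sketch of the statistic proposed above: turnout computed only over
# districts with a hotly contested race, defined here (illustratively)
# as a winning margin under 5 points in the prior cycle.

def contested_turnout(districts, margin_threshold=5.0):
    """districts: list of (votes_cast, eligible_voters, prior_margin_pct).
    Returns turnout %, restricted to competitive districts."""
    votes = eligible = 0
    for cast, pool, margin in districts:
        if margin < margin_threshold:
            votes += cast
            eligible += pool
    return 100.0 * votes / eligible if eligible else 0.0
```

A real agency would want an earlier indicator than the prior margin (polling, spending), but any such measure slots into the same filter.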
Submitted by brad on Sun, 2006-11-19 14:11.
Ok, this is a silly idea, but it would make a great baby shower gift. Crib sheets — which is to say sheets to go on a baby’s bed — printed with small notes on your favourite subjects of choice — math, physics, history, as you would need for taking an exam. And who knows, maybe you can pretend if the baby sleeps surrounded by Maxwell’s equations she’ll become a genius.
Submitted by brad on Sun, 2006-11-19 00:58.
I’m not a gamer. I wrote video games 25 years ago but stopped when game creation became more about sizzle (graphics) than steak (strategy). But the story of the release of the PlayStation 3 is a fascinating one. Sony couldn’t make enough, so to get one, people camped out in front of stores, or in some cases camped out just to get a certificate saying they could buy one when units arrived. But word got out that people would pay a lot for them on eBay. The units cost about $600, depending on the model, but people were bidding thousands of dollars, even in advance, for those who had received certificates from stores.
It was amusing to read the coverage of the launch at Sony’s own Sonystyle store in San Francisco. There the press got bored as they asked people in line why they were lining up to get a PS3. The answer most commonly seemed to be not a love of gaming, but to flip the box for a profit.
And flip they did. There were several tens of thousands of eBay auctions for PS3s, and prices were astounding. About 20,000 auctions closed; another 25,000 are still running at this time. Some auctions concluded at ridiculous numbers like $110,000 for 4 units, or a more “reasonable” $20,000 for 5. Single auctions reached as high as $25,000, though in many of these cases it’s bad news for the seller, because the high bidders are people with zero eBay reputation who obviously won’t complete the transaction. In other cases serious sellers will try to claim their bid was a typo. Some auctions with serious multiple bidders got to 3 and 4 thousand dollars, but by mid-day today they were all running about $2,000, and then started dropping very quickly. As I watched, in a few minutes they fell from $1,500 to below a thousand. Still plenty of profit for those willing to brave the lines.
It’s interesting to consider what the best strategy for a seller is. It’s hard to predict what form a frenzy like this will take, and when the best price will come. The problem is eBay has a minimum 1 day for the auction, so you must guess the peak 1 day in advance. Since many buyers were keen to see the auction listing showing that the person had the unit in hand, ready to ship, the possible strategy of listing the item before going to get it bore some risks. Some showed scans of their pre-purchase.
The most successful sellers were probably those who picked a clever “buy it now” price which was taken during the early frenzy by people who did not realize how much the price would drop. All the highest auctions (including those with fake buyers) were buy-it-now results. Of course, it’s mostly luck in guessing what the right price was. I presume the buy-it-now/best-offer feature (new on eBay) might have done well for some sellers.
However, those who got a bogus buyer are punished heavily. They can re-list, but must wait a day to sell by auction, and will have lost a bunch of money in that day. If they can find the buyer they might be able to sue. If they are smart, they would re-list with a near-market buy-it-now to catch the market while it’s hot.
Real losers are those who placed a reserve on their auctions, or a high starting bid. In many cases their auctions will close with no successful bidder, and they’ll sell for less later. Using a reserve or high starting bid makes no sense when you have such a high-demand item. Those paranoid about losing money should have started bidding at no more than their purchase price. I can’t think of any reason for a reserve-price auction in this case — or in most other cases, for that matter. Other than with experimental rare products, they are just annoying.
Particularly sad was one auction where the seller claimed to be a struggling single mom who had kids that lucked out and got spots in line, along with pictures of the kids holding the boxes. She set a too-high starting price, and will have to re-list.
Another bad strategy was to do a long multi-day listing.
It’s possible the rarity of these items will grow, as people discover they just can’t get one for their kids for Christmas, but I doubt it.
The other big question this raises is this: could Sony have released the machine differently? Sony obviously left millions on the table here, about 30 to 40 million I would guess. That’s tolerable for Sony, and they may have decided to give it up for the publicity that surrounds a buying craze. But I have to wonder, would they not have been better served to conduct their own auctions, perhaps a giant Dutch auction, with some units allocated at list price by lottery or to those willing to wait in line, so that it doesn’t seem so elitist? (As if any poor person is going to buy a PS3 and keep it when they can make a fast thousand, in any event.)
Some retailers took advantage of demand by requiring customers to buy several games with the box, presumably Sony approved that. With no control from Sony all the retailers would be trying to capture all this money themselves, which they could easily have done — selling on eBay directly if need be.
I predict that in the future we will see a hot Christmas item sold through something like a Dutch auction, since being the first to do that would generate a lot of publicity. Dutch auctions are otherwise not nearly so exciting. When Google went public through one, the enemies of Dutch auctions worked to make sure people thought it was boring, causing Google to leave quite a bit of money on the table, though far less than they would have left had they used traditional underwriters.
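For the curious, a uniform-price multi-unit auction of the kind suggested above (often loosely called a Dutch auction) can be sketched in a few lines: sort bids by price, fill demand from the top, and charge every winner the lowest winning price. Ties and partial fills are simplified for illustration:

```python
# Sketch of the giant Dutch (uniform-price) auction suggested above:
# bidders submit (price, quantity) bids; the clearing price is the
# lowest winning bid, and every winner pays that same price.

def dutch_auction(bids, units):
    """bids: list of (price, qty). Returns (clearing_price, allocations)
    where allocations maps bid index -> units won."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i][0])
    remaining, alloc, clearing = units, {}, 0
    for i in order:
        if remaining == 0:
            break
        price, qty = bids[i]
        take = min(qty, remaining)
        alloc[i] = take
        remaining -= take
        clearing = price  # lowest price among winners so far
    return clearing, alloc
```

Note the design choice that makes this format feel fair: a bidder who overbids out of panic still pays only the market-clearing price, not their own bid.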
On a side note — if you shop on eBay, I recommend the Mozilla/Firefox/Iceweasel plugin “Shortship” which fixes one of eBay’s most annoying bugs. It lets you see the total of price plus shipping, and sort by it, at least within one eBay display page.
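What the plugin does amounts to a one-line sort on the buyer's true cost; the listing fields here are illustrative, not eBay's real data model:

```python
# Sketch of what the Shortship plugin does: sort listings by item price
# plus shipping, the total a buyer actually pays.

def sort_by_total(listings):
    """listings: list of dicts with 'price' and 'shipping' in dollars."""
    return sorted(listings, key=lambda l: l["price"] + l["shipping"])
```

This is exactly the bug the plugin fixes: a $10 item with $15 shipping should sort below a $20 item with free shipping, and a price-only sort gets that backwards.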
Submitted by brad on Fri, 2006-11-17 16:43.
Differential pricing occurs when a company attempts to charge different prices to two different customers for what is essentially the same product. One place we all encounter it a lot is air travel, where it seems no two passengers paid the same price for their tickets on any given flight. You also see it in things like one of my phones, which has 4 line buttons but only 2 work — I must pay $30 for a code to enable the other 2 buttons.
The public tends to hate differential pricing, though in truth we should only hate it when it makes us pay more. Clearly some of the time we’re paying less than we might pay if differential pricing were not possible or illegal.
So even if differential pricing is neutral overall, one can still rail when it punishes or overcharges the wrong thing. There might be a better way to achieve the vendor’s goal of charging each customer the most they will tolerate — hopefully subject to competition. Competition makes differential pricing complex, as it’s only stable if all competitors use roughly the same strategy.
In air travel, the prevailing wisdom has been that business travellers will tolerate higher ticket prices than vacation travellers, and so most of the very complex pricing rules in that field are based on that philosophy. Business travellers don’t want to stay over weekends, they like to change their flights, they want to fly a combination of one-way trips and they want to book flights at short notice. (They also like to fly business class.) All these things cost a lot more in the current regime.
Because of this, almost all the travel industry has put a giant surcharge on flexibility. It makes sense that it might cost a bit more — it’s much easier to schedule your airline or hotel if people will book well in advance and keep to their booking — but it seems as though the surcharge has gotten immense, where flexible travel can cost 2 to 4 times what rigidly scheduled travel costs.
Missing the last flight of the day can be wallet-breaking. Indeed, there are many arguments that since an empty seat or hotel room is largely wasted, vendors might be encouraged to provide cheaper tickets to those coming in at the last minute, rather than the most expensive. (And sometimes they do. In the old days flying standby was the cheapest way to fly, suitable only for students or the poor. There are vendors that sell cheap last minute trips.)
Vendors have shied away from selling cheap last-minute travel because they don’t want customers to find it reliable enough to depend on. But otherwise it makes a lot of sense.
So my “Solve this” problem is to come up with schemes that still charge people as much as they will tolerate, but don’t punish travel flexibility as much.
One idea is to come up with negative features for cheap tickets that flexible, non-business travellers will tolerate but serious business travellers and wealthy travellers will not. For example, tickets might come with a significant (perhaps 10-20%) chance of being bumped, ideally with sufficient advance notice by cell phone that you don’t waste time going to the airport. For example, the airline might sell a cheap ticket but effectively treat the seat as available for sale again to a higher-paying passenger if they should come along. You might learn the morning of your trip that somebody else bought your seat, and that you’ll be going on a different flight or even the next day. They would put a cap on how much they could delay you, and that cap might change the price of your ticket.
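A first cut at pricing such a bumpable ticket: discount the full fare by at least the expected cost of being bumped. The delay-cost figure is a stand-in assumption, not airline data:

```python
# Sketch of pricing the bumpable ticket described above: the discount
# should at least compensate the expected cost of being bumped. The
# delay-cost figure is an illustrative assumption.

def bumpable_price(full_fare, bump_prob, delay_cost):
    """A fair-ish price for a ticket that is bumped with probability
    bump_prob, where a bump costs the traveller delay_cost in
    inconvenience: full fare minus the expected bump cost."""
    return full_fare - bump_prob * delay_cost
```

So a $400 fare with a 15% bump chance and a (assumed) $200 inconvenience cost would price at $370 at most; the airline would presumably discount further still, since it also gains the option value of reselling the seat.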
For a person with a flexible work schedule (like a consultant) or the retired, they might well not care much about exactly what day they get back home. They might like the option to visit a place until they feel like returning, with the ability to get a ticket then, but the risk that it might not be possible for a day or two more. Few business travellers would buy such a ticket.
Such tickets would be of most value to those with flexible accommodations, who are staying with friends and family, for example, or in flexible hotels. Rental cars tend to be fairly flexible.
Of course, if you’re willing to be bumped right at the airport, that should get you an even cheaper ticket, but that’s quite a burden. And with today’s ubiquitous cell phones and computer systems there’s little reason not to inform people well in advance.
This technique could even provide cheaper first-class. You might buy a ticket at a lower price, a bit above coach, that gets you a first class seat half the time but half the time puts you in coach because somebody willing to pay the real price of first class bought a ticket. (To some extent, the upgrade system, where upgrades are released at boarding time based on how many showed up for first class, does something like this.)
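The half-the-time first-class fare above is just an expected-value calculation; the fares and probability here are illustrative:

```python
# Sketch of the lottery first-class fare above: price the ticket at the
# expected value of the seat you actually get. Fares and probability
# are illustrative assumptions.

def lottery_fare(first_fare, coach_fare, p_first):
    """Expected-value price for a seat that is first class with
    probability p_first and coach otherwise."""
    return p_first * first_fare + (1 - p_first) * coach_fare
```

With an (assumed) $1000 first-class fare and $300 coach fare at even odds, the fair price is $650 — a bit above coach, as the post suggests, once the airline's margin pulls it back toward the coach end.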
Any other ideas how airlines could provide cheaper flexible tickets without eating into their business flyer market? If only one airline tries a new idea, you get an interesting pattern where everybody who likes the new fare rules switches over to that airline in the competitive market, and the idea is forced to spread.
Added note: In order to maintain most of their differential pricing schemes today, airlines need and want the photo-ID requirement for flying. If tickets (including tickets to half a return trip) could be easily resold on the web to anybody, they could not use the systems they currently use. However, the system I suggest, which requires the passenger be willing to be bumped, inhibits resale without requiring any type of ID. A business traveller might well buy a cheap ticket at the last minute from somebody who bought earlier, but they are going to be less willing to buy a ticket with unacceptable delay risks associated with it.
Submitted by brad on Wed, 2006-11-15 15:07.
I’ve written before about how one of the greatest flaws in the modern political system is the immense need of candidates to raise money (largely for TV ads), which makes them beholden to contributors, combined with the enhanced ability incumbents have at raising that money. Talk to any member of Congress and they will tell you they start raising money the day after the election.
Last year I proposed one radical idea, a special legitimizing of political spam done through the elections office. That will take some time as it requires a governmental change. So other factors are coming forward.
In some states and nations, efforts are already underway to have the government finance elections. The Presidential campaign fund that you contribute to whether you check the box on the tax return or not is one effort in this direction.
I propose that the operators of the big, advertising-supported web sites, in particular sites like Yahoo, Google, Microsoft, MySpace and the like, join together to create a program giving free web advertising to registered candidates on a fair basis. This could be done simply by providing unsold inventory, which is close to free, or it could be real, valuable inventory, including credits for targeted ads.
Of course, not everybody reads the web all day, so this only reaches one segment of the population, but it reaches a lot of people. The main goal is to reduce the need, in the minds of candidates, to raise a lot of money for TV ads. They won’t stop entirely, but it might get scaled back.
Such a system would allow users the option of setting a cookie to provide preferences for the political ads they see. While each candidate would get one free shot, voters could opt out of ads for specific candidates or races. (In some cases the geography matcher would get it wrong, and they could correct the district the system thinks they are in.) They could also tone down the amount of advertising, or opt in or out of certain styles (Flash, animated, text, video).
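The opt-out cookie could drive a simple filter on the ad inventory; the field names and preference structure below are assumptions, not any real consortium's schema:

```python
# Sketch of the opt-out cookie described above: filter the candidate-ad
# inventory against a voter's stored preferences. Field names are
# illustrative assumptions.

def filter_ads(ads, prefs):
    """ads: list of dicts with 'candidate', 'race', 'style'.
    prefs: dict with sets 'blocked_candidates', 'blocked_races',
    'allowed_styles'. Returns the ads this voter will still see."""
    return [
        ad for ad in ads
        if ad["candidate"] not in prefs["blocked_candidates"]
        and ad["race"] not in prefs["blocked_races"]
        and ad["style"] in prefs["allowed_styles"]
    ]
```

Because the consortium, not the candidate, serves the ads, this filtering can happen without any candidate learning who opted out of what.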
It would be up to candidates to tune their message, and not overdo things or annoy voters, pushing them to opt out.
There can’t be too much opting out though, because the goal here is to deliver the same thing that candidates rely on TV for — pushing their message at voters who have not gone seeking it. If we don’t provide that, we’ll never cut the dependency on TV and other intrusive ads.
Allowing these ads to be intrusive seems wrong, but the real thing to do is consider the competition, and what its thirst for money does to society. Thanks to the internet, we’ve reduced the price of advertising by an order of magnitude. If the price of advertising is what corrupts the political system, it seems we should have a shot of fixing the problem.
Ads would be served by the special consortium managing the opt-out system, not the candidate, in order to protect privacy. So if you click on an ad for a candidate, the first landing page is not hosted by the candidate, but may have links to their site.
A system would have to be devised to allocate “importance” to elections, i.e., how many ads do the candidates for President get vs. those for state comptroller?
One risk is that the IRS or other forces might try to declare this program a political contribution by the web sites. If applied fairly to all candidates, we’ll need a ruling that states it is not a contribution. This is needed, because otherwise sites will balk at the idea of running free ads for candidates they despise.
If the system got powerful enough, it could even make a bolder claim. It could offer the free advertising only to candidates who agree to spending limits in other media. On one hand, this is just what most campaign finance reform programs do to sidestep the First Amendment. On the other hand, it may seem like an antitrust violation: deliberately giving stuff away not just to kill the “competition” but actually forbidding the candidates from spending too much with the competition.
This need not be limited to the web of course. Other media could join in, though the ones that already make a ton of money from political advertising (TV, radio) are not so likely to join.
This won’t solve the whole problem, but it could make a dent, and even a dent is pretty important in a problem as major as this.
Submitted by brad on Wed, 2006-11-15 00:37.
I go to many conferences, and most of them seem to want to give me a nice canvas bag, and often a shirt as well. Truth is though, I now have a stack of about 20 bags in my closet. I’ve used some of the bags, typically the backpacks, but when I have so many other bags I don’t feel a strong motivation to walk around with a briefcase or laptop bag with a giant sponsor’s logo on it, or worse, a collection of 10 logos. No matter how nice the bag is. In addition, even if I got logo-free bags I have no need for 20 of them, but I can’t really give away logo covered bags as gifts.
Which means the sponsor wasted their money. And I think this is common, for while I sometimes see people carrying a sponsor bag outside the confines of a conference, it’s pretty rare compared to the number given out. If you want me to be your billboard, I want more than a bag for it.
Might some sponsors take the plunge and make a bag with the sponsor’s logo inside the bag? Or perhaps if on the outside, in a more subtle way. This seems stupid at first, but a bag I actually use, which at least reminds me of the company when I use it, is better than a bag that stays stacked in a closet. (Of course, logo-inside bags would be given away more, which may not accomplish much.) Perhaps the sponsors should go in for designer bags, and turn their logos into desirable designer logos?
If your name is Versace, you can get people to pay to carry your advertising, but sorry, not if your name is AT&T. I hope you can get over it. And while a bag is useful for carrying stuff home from the conferences and even storing literature, truth is you can use a $1 bag for that, not a $15 one. We really have to hunt to find better conference giveaways than bags, at least at conferences whose attendees all attend other fancy conferences.