Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.
This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see menu on right) and some are sub blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.
Submitted by brad on Sat, 2006-12-02 16:17.
We still see a lot of thermal printers out there, particularly for printing labels, receipts and the like. They are cheap, of course, though the paper costs extra, so it's not always a long-term win.
However, I am seeing them used for receipts that people may need to use some time later, and the problem is they fade. They definitely fade if you put them in a wallet or anywhere else that will be kept on your body. For my prepaid cell phone in Canada, for example, I need to buy the vouchers in advance so I can refill over the web before I travel back to Canada, and the most recent purchase came on thermal paper that has already partly faded and will be gone soon. I wrote down the number as a precaution, but it's been just 3 weeks since I bought it.
So let's see a move away from thermal printers for receipts. They are OK for mailing labels which are very short lived, or places that will never see exposure to heat, or accidentally being left in the sun, but inkjets are so cheap now that there's not much excuse. (Though I realize inkjets have more moving parts.)
I also find for some reason that the thin thermal paper they use at Fry's for their receipts confuses the sheetfed scanner I use to scan receipts. It's not always sure there is paper in the scanner. I suppose that's mostly the scanner's fault, but it wouldn't happen if Fry's used a better paper or process.
Submitted by brad on Sat, 2006-12-02 01:13.
We all spend far too much of our time doing sysadmin. I’m upgrading and it’s as usual far more work than it should be. I have a long term plan for this but right now I want to talk about one of Linux’s greatest flaws — the dependencies in the major distributions.
When Unix/Linux began, installing free software consisted of downloading it, getting it to compile on your machine, and then installing it, hopefully with its install scripts. This mostly works, but much can go wrong. It’s also lots of work and it’s too disconnected a process. Linuxes, starting with Red Hat, moved to the idea of precompiled binary packages and a package manager. That was later developed into an automated system where you can just say, “I want package X,” and it downloads and installs that program and everything else it needs to run with a single command. When it works, it “just works,” which is great.
When you have a fresh, recent OS, that is. Because when packagers build packages, they usually do so on a recent machine, typically fully updated. And the package tools then decide the new package “depends” on the latest version of all the libraries and other tools it uses. You can’t install it without upgrading all the other tools, if you can do this at all.
This would make sense if the packages really depended on the very latest libraries. Sometimes they do, but more often they don’t. However, nobody wants to test extensively with old libraries, and serious developers don’t want to run old distributions, so this is what you get.
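To make the version-stamping concrete, here is a hypothetical Debian-style package stanza (names and versions invented); the build tools record the library versions found on the packager's machine as minimums:

```
Package: exampletool
Version: 1.2-1
Depends: libc6 (>= 2.3.6), libssl0.9.8 (>= 0.9.8c), libstdc++6 (>= 4.1.1)
```

A system whose libc6 predates 2.3.6 cannot install this package without upgrading libc6 first, even if exampletool would in fact run happily against the older library.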
So as your system ages, if you don’t keep it fully up to date, you run into a serious problem. At first you will find that if you want to install some new software, or upgrade to the latest version to get a fix, you also have to upgrade a lot of other stuff that you don’t know much about. Most of the time, this works. But sometimes the other upgrades are hard, or hit a problem you don’t have time to deal with.
However, as your system ages more, it gets worse. Once you are no longer running the most recent distribution release, nobody is even compiling for your old release any more. If you need the latest release of a program you care about, in order to fix a bug or get a new feature, the package system will no longer help you. Running that new release or program requires a much more serious update of your computer, with major libraries and more — in many ways the entire system. And so you do that, but you need to be careful. This often goes wrong in one way or another, so you must only do it at a time when you would be OK not having your system for a day, and taking a day or more to work on things. No, it doesn’t usually take a day — but it might. And you have to be ready for that rare contingency. Just to get the latest version of a program you care about.
Compare this to Windows. By and large, most binary software packages for Windows will install on very old versions of Windows. Quite often they will still run on Windows 95, long ago abandoned by Microsoft. Win98 is still supported. Of late, it has been more common to get packages that insist on 7-year-old Windows 2000. It’s fairly rare to get something that insists on 5-year-old Windows XP, except from Microsoft itself, which wants everybody to need to buy upgrades.
Getting a new program for your 5-year-old Linux is very unlikely. This is tolerated because Linux is free. There is no financial reason not to have the latest version of any package. Windows coders won’t make their program demand Windows XP because they don’t want to force you to buy a whole new OS just to run their program. Linux coders forget that the price of the OS is often a fairly small part of the cost of an upgrade.
Systems have gotten better at automatic upgrades over time, but still most people I know don’t trust them. Actively used systems acquire bit-rot over time; things start going wrong. If something is really wrong you fix it, but after a while the legacy problems pile up. In many cases a fresh install is the best solution, even though a fresh install means a lot of work recreating your old environment. Windows fresh installs are terrible, and only recently got better.
Linux has been much better at the incremental upgrade, but even there fresh installs are called for from time to time. Debian and its children, in theory, should be able to just upgrade forever, but in practice only a few people are that lucky.
One of the big curses (one I hope to have a fix for) is the configuration file. Programs all have their configuration files. However, most software authors pre-load the configuration file with helpful comments and default configurations. The user, after installing, edits the configuration file to get things as they like, either by hand, or with a GUI in the program. When a new version of the program comes along, there is a new version of the “default” configuration file, with new comments and new default configuration. Often it’s wrong to keep running with your old file, or doing so slowly builds more bit-rot, so your version doesn’t operate as nicely as a fresh one. You have to go in and manually merge the two files.
Some of the better software packages have realized they must divide the configuration — and even the comments — made by the package author or the OS distribution editor from the local changes made by the user. Better programs have their configuration file “include” a normally empty local file, or even better all files in a local directory. This does not allow comments but it’s a start.
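A minimal sketch of that pattern, in Python with hypothetical file contents and keys: load the package-supplied defaults, then overlay every local fragment, so an upgrade can replace the default file without touching the locally owned ones.

```python
# Sketch: package defaults overlaid by local settings, "conf.d" style.
# File contents and keys here are invented; real programs parse richer formats.

def parse(text):
    """Parse simple 'key = value' lines, ignoring blanks and # comments."""
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

def effective_config(default_text, local_texts):
    """Defaults first, then each local fragment; later entries win."""
    config = parse(default_text)
    for text in local_texts:          # e.g. sorted files from a conf.d/ dir
        config.update(parse(text))
    return config

defaults = "workers = 4   # package default\nlog_level = warn\n"
local = ["log_level = debug\n"]       # the only file the user owns
print(effective_config(defaults, local))
```

The package author can ship a new `defaults` on every upgrade, and the user's one-line local file survives untouched.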
Unfortunately the programs that do this are few, and so any major upgrade can be scary. And unfortunately, the more you hold off on upgrading the scarier it will be. Most individual package upgrades go smoothly, most of the time. But if you leave it so you need to upgrade 200 packages at once, the odds of some problem that diverts you increase, and eventually they become close to 100%.
Ubuntu, which is probably my favourite distribution, has announced that their “Dapper Drake” release, from mid-2006, will be supported for desktop use for 3 years, and 5 years for server use. I presume that means they will keep compiling new packages to run on the older Dapper base, and test all upgrades. This is great, but it’s thanks to the generosity of Mark Shuttleworth, who uses his internet wealth to be a fabulous sugar daddy to the Linux and Ubuntu movements. Already the next release, “Edgy,” is out, and it’s newer and better than Dapper, but with half the support promise. It will be interesting to see what people choose.
When it comes to hardware, Linux is even worse. Each driver works with precisely the one kernel it is compiled for. Woe unto you once you decide to support some non-standard hardware in your Linux box that needs a special driver. Compiling a new driver isn’t hard once, until you realize you must do it all again any time you would like to slightly upgrade your kernel. Most users simply don’t upgrade their kernels unless they face a screaming need, like fixing a major bug or buying some new hardware. Linux kernels come out every couple of weeks for the eager, but few are so eager.
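For readers who haven't built one: an out-of-tree driver is compiled against one specific kernel's build tree, so the build must be repeated for every kernel you install. A typical (hypothetical) module Makefile makes the coupling visible:

```make
# Hypothetical out-of-tree driver "mydriver". The build borrows the
# kbuild system of whichever kernel is currently running, so the
# resulting mydriver.ko loads only on that exact kernel version.
obj-m := mydriver.o

all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
```

Upgrade the kernel and `uname -r` changes, so the module must be rebuilt before the new kernel can use the hardware.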
As I get older, I find I don’t have the time to compile everything from source, or to sysadmin every piece of software I want to use. I think there are solutions to some of these problems, and a simple first one, an analog of Service Packs, will be discussed in the next installment.
Submitted by brad on Thu, 2006-11-30 20:56.
Parking at airports seems a terrible waste — expensive parking and your car sits doing nothing. I first started thinking about the various Car Share companies (City CarShare, ZipCar, FlexCar — effectively membership based hourly car rentals which include gas/insurance and need no human staff) and why one can’t use them from the airport. Of course, airports are full of rental car companies, which is a competitive problem, and parking space there is at a premium.
Right now the CarShare services tend to require round-trip rentals, but for airports the right idea would be one-way rentals — one member drives the car to the airport, and ideally very shortly after, another member drives the car out of the airport. In an ideal situation, coordinated by cell phone, the 2nd member is waiting at the curb, and you would just hand off the car once the system confirms their membership for you. (Members use a code or carry a key fob.) Since you would know in advance, before you entered the airport, whether somebody is ready, you would know whether to go to short-term parking or the curb — or to a planned long-term parking lot, with a bit more advance notice so you can allocate the extra time for that.
Of course the 2nd member might not want to go to the location you got the car from, which creates the one-way rental problem that carshares seem to need to avoid. Perhaps better balancing algorithms could work, or in the worst case, the car might have to wait until somebody from your local depot wants to go there. That’s wasteful, though. However, I think this could be made to work as long as the member base is big enough that some member is regularly going in and out of the airport.
I started thinking about something grander though, namely being willing to rent your own private car out to bonded members of a true car sharing service. This is tougher to do but easier to make efficient. The hard part is bonding reliability on the part of all concerned.
Read on for more thinking on it…
Submitted by brad on Tue, 2006-11-28 14:27.
There’s a great tragedy going on in the Sudan, and not much is being done about it. Among the people trying to get out the message are Hollywood celebrities. I am not faulting them for doing that, but I have a suggestion that is right up their alley.
Which is to make a movie to tell the story, a true story, hopefully as moving as Schindler’s List or The Pianist. Put the story in front of the first-world audience.
And, I suggest with a sad dose of cynicism, do it with white-bread American actors. Not that African actors can’t do a great job and make a moving film, like Hotel Rwanda. I just have a feeling that first-world audiences would be more affected if they saw it happening to people like them, rather than to people who live in tiny, poor Muslim villages in a remote desert. The skin colour is only part of what seems to have distanced this story to the point that little is being done. We may have to never again believe that people will keep the vow of never again.
So change the setting a bit, and the people, but keep the story and the atrocities, and perhaps it can have the same effect that seeing a Schindler’s List can have on white, European-descended Jews and non-Jews. And the Hollywood folks would be doing exactly what they are best at.
Submitted by brad on Fri, 2006-11-24 20:06.
I’m pleased to see that more of my photography is getting licensed for ads and web sites these days. I like the job that this PDA ad does with my 360-degree view of Shanghai People’s Square. Of course I can’t read the text very well.
By the way, I learned the hard way how valuable the feature I proposed earlier for digital cameras would be — the one where they notice they’ve been left in an unusual state after a long gap between sessions — while on my trip this month to Edmonton, and one of my favourite spots on the planet, the Rocky Mountains in Banff and Jasper. Just before the trip I had put the camera into the “small” image size mode because I was shooting some stuff for eBay, and you really don’t need 8-megapixel shots for that. Alas, I left it there, and this is one of those mode switches which is not at all obvious. You won’t notice it unless you pay careful attention to the tiny “s” on the LCD panel, or unless you download the photos. Alas, on my 4GB card I can go a long way without downloading, so a full day’s shots, including a lovely snow-dusted Lake Louise, were shot in small size, high compression.
The other way you would spot this is that the camera shows you how many shots you have left. My 4GB card shows 999 when it starts, even in large mode. But after shooting for a short while it eventually starts counting down. I only noticed I was in small mode when the 999 didn’t start counting down after hundreds of shots.
So this is definitely a case where the camera should notice it’s been days since I shot, and warn me I’m shooting with this unusual setting. I will still get quite serviceable web photos from that day, but not the wall sized prints I love.
Submitted by brad on Mon, 2006-11-20 01:27.
It’s always reported how low US voter turnout is in midterm elections. 2006, at about 40%, seems pretty poor, though it was higher than 2002.
However the statistic I would like to see is “Voter turnout in districts where there is an important, hotly contested race.” This is the number we might want to monitor from year to year.
It turns out that Virginia, which had the Webb-Allen “Macaca” race, had the highest voter turnout in its history. You wouldn’t think that after hearing about the low turnout of a typical mid-term. Of course it will also go down as the first time a major U.S. politician was taken down due to blogs, the web and YouTube. Since it was so close, almost any factor can be given credit for Allen’s loss.
It is not surprising that when there is no contested race, turnout is low. The U.S., for various bizarre reasons, has most incumbents always safe in their seats. This switch of 30 or so seats in the House and 6 in the Senate is considered a major upheaval, nigh a revolution, by Americans. With seats so safe, it is no surprise there is little incentive to vote. U.S. ballots are very complex compared to many countries’, there are often long voting lines, and you don’t get official time off to vote.
Contrast that to Canada, where a public upset with the Conservative party’s introduction of the visible Goods and Services Tax (a 7% VAT) took the party from having a majority of parliament to having TWO seats. 2, as in 1 plus 1. There’s no such safety zone for incumbents, and no cry for term limits in much of the rest of the world. There, if the public gets upset it throws the bums out, or drops them to a minority position because there are more than 2 parties.
I hope one of the major statistical agencies starts tracking voter turnout modulated by how motivated the voters are in particular districts. Of course voter turnout is the final metric of how motivated they were, but there are other, earlier indicators in most cases.
Submitted by brad on Sun, 2006-11-19 14:11.
Ok, this is a silly idea, but it would make a great baby shower gift. Crib sheets — which is to say sheets to go on a baby’s bed — printed with small notes on your favourite subjects of choice — math, physics, history, as you would need for taking an exam. And who knows, maybe you can pretend if the baby sleeps surrounded by Maxwell’s equations she’ll become a genius.
Submitted by brad on Sun, 2006-11-19 00:58.
I’m not a gamer. I wrote video games 25 years ago but stopped when game creation became more about sizzle (graphics) than steak (strategy). But the story of the release of the Playstation 3 is a fascinating one. Sony couldn’t make enough, so to get them, people camped out in front of stores, or in some cases camped out just to get a certificate saying they could buy one when stock arrived. But word got out that people would pay a lot for them on eBay. The units cost about $600, depending on what model you got, but people were bidding thousands of dollars even in advance, for those who had received certificates from stores.
It was amusing to read the coverage of the launch at Sony’s own Sonystyle store in San Francisco. There the press got bored as they asked people in line why they were lining up to get a PS3. The answer most commonly seemed to be not a love of gaming, but to flip the box for a profit.
And flip they did. There were several tens of thousands of eBay auctions for PS3s, and prices were astounding. About 20,000 auctions closed. Another 25,000 are still running at this time. Some auctions concluded for ridiculous numbers like $110,000 for 4 of them, or a more “reasonable” $20,000 for 5. Single auctions reached as high as $25,000, though in many of these cases, it’s bad news for the seller because the high bidders are people with zero eBay reputation who obviously won’t complete the transaction. In other cases, seemingly serious bidders will try to claim their bid was a typo. There are some auctions with serious multiple bidders that got to 3 and 4 thousand dollars, but by mid-day today they were all running about $2,000, and they started dropping very quickly. As I watched, in a few minutes they fell from $1,500 to below a thousand. Still plenty of profit for those willing to brave the lines.
It’s interesting to consider what the best strategy for a seller is. It’s hard to predict what form a frenzy like this will take, and when the best price will come. The problem is eBay has a minimum 1 day for the auction, so you must guess the peak 1 day in advance. Since many buyers were keen to see the auction listing showing that the person had the unit in hand, ready to ship, the possible strategy of listing the item before going to get it bore some risks. Some showed scans of their pre-purchase.
The most successful sellers were probably those who picked a clever “buy it now” price which was taken during the early frenzy by people who did not realize how much the price would drop. All the highest auctions (including those with fake buyers) were buy-it-now results. Of course, it’s mostly luck in guessing what the right price was. I presume the buy-it-now/best-offer feature (new on eBay) might have done well for some sellers.
However, those who got a bogus buyer are punished heavily. They can re-list, but must wait a day to sell by auction, and will have lost a bunch of money in that day. If they can find the buyer they might be able to sue. If they are smart, they would re-list with a near-market buy-it-now to catch the market while it’s hot.
Real losers are those who placed a reserve on their auctions, or a high starting bid price. In many cases their auctions will close with no successful bidder, and they’ll sell for less later. Using a reserve or high starting bid makes no sense when you have such a high-demand item. Those paranoid about losing money should have at most started bidding at their purchase price. I can’t think of any reason for a reserve price auction in this case — or in most other cases, for that matter. Other than with experimental rare products, they are just annoying.
Particularly sad was one auction where the seller claimed to be a struggling single mom who had kids that lucked out and got spots in line, along with pictures of the kids holding the boxes. She set a too-high starting price, and will have to re-list.
Another bad strategy was to do a long multi-day listing.
It’s possible the rarity of these items will grow, as people discover they just can’t get one for their kids for Christmas, but I doubt it.
The other big question this raises is this: could Sony have released the machine differently? Sony obviously left millions on the table here, about $30 to $40 million I would guess. That’s tolerable for Sony, and they might have decided to give it up for the publicity that surrounds a buying craze. But I have to wonder, would they not have been better served to conduct their own auctions, perhaps a giant Dutch auction, for the units, with some allocated at list price by lottery or for those willing to wait in line so that it doesn’t seem so elitist. (As if any poor person is going to buy a PS3 and keep it if they can make a fast thousand in any event.)
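For illustration, here are the mechanics of a uniform-price multi-unit auction, the style usually meant by “Dutch auction” on eBay, with invented bids: the top bids win and every winner pays the same clearing price, here the lowest winning bid.

```python
# Sketch of a uniform-price ("Dutch") multi-unit auction with invented
# numbers: the top `supply` bids win, and every winner pays the same
# clearing price -- the lowest winning bid.

def dutch_auction(bids, supply):
    ranked = sorted(bids, reverse=True)
    winners = ranked[:supply]
    clearing_price = winners[-1] if winners else None
    return winners, clearing_price

bids = [2500, 1800, 1200, 1100, 900, 650, 600]
winners, price = dutch_auction(bids, supply=4)
print(winners, price)   # the four highest bids win; all pay 1100
```

The frenzy bidders set the allocation, but nobody overpays relative to the marginal winner, which is part of why such auctions feel less exciting than the eBay free-for-all described above.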
Some retailers took advantage of demand by requiring customers to buy several games with the box, presumably Sony approved that. With no control from Sony all the retailers would be trying to capture all this money themselves, which they could easily have done — selling on eBay directly if need be.
I predict in the future we will see a hot Christmas item sold through something like a Dutch auction, since being the first to do that would generate a lot of publicity. Dutch auctions are otherwise not nearly so exciting. When Google went public through one, the enemies of Dutch auctions worked to make sure people thought it was boring, causing Google to leave quite a bit of money on the table, but far less than they would have left had they used traditional underwriters.
On a side note — if you shop on eBay, I recommend the mozilla/firefox/iceweasel plugin “Shortship” which fixes one of eBay’s most annoying bugs. It lets you see the total of price plus shipping, and sort by it, at least within one ebay display page.
Submitted by brad on Fri, 2006-11-17 16:43.
Differential pricing occurs when a company attempts to charge different prices to two different customers for what is essentially the same product. One place we all encounter it a lot is air travel, where it seems no two passengers paid the same price for their tickets on any given flight. You also see it in things like one of my phones, which has 4 line buttons but only 2 work — I must pay $30 for a code to enable the other 2 buttons.
The public tends to hate differential pricing, though in truth we should only hate it when it makes us pay more. Clearly some of the time we’re paying less than we might pay if differential pricing were not possible or illegal.
So even if differential pricing is neutral overall, one can still object when it punishes or overcharges the wrong thing. There might be a better way to get at the vendor’s goal of charging each customer the most they will tolerate — hopefully subject to competition. Competition makes differential pricing complex, as it’s only stable if all competitors use roughly the same strategy.
In air travel, the prevailing wisdom has been that business travellers will tolerate higher ticket prices than vacation travellers, and so most of the very complex pricing rules in that field are based on that philosophy. Business travellers don’t want to stay over weekends, they like to change their flights, they want to fly a combination of one-way trips and they want to book flights at short notice. (They also like to fly business class.) All these things cost a lot more in the current regime.
Because of this, almost all the travel industry has put a giant surcharge on flexibility. It makes sense that it might cost a bit more — it’s much easier to schedule your airline or hotel if people will book well in advance and keep to their booking — but it seems as though the surcharge has gotten immense, where flexible travel can cost 2 to 4 times what rigidly scheduled travel costs.
Missing the last flight of the day can be wallet-breaking. Indeed, there are many arguments that since an empty seat or hotel room is largely wasted, vendors might be encouraged to provide cheaper tickets to those coming in at the last minute, rather than the most expensive. (And sometimes they do. In the old days flying standby was the cheapest way to fly, suitable only for students or the poor. There are vendors that sell cheap last minute trips.)
Vendors have shied away from selling cheap last-minute travel because they don’t want customers to find it reliable enough to depend on. But otherwise it makes a lot of sense.
So my “Solve this” problem is to come up with schemes that still charge people as much as they will tolerate, but don’t punish travel flexibility as much.
One idea is to come up with negative features for cheap tickets that flexible, non-business travellers will tolerate but serious business travellers and wealthy travellers will not. For example, tickets might come with a significant (perhaps 10-20%) chance of being bumped, ideally with sufficient advance notice by cell phone that you don’t waste time going to the airport. In effect, the airline would sell a cheap ticket but treat the seat as available for sale again to a higher-paying passenger should one come along. You might learn the morning of your trip that somebody else bought your seat, and that you’ll be going on a different flight or even the next day. They would put a cap on how much they could delay you, and that cap might change the price of your ticket.
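As a back-of-envelope sketch of how such a ticket might be priced (all numbers invented): the discount should at least cover the passenger's expected cost of being displaced.

```python
# Back-of-envelope sketch with invented numbers: a bumpable ticket is
# worth the full fare minus the expected cost of being displaced.

def bumpable_fare(full_fare, bump_probability, delay_cost):
    """Fare at which a risk-neutral traveller is indifferent."""
    return full_fare - bump_probability * delay_cost

# A $400 fare, a 15% chance of being bumped, and a $300 value placed
# on the possible delay suggest a price of about $355.
print(bumpable_fare(400, 0.15, 300))
```

Travellers who place a high value on their schedule (a large `delay_cost`) see little discount worth taking, which is exactly the self-selection the scheme relies on.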
A person with a flexible work schedule (like a consultant), or a retiree, might well not care much about exactly what day they get back home. They might like the option to visit a place until they feel like returning, with the ability to get a ticket then, but the risk that it might not be possible for a day or two more. Few business travellers would buy such a ticket.
Such tickets would be of most value to those with flexible accommodations, who are staying with friends and family, for example, or in flexible hotels. Rental cars tend to be fairly flexible.
Of course, if you’re willing to be bumped right at the airport, that should get you an even cheaper ticket, but that’s quite a burden. And with today’s ubiquitous cell phones and computer systems there’s little reason not to inform people well in advance.
This technique could even provide cheaper first-class. You might buy a ticket at a lower price, a bit above coach, that gets you a first class seat half the time but half the time puts you in coach because somebody willing to pay the real price of first class bought a ticket. (To some extent, the upgrade system, where upgrades are released at boarding time based on how many showed up for first class, does something like this.)
Any other ideas how airlines could provide cheaper flexible tickets without eating into their business flyer market? If only one airline tries a new idea, you get an interesting pattern where everybody who likes the new fare rules switches over to that airline in the competitive market, and the idea is forced to spread.
Added note: In order to maintain most of their differential pricing schemes today, airlines need and want the photo-ID requirement for flying. If tickets (including tickets to half a return trip) could be easily resold on the web to anybody, they could not use the systems they currently use. However, the system I suggest, which requires the passenger be willing to be bumped, inhibits resale without requiring any type of ID. A business traveller might well buy a cheap ticket at the last minute from somebody who bought earlier, but they are going to be less willing to buy a ticket with unacceptable delay risks associated with it.
Submitted by brad on Wed, 2006-11-15 15:07.
I’ve written before that one of the greatest flaws in the modern political system is the immense need of candidates to raise money (largely for TV ads), which makes them beholden to contributors, combined with the enhanced ability incumbents have at raising that money. Talk to any member of congress and they will tell you they start work raising money the day after the election.
Last year I proposed one radical idea, a special legitimizing of political spam done through the elections office. That will take some time as it requires a governmental change. So other factors are coming forward.
In some states and nations, efforts are already underway to have the government finance elections. The Presidential campaign fund that you contribute to whether you check the box on the tax return or not is one effort in this direction.
I propose that the operators of the big, advertising-supported web sites, in particular Yahoo, Google, Microsoft, Myspace and the like, join together to create a program to give free web advertising to registered candidates on a fair basis. This could be done by simply providing unsold inventory, which is close to free, or it could be real, valuable inventory, including credits for targeted ads.
Of course, not everybody reads the web all day, so this only reaches one segment of the population, but it reaches a lot. The main goal is to reduce the need, in the minds of candidates, to raise a lot of money for TV ads. They won’t stop entirely, but it might get scaled back.
Such a system would allow users the option of setting a cookie to provide preferences for the political ads they see. While each candidate would get one free shot, voters could opt out of ads for specific candidates or races. (In some cases the geography matcher would get it wrong, and they could correct the district the system thinks they are in.) They could also tone down the amount of advertising, or opt in or out of certain styles (flash, animated, text, video).
It would be up to candidates to tune their message, and not overdo things or annoy voters, pushing them to opt out.
There can’t be too much opting out though, because the goal here is to deliver the same thing that candidates rely on TV for — pushing their message at voters who have not gone seeking it. If we don’t provide that, we’ll never cut the dependency on TV and other intrusive ads.
Allowing these ads to be intrusive seems wrong, but the real thing to do is consider the competition, and what its thirst for money does to society. Thanks to the internet, we’ve reduced the price of advertising by an order of magnitude. If the price of advertising is what corrupts the political system, it seems we should have a shot at fixing the problem.
Ads would be served by the special consortium managing the opt-out system, not the candidate, in order to protect privacy. So if you click on an ad for a candidate, the first landing page is not hosted by the candidate, but may have links to their site.
A system would have to be devised to allocate “importance” to elections: how many ads do the candidates for President get vs. those for state comptroller?
One risk is that the IRS or other forces might try to declare this program a political contribution by the web sites. If it is applied fairly to all candidates, we’ll need a ruling that states it is not a contribution. This is needed, because otherwise sites will balk at the idea of running free ads for candidates they despise.
If the system got powerful enough, it could even make a bolder claim. It could only allow the free advertising to candidates who agree to spending limits in other media. On one hand this is just what most campaign finance reform programs do to get around the First Amendment. On the other hand, it may seem like an antitrust violation — deliberately giving stuff away not just to kill the “competition” but actually forbidding the candidates from spending too much with the competition.
This need not be limited to the web of course. Other media could join in, though the ones that already make a ton of money from political advertising (TV, radio) are not so likely to join.
This won’t solve the whole problem, but it could make a dent, and even a dent is pretty important in a problem as major as this.
Submitted by brad on Wed, 2006-11-15 00:37.
I go to many conferences, and most of them seem to want to give me a nice canvas bag, and often a shirt as well. Truth is though, I now have a stack of about 20 bags in my closet. I’ve used some of the bags, typically the backpacks, but when I have so many other bags I don’t feel a strong motivation to walk around with a briefcase or laptop bag with a giant sponsor’s logo on it, or worse, a collection of 10 logos. No matter how nice the bag is. In addition, even if I got logo-free bags I have no need for 20 of them, but I can’t really give away logo covered bags as gifts.
Which means the sponsor wasted their money. And I think this is common, for while I sometimes see people carrying a sponsor bag outside the confines of a conference, it’s pretty rare compared to the number given out. You want me to be your billboard, I want more than a bag for it.
Might some sponsors take the plunge and make a bag with the sponsor’s logo inside the bag? Or perhaps if on the outside, in a more subtle way. This seems stupid at first, but a bag I actually use, which at least reminds me of the company when I use it, is better than a bag that stays stacked in a closet. (Of course, logo-inside bags would be given away more, which may not accomplish much.) Perhaps the sponsors should go in for designer bags, and turn their logos into desirable designer logos?
If your name is Versace, you can get people to pay to carry your advertising, but sorry, not if your name is AT&T. I hope you can get over it. And while a bag is useful for carrying stuff home from the conferences and even storing literature, truth is you can use a $1 bag for that, not a $15 one. We really have to hunt to find better conference giveaways than bags, at least at conferences whose attendees all attend other fancy conferences.
Submitted by brad on Tue, 2006-11-14 16:10.
It does get hard to be a privacy advocate when it’s easy to think of interesting apps that make use of tracking infrastructure. Here’s one.
How often have you wanted to talk to somebody in a car next to you on the road? Consider a system where people could register their licence plate(s) with their cell phone account. Then, if they had done this, you could call a special number on your own cell phone, and enter the numeric part of their licence plate.
If both you and the other car were close by (for example in the same cell, but often the cell companies have much closer tracking information) and both of you were moving, it could then complete the call to the other car. The other car might get to screen the call (i.e. you would have to enter the reason for the call and they would hear, “Will you accept a call from [plate] about [reason]?”)
Sounds like a good product for the cell companies, able to generate minutes. Easy enough to do if both people use the same cell company, lots more work between two different companies where a protocol would be needed. Would be easier to do with texting but you don’t want people texting in cars.
Could have used it last night, was tailing a friend on the road to her house, did not have her cell number but could see her plate.
As I’ve described the system it’s opt-in, nobody calls you unless you sign up for it and register a plate. However it could be made fairly safe to opt-in with a number of protections. As noted, the system could demand the cars are moving (cell network can see that) so that it can’t be used to reach your cell phone while you are not driving. You could have screening.
It should also have a reputation system. For example, if you call me, then after we disconnect I can leave a negative reputation comment on you. Get a few of these and you’re out of the system. This assures people don’t use it simply to express road rage at the next driver or other things that are largely annoying. On the other hand you can use it to tell people their blinker is blinking or their trunk is open. (Mind you, once you are aware of a problem you would want a function to tell callers you are aware of a driving problem and to press 2 if they are calling about something else.)
And sure, for those open to it, it would be used for flirting with the cutie who gave you the eye when you were both stopped at the light.
You can of course just stick your cell number on your bumper to do this, but it would not have the opt-out and reputation systems. With today’s cheap phone numbers, however, you could get a special number that forwards to your cell and performs the screening/reputation/etc. but is not able to use the location awareness.
If the digits are not unambiguous (or, like me, you have a custom plate that’s all letters) the system would need to offer you the cars close to you that match.
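The matching step could be sketched roughly like this, assuming the carrier already knows which registered plates are nearby; plate formats vary by region, so this just strips out the non-digits and compares, and the function names are invented for illustration.

```python
def digits_of(plate):
    """Extract just the numeric part of a licence plate."""
    return "".join(ch for ch in plate if ch.isdigit())

def match_nearby_plates(dialed_digits, nearby_plates):
    """Return every nearby registered plate whose digits match what the
    caller keyed in. More than one match means the caller must be offered
    a disambiguation menu; zero matches means no such car is close by."""
    return [p for p in nearby_plates if digits_of(p) == dialed_digits]
```

For example, dialing 1234 when both “ABC 1234” and “XYZ 1234” are in the same cell would return both, and the system would have to read back the full plates and let the caller pick.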
Submitted by brad on Wed, 2006-11-08 20:59.
This weekend I experienced an air travel policy that I had not seen before and which I found quite shocking. I was flying on United Express (Skywest)’s flight from San Francisco to Calgary. As we waited for the early morning flight, they announced this “weather warn.” Visibility was poor in Calgary due to low clouds. Below 0.5 miles the plane would not be allowed to land there. They rated about a 1/3 chance of this happening, and a 2/3 chance we would land normally.
The catch was this: if, when they got to Calgary, they found they could not land, they would divert to Great Falls, MT. After dropping the passengers in Great Falls, we would be entirely on our own, with no assistance at all with getting to Calgary via ground or air. (United Express and a few other airlines do sell tickets from Great Falls to Calgary, though all via fairly distant hubs like Denver, Salt Lake etc.) The important point about this is that the diversion is to an airport in a different country from the intended destination. This makes ground transportation particularly difficult, as car rental companies are disinclined to offer economical one-way rentals between countries — not to mention the 6 hour drive.
(Hertz will do it for about $320/day.)
I just checked and Greyhound will get you there in 1 day, 14 hours via Seattle and Vancouver. Amtrak doesn’t even go there.
Now the other passengers who had seen this before said that it usually works out, so we got on with a sense of adventure. But it would have been a big adventure had we been diverted, and just seemed to be a rather strange state for the airline to leave passengers. Yes, they did say that if we elected not to get on the flight, we could try a later one (with no assurance there would not be the same weather warning on that flight.) Most of the passengers got on, and we did land OK, but a few backed out.
Some international bureaucracy, they said, forbids them from landing at another Canadian airport, such as obvious choices like Edmonton, or even various smaller airports since this was a Canadair regional jet able to land at small airports. But just about anything would have been superior to Great Falls in the USA — some city with a means of getting to the destination. Indeed, the plane after landing in either GF or Calgary would have headed on to Chicago, which while far away, is at least a city one could find a flight to Calgary from, and from which United could certainly have arranged travel for the passengers.
I’m taking a wild guess that this bureaucracy is 9/11 related, but I could be wrong. But if it is, it’s another secret burden of that day.
(The likely result — passengers would probably have formed up in groups of 5 to rent Hertz cars and drive to Calgary. The cost — $320 plus $50 of gas — would have been tolerable shared among 5 people who would know one another much better by the end of the day. Of course we didn’t know this when making the decision.) There are also some slightly cheaper but inconvenient tricks involving an in-Montana rental which drives to an Alberta town near the border, where one of the passengers rents a car there, and both cars drive to a Montana drop-off and then the Alberta car continues to Calgary. You would need a sense of adventure to do that.
Submitted by brad on Sun, 2006-11-05 23:00.
I'm in Edmonton. Turns out to be the farthest north I've been on land (53 degrees 37 minutes at the peak) after another turn through the Icefields Parkway, surely one of the most scenic drives on the planet. My 4th time along it, though this time it was a whiteout. Speaking tomorrow at the CIPS ICE conference on privacy, nanotechnology and the future at 10:15.
Idea of the day. I joined Fairmont Hotels President's Club while at the Chateau Lake Louise because it gave me free internet. When I got to the Fairmont Jasper Lodge my laptop just worked with no login, and I was really impressed -- I figured they had recorded my MAC address as belonging to a member of their club, and were going to let me use it with no login. Alas, no, the Jasper lodge internet (only in main lobby) was free for all. But wouldn't that be great if all hotels did that? Do any of the paid wireless roaming networks do this? (I guess they might be afraid of MAC cloning.) It would also allow, with a simple interface, a way for devices like Wifi SIP phones to use networks that otherwise require a login.
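The auto-login I imagined could be sketched as a simple lookup the hotel gateway does before redirecting to its portal page. This is a minimal sketch with invented names, and as noted, a cloned MAC defeats it, so it could only gate convenience, not billing-grade security.

```python
# Hypothetical table of members' registered device hardware addresses,
# as might be shared among participating hotels or roaming networks.
MEMBER_MACS = {"00:1a:2b:3c:4d:5e": "member-1492"}

def gateway_decision(client_mac):
    """Return ('allow', member_id) for a recognized device, or
    ('portal', None) to redirect the client to the login page."""
    member = MEMBER_MACS.get(client_mac.lower())
    if member:
        return ("allow", member)
    return ("portal", None)
```

A login-free device like a Wifi SIP phone would just need its MAC registered once, through whatever interface the program offered.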
Of course, as we all know, the more expensive the hotel, the more likely the internet is not only not included, it's way overpriced. At least Fairmont gave one way around this. Of course I gave them a unique E-mail address created just for them, so if they spam me I can quickly disable them. But once again I, like most of us, find myself giving up privacy for a few hotel perks.
Submitted by brad on Thu, 2006-11-02 00:08.
When I’m having a problem with a company, I try sometimes to remind them of a principle of customer service I worked out when I was running ClariNet. Namely that when a company screws up, it should more than fix the problem, even to the point of losing money (for a while) on that customer. The reason, in brief, is that this does more than make the customer happy with the transaction. It signals in the strongest possible way that the screw-up is a rare event, which makes the customer come back for more.
I have outlined it in this page on Brad’s principle of customer service.
Submitted by brad on Mon, 2006-10-30 11:32.
You’ve seen the flap recently because a student, to demonstrate the fairly well discussed airport security flaw involving the easy forgeability of boarding passes, created a web site where you could easily create a fake Northwest boarding pass. Congressman Markey even called for the student’s arrest, then apologized, but in the meantime the FBI raided his house and took his stuff.
As noted, this flaw has been discussed for some time. I certainly saw it the first time I was able to print my own boarding pass. However, it’s not really limited to print-at-home boarding passes, and it’s a shame the likely reaction to this will be to disable that highly convenient service. Airline issued boarding passes are just thicker paper. I don’t see it being particularly difficult with modern colour printers — which are able to pull off passable money given the right paper — to produce good airline printed boarding passes.
It’s possible the reaction to this will be to simply add a gate ID check for people with home printed boarding passes, which will at least retain those passes without slowing down the boarding process even more, but it doesn’t actually fix the problem.
The current system of easy to forge boarding passes, combined with ID check at TSA security and boarding pass check at the gate, has the following flaws:
- You can, as noted, fly if you are on the no-fly list with no problems. If I were named David Nelson I would consider it.
- You can bypass the selectee system, where they print SSSS on your boarding pass to mark you for “full service” searching. (I’ve been told an additional stamp is placed on your boarding pass after the search; you need to forge this too.)
- You can transfer your ticket to another person without telling the airline or paying them. You also earn flyer miles even though somebody else got on the plane.
- It allows people to enter the gate area who aren’t actually flying. This is not a big security risk, but it slows down the security line. You don’t want to miss your flight because people slowed down the line to meet their friends at the gate.
Some airports have the TSA ID-checker put a stamp on the boarding pass. However, this is also not particularly difficult to forge. Just have somebody go through once to get today’s stamp, have them come back out, and now you can forge it.
The simplest answer is to have ID check at the gate. This slows boarding, however, which is bad enough as it is. The hard answer is to have unforgeable boarding passes or an unforgeable stamp or non-removable sticker at TSA.
Probably the best solution is that the TSA station be equipped with an electronic boarding pass reader which can read the barcodes on all types of boarding passes, which themselves must be cryptographically secure. Then the name printed on the pass becomes unimportant, except so you can tell yours from your companion’s. The scanner would scan the pass, and display the name of the passenger on the screen, which could then be compared to the ID.
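One standard way to make such a barcode cryptographically secure is for the airline to sign the passenger record with a keyed hash the scanner can verify, so anybody can print the pass but nobody can alter it. Here is a minimal sketch under that assumption; the key, record format and function names are all invented for illustration (a name containing “|” would need escaping in a real format).

```python
import base64
import hashlib
import hmac

# Demo key; in reality the airline's key, verifiable by TSA scanners.
AIRLINE_KEY = b"demo-secret-key"

def make_barcode(name, flight, date):
    """Encode passenger record plus an HMAC signature into barcode text."""
    record = f"{name}|{flight}|{date}".encode()
    sig = hmac.new(AIRLINE_KEY, record, hashlib.sha256).digest()
    return base64.b64encode(record + b"|" + base64.b64encode(sig)).decode()

def verify_barcode(barcode):
    """Return the passenger name if the signature checks out, else None."""
    try:
        payload = base64.b64decode(barcode)
        record, sig_b64 = payload.rsplit(b"|", 1)
        expected = hmac.new(AIRLINE_KEY, record, hashlib.sha256).digest()
        if not hmac.compare_digest(base64.b64decode(sig_b64), expected):
            return None
        return record.split(b"|")[0].decode()
    except Exception:
        return None
```

The TSA station would scan, verify, and display the recovered name for comparison with the ID, making the name printed on the paper irrelevant.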
Sadly, I fear this suggestion would go further, and the full panopticon-enabled system would display the photo of the passenger on the screen — no need to show your ID at all.
Though mind you, if we didn’t have the no-fly-list concept, one could actually develop a more privacy enhancing system with photos. When you bought your ticket, if you didn’t care about FF miles, you would provide a photo of the passenger, not their name or anything else about them. The photo would be tied to the boarding record. To go through security or board the aircraft, you would present the boarding pass number or bar-code, and TSA, gate and luggage check agents would see your photo, and pass you through. The photo confirms that the person pictured has a valid ticket. This meets most of the goals of the current system, except for these:
- It doesn’t allow a no-fly-list. But the no-fly-list is bad security. Only random screening is good security.
- It doesn’t allow gathering marketing data on passengers. But the frequent flyer system does.
- It doesn’t allow the airline to generate a list of dead passengers in the event of a crash.
As noted, the marketing data goal is met by the FF program. It would be possible, by the way, to build a fairly private FF program where you don’t give your name or address for the program. You just create an FF account online, get a password, and you can place a picture in it and associate it with flights. You can then redeem flights from it, all online. But I doubt the airlines will rush to do this; they love selling data about you.
The dead-passenger problem can be solved to some degree. They would have, after all, pictures of all the passengers so they could be identified by people who know them. In a pinch, identity could also be escrowed, with the escrow agency requiring proof of the death of the passenger before revealing their identity. That’s pretty complex.
There’s no good way to solve the no-fly-list problem unless you have credible face recognition software. Even that wouldn’t work because it’s not hard to modify a photo to screw up what the face recognition software is looking for but still have it look like you. But frankly the no-fly-list is bad security and it’s not a bug that it doesn’t work in this system.
Submitted by brad on Sat, 2006-10-28 15:59.
In furtherance of my prior ideas on smart power, I wanted to add another one — the concept of backup power.
As I wrote before, I want power plugs and jacks to be smart, so they can negotiate how much power the device needs and how much the supply can provide, and then deliver it.
However, sometimes, what the supply can provide changes. The most obvious example is a grid power failure. It would not be hard, in the event of a grid power failure, to have a smaller, low capacity backup system in place, possibly just from batteries. In the event of failure of the main power, the backup system would send messages to indicate just how much power it can deliver. Heavy power devices would just shut off, but might ask for a few milliwatts to maintain internal state. (I.e. your microwave oven clock would not need an internal battery to retain the time of day and its memory.) Lower power devices might be given their full power, or they might even offer a set of power modes they could switch to, and the main supply could decide how much power to give to each device.
Of course, devices not speaking this protocol would just shut off. But things like emergency lights need not be their own system — though there are reasons for still having that in a number of cases, since one emergency might involve the power system being destroyed. However, battery backup units could easily be distributed around a building.
In effect, one could have a master UPS, for example, that keeps your clocks, small DC devices and even computers running in a power failure, but shuts down ovens and incandescent bulbs and the like, or puts devices into power-saving modes.
We could go much further than this, and consider a real-time power availability negotiation, when we have a power supply or a wire with a current limit. For example, a device might normally draw 100mw, but want to burst to 5w on occasion. If it has absolutely zero control over the bursts, we may have to give it a full 5w power supply at all times. However, it might be able to control the burst, and ask the power source if it can please have 5w. The source could then accept that and provide the power, or perhaps indicate the power may be available later. The source might even ask other devices if they could briefly reduce their own power usage to provide capacity to the bursting device.
For example, a computer that only uses a lot of power when it’s in heavy CPU utilization might well be convinced to briefly pause a high-intensity non-interactive task to free up power for something else. In return, it could ask for more power when it needs it. A clothes dryer, oven, furnace or other such item could readily take short pauses in its high power drain activities — anything that runs on a duty cycle rather than 100% on can do this.
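The request-and-grant negotiation described above can be sketched as a supply arbitrating burst requests against a fixed capacity budget. This is a minimal sketch, not a real protocol; the class and message names are invented for illustration, and a real system would also handle the “ask other devices to reduce” step.

```python
class PowerSupply:
    """Toy model of a supply granting or refusing power requests."""

    def __init__(self, capacity_watts):
        self.capacity = capacity_watts
        self.allocations = {}  # device id -> watts currently granted

    def allocated(self):
        return sum(self.allocations.values())

    def request(self, device_id, watts):
        """Grant the request if capacity remains; otherwise refuse,
        leaving the device at its previous allocation."""
        current = self.allocations.get(device_id, 0)
        if self.allocated() - current + watts <= self.capacity:
            self.allocations[device_id] = watts
            return True
        return False

    def release(self, device_id):
        """Device drops back to zero, freeing capacity for others."""
        self.allocations.pop(device_id, None)
```

So a device idling at 100mw could ask for its 5w burst, get it only when the budget allows, and release it afterwards, letting the supply be sized well below the sum of every device’s peak.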
This is also useful for items with motors. A classic problem in electrical design is that things like motors and incandescent lightbulbs draw a real spike of high current when they first turn on. This requires fuses and circuit breakers to be “slow blow” because the current is often briefly more than the circuit should sustain. Smart devices could arrange to “load balance” their peaks. You would know that the air conditioner compressor would simply never start at the same time as the fridge or a light bulb, resulting in safer circuits even though they have lower ratings. Not that overprovisioning for safety is necessarily a bad thing.
This also would be useful in alternative energy, where the amount of power available changes during the day.
Of course, this also applies to when the price of power changes during the day, which is one application we already see in the world. Many power buyers have time-based pricing of their power, and have timers to move when they use the power. In many cases whole companies agree their power can be cut off during brown-outs in order to get a cheaper price when it’s on. With smart power and real-time management, this could happen on a device by device basis.
These ideas also make sense in Power over Ethernet (which is rapidly dropping in price), which is one of the first generation smart power technologies. There the amount of power you can draw over the thin wires is very low, and management like this can make sense.
Submitted by brad on Thu, 2006-10-26 23:47.
In the 90s, when I had more money, I did some angel investing. One of the companies I invested in, Sierra Sciences was started by an old friend and business associate, Dan Fylstra, who had also founded Personal Software/VisiCorp, the company that sold VisiCalc.
Sierra Sciences was also founded by Bill Andrews, who had done important work on telomeres at Geron. Together, we hoped to follow promising leads on how to safely lengthen the telomere.
Telomeres are strands on the end of chromosomes. Each time a chromosome is duplicated, they shorten, acting like a decrementing “counter.” After so many duplications (50 to 60) the telomere is too short and the cell can’t divide. That gives a fresh gamete 2^50 cells to produce, which is a ton, but of course we are the result of highly specialized duplication so it turns out to not be enough. Telomeres are in part a defence against cancer. If a cancer forms, and starts duplicating like crazy, it hits the limit of the telomere and stops — unless it has found a way to generate telomerase, the enzyme that resets the counter. We need that enzyme in order to make babies, and certain types of immune cells and IIRC marrow, but in most of our cells it is repressed, in order to stop cancer.
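To put a number on that counter arithmetic: 50 rounds of doubling from one cell gives 2^50 descendants, around a quadrillion, versus the tens of trillions of cells in a human body, which is why it sounds like plenty until you account for how specialized the lineages of division are.

```python
# 50 allowed divisions, each a doubling, starting from one cell.
divisions = 50
max_cells = 2 ** divisions
print(max_cells)  # 1125899906842624, about 1.1 quadrillion
```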
They’ve known how to turn on telomerase and make immortal cell lines for a while, but this would increase the risk of cancer. The trick is to lengthen them just a bit. This would, in theory, give you some of the healing ability of a baby. Old people’s skin wounds heal very slowly because their cells are all divided out — they can’t produce endless new cells quickly.
A study a few years ago showed that people with naturally longer telomeres (just a bit longer) live about 4 years longer on average than those with shorter ones. That’s a big difference, and we hoped even a larger effect could be generated. We identified the sites that repress telomerase and found antagonists for the chemicals binding those sites.
But, after several years and a lot of money, we have not yet found a drug to make the magic happen. The major investors have decided not to go forward. The company is for sale. While the investors won’t make much, if anything, from it, I hope it is bought not just for the lab equipment but by somebody interested in carrying on the research. Most of the investors not only knew that anti-aging drugs would be very lucrative, they sort of hoped to be on the customer list someday.
It generated some interesting issues. Getting approval for such drugs would be a hard slog. There was debate over developing an animal drug first, as people would pay a lot for longer lived pets and racehorses. I was scared of this, knowing that humans would take the animal drug in desperation — with possibly scary results due to lack of testing and refinement. The other hope was for a topical skin cream that really made skin be younger, not just look younger. This would be medically valuable and of course sell a lot for cosmetic applications. But it’s not to be for now.
Wanna buy a biotech company cheap? Check out the web site.
Submitted by brad on Wed, 2006-10-25 23:13.
In thinking about how to reduce the cost of bringing fiber to everybody (particularly for block-area-networks built by neighbours) I have started wondering if we could build a robot that is able to traverse utility poles by crawling along wires — either power, phone or cable-TV wires. The robot would unspool fiber optic cable behind it and deploy wire-ties to keep it attached. Human beings would still have to eventually climb the poles and install taps or junctions and secure these items, but their job would be much easier.
Robots that can crawl along cables already exist. The hard part is traversing the poles. Now it turns out finding live electric wires is something that’s very easy for a robot to do. They stick out like a live wire in the EM spectrum. The poles of course have insulators, junctions, tie downs and other obstacles. Crossing them may be hard in certain cases (in which case a human would have to help, either by tele-operation, or by climbing the pole.)
It may be possible to have a very small robot that is able to follow the current (easy to tell the lines to the houses from the main lines) and cross a pole like a bug and then, once safely on the other side, pulls the larger robot with a small tether. Again, it won’t always work but if you can get it to work enough of the time, you can install fiber with far less time and labour than the manual approach. Fiber of course can be tied to power lines because it is non-conductive material, though it’s even better if you can run it along phone or cable lines.
Not that any of these companies will want to give permission to competitors. And you want to pull multiple fibers, not so much for the bandwidth — we can do terabits in a single fiber if we want to — but for the backup when one fiber breaks.
If the robots get good enough, they could even string fiber into rural areas, following long chains of power or phone lines with just a single human assistant. Of course overhead wires are going to be more prone to breakage, but with these robots, repairs could be fast and cheap.
There are already robots out there which can crawl storm sewers to install fiber, and that’s a good alternative too. Indeed, a robot that can even crawl real sewage lines to put in fiber which comes out your household stack is not out of the question, if it’s in a strong enough casing.
Submitted by brad on Mon, 2006-10-23 18:22.
Over 15 years ago I proposed that USENET support the concept of “replacing” an article (which would mean updating it in place, so people who had already read it would not see it again) in addition to superseding an article, which presented the updated article as new to those who had read the original, without showing both versions to those who hadn’t. Never did get that into the standard, but now it’s time to beg for it in USENET’s successor, RSS and cousins.
I’m tired of the fact that my blog reader offers only two choices — see no updates to articles, or see the articles as new when they are updated. Often the updates are trivial — even things like fixing typos — and I should not see them again. Sometimes they are serious additions or even corrections, and people who read the old one should see them.
Because feed readers aren’t smart about this, it not only means annoying minor updates, but also people are hesitant to make minor corrections because they don’t want to make everybody see the article again.
Clearly, we need a checkbox in updates to say if the update is minor or major. More than a checkbox, the composition software should be able to look at the update, and guess a good default. If you add a whole paragraph, it’s major. If you change the spelling of a word, it’s minor. In addition to providing a good guess for the author, it can also store in the RSS feed a tag attempting to quantify the change in terms of how many words were changed. This way feed readers can be told, “Show me only if the author manually marked the change as major, or if it’s more than 20 words” or whatever the user likes.
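The guessing step could be sketched with a plain word-level diff; this is a minimal sketch of the idea, with the 20-word threshold taken from the example above and the function names invented for illustration.

```python
import difflib

def words_changed(old_text, new_text):
    """Count words inserted, deleted or replaced between the two versions."""
    sm = difflib.SequenceMatcher(None, old_text.split(), new_text.split())
    changed = 0
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op != "equal":
            changed += max(i2 - i1, j2 - j1)
    return changed

def classify_update(old_text, new_text, major_threshold=20):
    """Suggest a default for the minor/major checkbox; the author can
    override it, and the word count can ride along in the feed entry."""
    if words_changed(old_text, new_text) >= major_threshold:
        return "major"
    return "minor"
```

So fixing a typo would default to minor, adding a paragraph to major, and the feed reader could apply whatever threshold its user prefers to the word count.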
Wikis have had the idea of a minor change checkbox for a while, it’s time for blogs to have it too.
Of course, perhaps better would be a specific type of update or new post that preserves thread structure, so that a post with an update is a child of a parent. Which means it is seen with the parent by those who have not yet seen the parent, but as an update on its own for those who did see it. For those who skipped the parent (if we know they skipped) the update also need not be shown.