Teaching an old blog new tricks (moving to Drupal)

I’ve switched the blog from Movable Type to Drupal. Drupal is a PHP-based, open source blog and community system that will let me support all sorts of fancy things in the future, such as discussion forums, polls, multi-user blogging and a lot more. Drupal is another class of application entirely beyond MT, though I won’t be using all of what it offers at first.

For now, you will of course see a different look for the blog. Categories can be expanded from the navigation menu and you can do more things with them. You can also create a userid and password to log in. If you do, comments appear under your name, and they appear immediately without needing my approval. You can also configure how the site looks for you and turn on other features. In the future, once the permission system improves, I can even give users a blog here if you like. If you have a login at another Drupal site you can use it here, by entering the userid username@thedrupalsite. (Or use userid@ideas.4brad.com on other Drupal sites.)

Let me know if there are any problems.

Dealing with a pandemic in the post-internet world

There’s a lot of talk about the coming threat of Avian H5N1 flu, how it might kill many millions, far beyond the 1918 flu and others, because of how much people travel in the modern world. Others worry about bioterrorism.

Plans are underway to deal with it, but are they truly thinking about some of the tools the modern world has that it didn’t have in 1918 which might make up for our added risks? We have the internet, and a lot of dot-coms, both living and dead, created all sorts of interesting tools for living in the world without having to leave your house.

In the event of an outbreak, we’ll have limited vaccine available, if there’s much at all. Everybody will want it, and society will have to prioritize who gets what. While some choices are obvious — medical staff and other emergency crews — there may be other ideas worth considering.

Today, a significant fraction of the population can work from home, with phone, computer and internet. The economy need not shut down just because people must avoid congregating. Plans should be made, even at companies that prefer not to allow telecommuting, to be able to switch to it in an emergency.

Schools might have to close but education need not stop. We can easily devote TV channels in each area to a basic curriculum for each grade. Individual schools can modify that for students who have internet access or even just a DVD player or VCR. For example, teachers could teach their class to a camera, and computers can quickly burn DVDs for distribution. Students can watch the DVDs, pause them and phone questions to the teacher. (Ideally, though, most students would be able to make use of the live lectures on TV, and could phone their particular teacher, or chat online, to ask questions.) Parents, stuck at home, would also be able to help their children more.

Delivery people (USPS, UPS etc.) would be high on the priority list for vaccination to keep goods flowing to people in their homes. You can of course buy almost anything online already. Systems like Webvan, for efficient grocery ordering and delivery, could be brought back up, with extra vaccinated delivery drivers making rounds of every street.

Of course not everybody has a computer, but that need not be a problem. With so many people at home, volunteers would come forward who did have broadband. They would take calls from those who do not have computers and do their computer tasks for them, making sure they got in their orders for food and other supplies. Of course all food handlers would need to be vaccinated and use more sterile procedures.

Intermittent free wifi

I recently read the story of the coffee shop that's shutting down their free wifi on weekends because it mostly gets them moochers who, far worse than simply not buying anything, sit and stare at computers and don't talk to anybody. They found that when they shut down the free network, they not only got people to buy more coffee, the place was also more social.

So while there are a variety of solutions to sell or control access to a network, such as printing tokens that give a period of access on every receipt, or selling the access as they do at Starbucks, here's another idea -- intermittent access.

In such a system the access point lets you on for a modest amount of time. Enough for a quick web search or two, a check of your e-mail or even a short phone call. Then it denies you access. It doesn't have to deny it for long, perhaps just 5 minutes before you can get on again. No authentication is needed, though during the period of denied access it may redirect all web requests to a page that explains the situation, and optionally offers continuous access for money.
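
Here's a minimal sketch of how an access point's captive portal could enforce this, assuming sessions are keyed by client MAC address; the window lengths and the redirect URL are just placeholders:

```python
import time

# Hypothetical sketch of the intermittent-access idea: each client (keyed by
# MAC address) gets a free window, then a short lockout before it can return.
FREE_SECONDS = 10 * 60      # how long a client may stay on (assumed value)
LOCKOUT_SECONDS = 5 * 60    # how long access is denied afterward (assumed value)

class IntermittentAccess:
    def __init__(self):
        self.window_start = {}   # MAC -> time the current free window began

    def allowed(self, mac, now=None):
        """Return True if this client may pass traffic right now."""
        now = now or time.time()
        start = self.window_start.get(mac)
        if start is None or now - start >= FREE_SECONDS + LOCKOUT_SECONDS:
            # New client, or the lockout has expired: start a fresh window.
            self.window_start[mac] = now
            return True
        return now - start < FREE_SECONDS

    def handle_http(self, mac, url):
        """Pass the request through, or redirect to an explanation page."""
        if self.allowed(mac):
            return ("PASS", url)
        # During the lockout, every web request lands on a page explaining
        # the policy and offering paid continuous access.
        return ("REDIRECT", "http://gateway.local/why-no-wifi")
```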

Though that's not the main goal. The main goal is to create an atmosphere where you come to the shop to do something other than stare at your computer, but where you can still use it on occasion to get your fix.

Who knows, if the sale option for continuous access were popular, it might even make more money than an always-charging system. Of course, fancy users could change their MAC address to get around it -- but if they're going to go that far, let them. Most won't.

Why order at the drive-through itself?

Fast food outlets all have drive-throughs, and they are popular, though sometimes it's hard to figure out why, since you get a slow simulation of being stuck in traffic. "Oooh, are we going to move? Yes, he's released his brake lights!" You may also have heard that McDonalds is outsourcing the order-taking part at some restaurants to teleworkers in the midwest, where wages are lower. (Not India, yet.) They reason that there is no reason the order-taker, who just punches the order into a computer, need be at the actual location, and in fact, when things are at their busiest, it makes sense to put everybody onto filling orders, not taking them.

You go to a board with a menu and a bad intercom to place your order. Why do this? Cell phone penetration is very high now, so why not phone it in? Either a direct number for that restaurant, or an 800 number where you can say which branch you are at or going to. You can't see the menu but you probably know it, and the order taker has the time to help you through it. They might be at the restaurant if they have spare capacity, or might be in a call center entering it on the computer. They can tell you pretty accurately when your order will be up.

Yeah, I just re-invented phone-in takeout, but this time based on the drive-through concept. Worst case you call it in while already at the restaurant, which is where you order today. But if you think about it, you're phoning it in on the way there. And they might tell you, "You know, it will be 15 minutes here, and just 3 minutes at the branch down the road" to load balance.
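
As a sketch of the load-balancing part, the call center's software only needs each branch's current queue and average prep time to suggest the fastest pickup point; the branch names and numbers here are made up:

```python
# Hypothetical sketch: route a phoned-in order to whichever nearby branch can
# have it ready soonest, given each branch's current queue and average prep time.
branches = {
    "Main St": {"orders_queued": 12, "avg_prep_minutes": 1.2},
    "Elm Rd":  {"orders_queued": 2,  "avg_prep_minutes": 1.5},
}

def estimated_wait(branch):
    info = branches[branch]
    return info["orders_queued"] * info["avg_prep_minutes"]

def best_branch(candidates):
    return min(candidates, key=estimated_wait)

print(best_branch(["Main St", "Elm Rd"]))  # -> "Elm Rd" in this example
```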

Now when you get to the restaurant, you probably should just park in the lot and go inside for your order. But they could also have a parking area with an LED display with the order numbers (or even sufficiently unique suffixes of phone numbers) displayed to say who can now enter the drive-through lane for instant pickup.

And of course, if you want to pay by credit card, and they know you, you can even pick up the food without the time-wasting cash handover.

This makes more sense at make-to-order places than at McDonalds of course. And it can apply to more than fast-food, though usually only fast food places have computerized order management. Perhaps people might order better food if it were more convenient?

Gotta have a Revenge of the Sith Review

When I was a teenager, my father lived in a downtown apartment tower with a cinema in the basement. Due to his press credentials he had an unlimited free movie pass. Star Wars played there for over a year, and when we would visit him, if we were ever sitting around wondering what to do, somebody would suggest, "Why don't we go downstairs and see Star Wars?" Today everybody does this, but back then the VCR was just dawning, so this was something really cool.

So of course that movie held a special place in my heart, and it was indeed groundbreaking, particularly in effects, grand story and perhaps most of all, good editing. "The circle is now complete" as Lord Vader would say.

So I'll repeat what everybody else has said, Revenge of the Sith is far better than episodes 1 and 2 of the modern trilogy, better perhaps than the Ewok-burdened Return of the Jedi. It's an astounding triumph of visuals as well, with a much more moving and interesting story. Yes, the acting is sub par, the dialogue well sub par and the romantic scenes are non-credible, but the good parts more than make up for this.

At the same time I am left with a disappointment, because it could have been so much more. Lucas is cursed because the bar was so high. He built an empire on that first movie but only delivered some of what he could. I'll get into spoilers in the after-the-break part of this posting, and here I'll speak more generally.

The entire new trilogy is the story of the fall of Darth Vader. This movie contains its climax, as he changes from troubled Jedi to evil lord. Powerful as it is, it's still not credible. Lucas had 8 hours of film all leading up to that one moment, so there's no reason it had to be that way.

Tied in with the moral fall of Vader is the more literal fall of the Jedi. As we know, they are betrayed, but that story too could have been much richer.

In addition, the biggest thing missing from trilogy 2 is the humour. Yoda, the imp who stole Empire, barely cracks a smile in all the other movies. Almost nobody does. And the movies suffer for it.

On to spoiler-based discussion

Getting the top spammers

A recent item posted on politech and Farber’s IP mailing lists caused some controversy, so I thought I should expand on it here.

The spam law debate has been going on for close to a decade. There are people with many views, and we’ve all heard the other side’s views many times as well. The differences lie in more fundamental values that are hard to change through argument.

Because of that there are giant spam law battles among people who are generally all on the same side — getting rid of spam. Each spam law proposal has people who feel it does too much and chills legitimate speech on one side, and those who feel it does too little and legitimizes some spam on the other. (With many other subtleties as well.)

It’s commonly reported that most spam is sent by a relatively small group of hardcore, high-volume spammers. Supposedly much of it comes from a group of about 20, and the bulk from a group of around 200. I have never known whether this is true, but a recent conversation with a leading antispam activist gave evidence that it is. Antispammers have tracked down a lot of spam, seen billions of spams come into spam-traps and even infiltrated spammer “bulker” message boards to learn who’s who and how they operate.

So let’s assume for the moment that it’s true that most spam comes from this core group. Let’s focus spam law efforts on a law designed just to get them. A law so narrowly targeted that nobody need fear a chilling effect on legitimate speech, one that everybody can get behind. (A law that also makes it clear that it’s not precluding other laws or giving its blessing to lesser spammers.)

I would see such a law demanding many criteria. It would require that the spammer send millions of spams. It would require that the spammer do this with wilful disregard for the consequences — i.e. malicious intent. It could require that the spammer have made $10,000 from their spamming. It would also provide funding and direction for law enforcement to actually go after these spammers. It would fine them into bankruptcy (all they ever made from spamming plus punitive fines) and possibly jail them, particularly if other criminal actions like fraud, sale of illegal products and computer break-ins were involved.

This wouldn’t stop all spammers, but it might well put a real dent in the volume of spam, and scare off many from entering the upper echelons of spamming. This is a great deal more than any other spam law has managed to do.

Company to fill out rebates

As many of you may know, the rebate system is based on the idea that most folks will not get around to filling out a rebate form, or will fill it out improperly. Estimates run that 60% or more of people don't get their rebate. In some cases, the companies do everything they can to avoid redeeming, and some are even accused of illegal behaviour. Some companies are rumoured to be rejecting all rebates and then redeeming only for those people who complain.

What this means of course is that they can give a very attractive rebate, in many cases selling the product below cost. We've seen rebates for the full purchase price in some cases.

Now this is actually good for you if you are very good at getting rebates back, because you get to buy a product below cost, subsidized by the people who aren't good at getting the rebates back, who ended up paying an above average price. It's a form of differential pricing. Those who care get a lower price, those who are richer and care less pay more.

So is the time ripe for a company that, for a fee, will do your rebate paperwork for you? Of course, you would still need to cut off the proof of purchase, check the rebate for any special requirements (like signatures or serial numbers not found on the proof of purchase), stuff it all in a preprinted envelope, and get it to the post office soon enough for the rebate paperwork house to mail it on to the vendor in time. (Not really the vendor, but the vendor's outsourced rebate house.)

I imagine you would pay something like $5 plus some small fraction of the rebate, charged on your credit card, and refunded to your credit card if you don't get the rebate. That seems like a lot for what should be a few minutes' work, but if you factor in the time required to fill out forms carefully, print envelopes, copy receipts and other items, and get to the mailbox, I think it's not out of line.
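
As a rough sketch of that fee model (the $5 base and the percentage cut are just the guesses above):

```python
# Sketch of the fee model suggested above; the $5 base and 5% cut are guesses.
BASE_FEE = 5.00
REBATE_CUT = 0.05

def facilitator_fee(rebate_amount, rebate_received):
    """Net fee kept: charged up front, fully refunded if the rebate never arrives."""
    if not rebate_received:
        return 0.00
    return BASE_FEE + REBATE_CUT * rebate_amount

print(facilitator_fee(50.00, True))   # 7.5  -- fee kept on a $50 rebate
print(facilitator_fee(50.00, False))  # 0.0  -- refunded to the credit card
```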

The rebate facilitator, of course, can be far more efficient. They have all your relevant info on file, filled out in a web form. They have all the popular rebates similarly encoded and scanned. They can either automatically print out a rebate form with your info clearly filled in, or print a custom sticky label with your info and apply it to the original if the original is needed.

They can copy the receipts and scan the proof of purchase. And then mail them out at bulk postage rates to the rebate center, or even have staff who hand deliver them to the major rebate centers in certain cases if volume is high enough.

Doubleheaded Rear Lens Cap

I shoot with an SLR, and all lenses need a rear lens cap when not on the camera. Every SLR shooter knows the three-handed ritual. (Four-handed if the camera's not on a strap.) You take one lens off the camera. You pick another lens and remove the rear cap from it. Holding the old lens, new lens, rear cap and camera, you put the new lens on the camera, then put the rear cap on the old lens. (Or you put the cap on the old lens first, put it down and put the new lens on the camera.)

Anyway, a simple invention I have already built is a doubleheaded rear lens cap, namely two lens caps glued together. Custom-built, it would be a lot smaller and would solve some of the problems I have experienced.

With the doubleheader, you can take your lens off the camera and put it immediately onto the open end of the doubleheader cap on the new lens. Then with a twist you remove the new lens from the resulting docked lens pair, and put it on the camera. In theory this takes one less hand, or less dexterity.

However, the catch is the docked lens configuration tightens both as you twist one way and loosens both as you twist the other way. So you must master the art of making sure the lens you want comes loose.

How this works varies from lens to lens and how well it fits the rear cap. Sometimes pressing them both together causes one to undo reliably. The most reliable trick is to grab the old lens around the rear neck so you can get a finger on the cap, and then pull the new lens off.

It seems one might be able to design ways to make this more reliable, such as a small flange on the cap to hold with your finger to make sure of what twists off, or a ratcheting twist-off that requires a release button.

If both become equally loose when you untwist, then gravity will help you in that the cap will stay on the lower lens. You must later twist it back so it stays on. I think the ideal motion would be to twist on so both are tight, then either hold the cap or release a ratchet so only the lens you want comes off, without loosening the old lens.

Identity systems change the client/server decision

There have been many efforts at internet "identity" systems, such as Microsoft Passport, Liberty Alliance, and a variety of others. A recent conference was held in SF, though I didn't go, but I thought it was time to put forward one important idea.

These days, it's common in designing computer systems to make an engineering decision on where they will fit in the client/server model. Putting functions on a server has many advantages -- it's often cheaper to code, and there is central control, maintenance and updating of the servers. It easily allows roaming and can be more efficient. Tools on clients are more work to code and maintain but can be much faster, make use of client resources and provide much better user interfaces. Often the decision is to work out the right mix of both. The more client-side tools like JavaScript become available, the easier it is to take a traditionally client-based app like e-mail and turn it into something like Gmail.

Also, sometimes something goes into a server because business rules demand it. You can only make money from it as a service you sell, so you build it that way.

Mailing list wiki combo?

I've written before about the dichotomies between serial and browseable, between writer-friendly and reader-friendly.

One idea that now seems obvious is to integrate wiki functions into a mailing list manager (particularly one that provides a web interface to the mailing list).

In particular, one should be able to "cc" a message to sections of the wiki and have it added. For example, to an FAQ section. In addition, readers of a message should be able to promote it into sections of the wiki either by clicking links in the HTML version of the message, or by forwarding the message back to some magic addresses at the mailing list manager.
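
A sketch of what the list manager's "magic address" handling might look like; the addresses, section names and the wiki_append stand-in are all hypothetical:

```python
import email

# Hypothetical sketch: the list manager sees a message addressed (or forwarded)
# to a "magic" address like faq@lists.example.com and appends its body to the
# matching wiki section.  The address scheme and the wiki API are assumptions.
MAGIC_SECTIONS = {
    "faq@lists.example.com": "FAQ",
    "howto@lists.example.com": "HowTo",
}

def wiki_append(section, title, body):
    # Stand-in for whatever the wiki's real edit API would be.
    print(f"== {title} ==  (appended to wiki section {section})\n{body}")

def handle_message(raw_message):
    msg = email.message_from_string(raw_message)
    recipients = (msg.get("To", "") + "," + msg.get("Cc", "")).lower()
    for magic, section in MAGIC_SECTIONS.items():
        if magic in recipients:
            wiki_append(section, msg.get("Subject", "(no subject)"),
                        msg.get_payload())

handle_message(
    "To: mylist@lists.example.com\n"
    "Cc: faq@lists.example.com\n"
    "Subject: How do I reset my password?\n\n"
    "Go to Settings, then Account, then Reset password.\n"
)
```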

Thus when somebody on a mailing list gives a useful answer to a question, it could go quickly into a wiki-style knowledge base, for easier browsing and searching. Many mailing lists today allow you to search the list archives, but unless you know the vocabulary, you may not find the answer to the problem you are trying to solve, even though it exists there.

We strike down the broadcast flag!

On both a personal and professional note, I am happy to report that the federal courts have unanimously ruled to strike down the FCC's broadcast flag (that's a PDF) due to our lawsuit against them.

I participated directly in this lawsuit, filing an affidavit on how, as a builder of a MythTV system and writer of software for MythTV, I would be personally harmed if the flag rule went into effect. The thrust of the case was that the FCC, which is empowered to regulate interstate communications, had no authority to regulate what goes on inside your PC. The court bought that, but we had to show that the actual plaintiffs in the case would be harmed, not simply the general public, hence the declarations by myself and various other members of EFF and other plaintiffs.

The broadcast flag was an insidious rule because, as I like to put it, it didn't prohibit Tivo from making a Tivo (as long as they got it certified as having pledged allegiance to the flag.) It stopped somebody from designing the next Tivo, the metaphorical Tivo, meaning bold new innovation in recording TV.

I would like to particularly thank Public Knowledge, which spearheaded this effort and funded most of it.

Here's an AP Interview with me on the issue.

On the invention of the internet

Update: A more active thread on how this relates to Goodmail and other attempts at sender-pays traffic

There is much talk these days of “who invented the internet?” Most of the talk is done wearing a network engineer’s hat, defining the internet in terms of routing IP datagrams and TCP. Some relates to the end-to-end principle, with a stupid network in the middle and smart endpoints. These are valid and vital contributions, and recognition for those who built them is important.

But that’s not what the public thinks of when it hears “the internet.” They think of the collection of cool applications they use to interact with other people and distant computers. Web sites and mailing lists and newsgroups and filesharing and VoIP and downloading and chat and much more. Why did these spring into being in this way rather than on other networks?

I believe a large and necessary ingredient for “the internet” wasn’t a technological invention at all, but a billing system. The internet is based on what I call the “internet cost contract.” That contract says that each person pays for their own pipe to the center, and we don’t account for the individual traffic.

“I pay for my half, you pay for yours.”

While the end-to-end design allowed innovation and experimentation, the billing design really made it possible. In the early days of the internet, people dreamed up all sorts of bizarre applications, some serious, some entirely frivolous. They put them out there and people played with them and the most interesting thrived.

Many other networks had users paying not by the pipe, but based on traffic. In that world, had you decided to host a mailing list, or famously put a webcam up in front of your company fishtank, the next day the company beancounter would have called you into the office to ask why the company got a big bandwidth bill in order to show off the fishtank. The webcam — or FTP site or mailing list — would have been shut down immediately, and for perfectly valid reasons.

Pay-based-on-usage demands that applications be financially justifiable to live. Pay-per-pipe allowed mailing lists, ftp sites, usenet, archie, gopher and the web to explode.

DHCP Option for street address, PSAP for VoIP E911

While for various reasons I believe that the efforts to enforce E911 requirements on Voice over IP phones are bogus and largely designed to make it harder for smaller players to compete with established companies, there is a legitimate need for ways to give your location to emergency services.

To protect privacy, I suggest that this be done in the endpoints. To assist this, I would propose a set of option extensions to the DHCP protocol to tell an endpoint what the server knows about its location, including address, zip and even what emergency contact center to use. This would start with RFC3825 for geolocation, and move on to other features. The endpoint device, when calling 911 or other emergency services, could include this information in the SIP invite, or provide it on request.

For those who don't know, DHCP is the system which lets a computer connect to an ethernet and ask for an IP address as well as important local network information (such as the addresses of routers, name servers, domain names etc.) Some DHCP servers know exactly who the client device is and effectively act as the client's memory. Some just give the next available address and return information about the local network area.

For example, most people with home networks, and almost all of them who use Voice over IP services like Vonage have a local network with its own DHCP server, built into the home-router they use. That home router could be told the address of the home, and all devices, including VoIP phones, could learn it. For companies, it is the same.

DHCP is also used for ISPs to give addresses to DSL and Cable modem customers who hook up to the internet without a home gateway because they have only one computer. That's pretty rare for VoIP users. In these cases they may or may not know the street address of the computer. DHCP is also very common for people who connect to wireless access points. The AP in a Starbucks could easily tell your device the address of the Starbucks.

As noted, we could start by the device fetching this address and forwarding it on with emergency calls, but not doing so for regular calls. This puts privacy control in the hands of the user, where it should be.
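
A sketch of that endpoint behaviour, assuming the router hands out a civic street address in a DHCP option. RFC 3825 only covers geodetic coordinates, so the option encoding and the SIP header name below are invented for illustration:

```python
# Sketch only: RFC 3825 covers geodetic coordinates; a civic street-address
# DHCP option like the one proposed here is hypothetical, so the simple
# "key=value;..." encoding and the X-Civic-Location header are invented.
def parse_civic_option(raw):
    """Parse a hypothetical DHCP option carrying 'street=...;city=...;zip=...'."""
    fields = {}
    for part in raw.decode("ascii").split(";"):
        if "=" in part:
            key, value = part.split("=", 1)
            fields[key] = value
    return fields

def build_invite(dest, location=None):
    """Build a bare-bones SIP INVITE; attach location only for emergency calls."""
    lines = [f"INVITE sip:{dest} SIP/2.0"]
    if dest.startswith("911") and location:
        # Hypothetical header; the point is that the *endpoint* decides when
        # its location gets sent, keeping privacy control with the user.
        lines.append("X-Civic-Location: " + ", ".join(
            f"{k}={v}" for k, v in location.items()))
    return "\r\n".join(lines)

opt = parse_civic_option(b"street=123 Main St;city=Springfield;zip=94000")
print(build_invite("911@psap.example.net", opt))   # location included
print(build_invite("5551234@example.net", opt))    # location withheld
```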

However, we could do even more than just give location as in RFC 3825. The DHCP server could publish the direct contact information for the local area for police, fire, ambulance or general emergencies. It could simply include the contact number of a PSAP (Public Safety Answering Point, the gateway to emergency services) for the location, or in a corporate setting, might direct emergency calls to the corporate security desk, with the PSAP/911 as a fall-back. (There should be laws, however, about the use of such features and the protection of privacy. Network owners can already reroute any traffic, but we want it to be clear how this might be done.)

Sermon on the Mount, as annotated by George W. Bush

George W. Bush names Jesus as the philosopher he admires the most. The most central of the teachings of Jesus can be found in the Sermon on the Mount.

I have come upon Bush's edited version of the sermon, amended to make the dictates of his Saviour easier to follow in these modern times.

Enjoy here in the Sermon on the Mount (George Bush Version)

Some fault for Phishing on the people who stopped encryption

During the 1990s, the US Government made a major effort to block the deployment of encryption by banning its export. We won that fight, but during the formative years of most internet protocols, they made it hard to add good authentication and privacy to internet tools. They forced vendors to jump through hoops, made users download special "encryption packs" and made encryption the exception rather than the norm in online work.

This, combined with bad design decisions made even without the help of the government, has caused some of the security windows that are bugging people today.

A recent issue is DNS poisoning, now known by the name of pharming. The scammers send fake DNS answers in advance to buggy DNS servers running on MS Windows Service Pack 2 or earlier, or very old *nix copies of bind. They tell the server that www.yourbank.com should really go to their address, which hosts a fake version of the site.

Now of course we should have made DNS reliable and secure to stop this, or at least done the very basic things found in the most up to date DNS servers, but even so, this attack should not have been enough.

That's because SSL certificates were supposed to assure that you were really talking to yourbank.com when the browser said you were, even if somebody hijacked the connection like this. And they do. The phisher can't pretend to be yourbank.com while the little "lock" icon on the status bar of your browser shows locked. But they can pretend when the icon shows unlocked.
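
This is roughly the check the lock icon stands for; a sketch using Python's standard ssl module, which refuses the connection unless the certificate chains to a trusted CA and was issued for the hostname you asked for:

```python
import socket
import ssl

# Sketch of the check the lock icon represents: the connection only succeeds
# if the server presents a certificate that a trusted CA signed *and* that was
# issued for the hostname we asked for.  A pharmed DNS answer can send us to
# the wrong IP address, but it cannot make this check pass for yourbank.com.
def connect_verified(hostname, port=443):
    context = ssl.create_default_context()   # verifies CA chain and hostname
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()["subject"]

# print(connect_verified("www.example.com"))
# A phisher's server fails the handshake with a certificate verification error.
```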

And surprise, surprise, people forget to look at the icon. A lot. They turn off the warnings about transitions to insecure pages because they go off all the time, and nobody pays attention to an alarm that's always going off. Encryption and SSL are rare, special things limited to login screens. We tolerate all the rest of online life being unencrypted and in the clear -- and vulnerable, just like the USDoJ wanted it.

Annotated TV with a DVR

When people watch TV with a hard disk video recorder, they always watch the show delayed, often by hours or many days. They all watch it at a different time.

It occurs to me it would be amusing to build a system to allow the collaborative annotation of TV programs and DVD movies using the net and DVRs like the open source MythTV, which would be a natural initial platform. Users watching a show would be able to make comments at various points in it. Either text comments, along the lines of "Pop-up Video," or even voice comments and jokes, along the lines of "Mystery Science Theatre 3000."

And indeed, people already do this real time. Just about every popular show generates a chat-room for people who watch it live near a computer. However, these are usually quite inane as they are done in real time with no filtering.

Thanks to delayed watching, we could change that. Each suggested annotation would be uploaded quickly to a server handling the particular TV show or movie. It would come with a pseudonym for the author, which would be tied to a reputation. All annotations would be sent out for viewing by a limited audience -- for low-reputation contributors, a very limited audience. If that audience hits an "approve" button on their remote when they see the annotation, it would improve the score, and more and more early watchers would get to see and approve or disapprove of the annotation.
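
A toy sketch of that approval-driven rollout, with arbitrary constants for how much each vote moves the score and how the score maps to an audience:

```python
# Sketch of the approval-driven rollout described above.  The audience an
# annotation reaches grows with its score; the constants are arbitrary.
class Annotation:
    def __init__(self, author_reputation):
        self.score = author_reputation   # start from the author's reputation

    def vote(self, approved):
        if approved:
            self.score += 1
        else:
            self.score -= 2              # disapproval counts against it harder

    def audience_fraction(self):
        """Fraction of later viewers who will be shown this annotation."""
        return max(0.0, min(1.0, self.score / 100.0))

a = Annotation(author_reputation=5)
for _ in range(40):
    a.vote(approved=True)
print(a.audience_fraction())   # 0.45 -- still rolling out to more viewers
```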

Eventually things would build up and you would have a series of highly approved comments for those who want to see a show with comments. I expect most comments would be jokes, but some would also be pointers to useful information or reasoned criticism. Authors might indicate what their goal is so that viewers could tune what sort of annotations they want to see. Viewers could also tune a threshold for how good the annotations have to be to see them.

Authors would indicate if their pop-up should show in a particular place on the screen (so that, like Pop-up Video, it doesn't block things). Some viewers, especially those with big screen TVs, would shrink the image and redirect pop-ups outside the show.

However, there are some interesting problems to solve...

Moratorium on computers calling me by name (and form letters)

Dear [[blog-reader's name]]:

When it first started arising, in the 60s and 70s, everybody thought it was so cute and clever that computers could call us by name. Some programs even started by asking for your name, only to print "Hi, Bob!" to seem friendly in some way.

And of course a million companies were sold mailing list management tools to print form letters, filling in the name of the recipient and other attributes in various places to make the letter seem personal. And again, it was cute in its way.

But not any more. We've all figured it out. Nobody says, "Wow, this letter has 'Dear Brad' in it, it must have been written personally for me." Nobody is fooled any more. In fact, the reverse is now true. It's bordering on offensive. If an E-mail starts with "Dear Brad" it is more likely than not to be spam.

Sometimes though, I get form letters from real companies I deal with, and they still like to put my name in it, like they used to on paper. As you probably know, in E-mail today, you don't put in salutations any more unless it's a mail to a stranger.

So let's get the word out. Stop it. No more form letters where the computer oh-so-cleverly manages to fill in a field with our name. (Unless it's amusing, and they are writing to "Dear Mr. Association") If it's legitimate bulk mail, don't try to pretend you're not bulk mail. That's what spammers do. Be honest that you're bulk mail.

If you have actual relevant data to fill in, fill it in, but put it in a table so I can skip the form letter garbage and get to the actual data about me you're trying to tell me. Put my name at the top in a nice computer-style box, "Prepared for: Brad Templeton."

Leave the use of my name to people writing messages for me. You're not fooling anybody.

Yours truly,
[[Insert name here]]

Why aren't concert tickets sold by dutch auction?

It seems that whenever you have a popular event, notably concerts in smaller venues and certain plays, the venue sells out their tickets quickly, and then ticket speculators leap in and sell the tickets at high margins. Ticket speculating (aka scalping) is legal in some areas and illegal in others. I don't think it should be illegal, but I wonder why the venues and performers tolerate so much of the revenue going to the speculators.

Or am I wrong, and this is not happening? Is it the case that often the speculators miscalculate and lose money so they only make a modest income? It doesn't seem that way to me. Now, there are many ticket brokers with large web presences (including some who sponsor my joke site) and tickets are commonly auctioned on eBay.

So why don't the venues or ticket companies create their own auction sites to auction tickets, with some fair system like a dutch auction, and keep all the money from high-demand events for themselves? Is it simply because this seems elitist and they feel it will annoy fans?

Currently, fans are annoyed because speculators scoop up tickets to high-demand events as soon as sales open, and such events sell out quickly, before actual fans can get them. That seems far worse to me. An auction system would actually allow lesser tickets to sell for less money and generate the same revenue for the event.

This seems so obvious, why isn't it taking place? Is it simply inertia, or a fear of requiring computer access in order to get tickets? While just about anybody can get computer access these days, dutch auctions can be done by phone if you trust the 3rd party managing the auction. Call in once, set your maximum bid for the various ticket classes you will accept, then find out the resulting price later. People at computers would have a small advantage, but not that much. The venue could set a floor/reserve price if they don't want to cheapen the value of their product.
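
For the curious, here is a sketch of the uniform-price ("dutch", as used above) auction mechanics: everyone names a maximum bid, the top bidders win, and all winners pay the lowest winning bid, subject to the floor price. The names and numbers are invented:

```python
# Sketch of the uniform-price ("dutch") auction the post has in mind:
# everyone submits a maximum bid, the top `seats` bidders win, and they all
# pay the same clearing price -- the lowest winning bid.
def run_auction(bids, seats, reserve=0.0):
    """bids: {bidder: max_bid}.  Returns (clearing_price, winners)."""
    eligible = sorted(
        ((amount, bidder) for bidder, amount in bids.items() if amount >= reserve),
        reverse=True)
    winners = eligible[:seats]
    if not winners:
        return None, []
    clearing_price = winners[-1][0]        # lowest winning bid
    return clearing_price, [bidder for _, bidder in winners]

price, winners = run_auction(
    {"alice": 120, "bob": 80, "carol": 95, "dave": 60}, seats=2, reserve=50)
print(price, winners)   # 95 ['alice', 'carol'] -- both pay $95
```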

Or is this a business opportunity for some company (or for Ticketmaster?)

Open Source's backwards-compatibility failure

Linux distributions with package managers like apt promise an easy world of installing lots of great software. But they've fallen down in one respect. There are thousands of packages for the major distributions (I run three of them: Debian, Fedora Core and Gentoo), but most packages depend on several other packages.

The developers and packagers tend to run recent, even bleeding-edge versions of their systems. So when they package, the software claims it depends on very recent versions of other programs, even if it doesn't. This is not surprising -- testing on lots of old systems is drudgework nobody relishes doing.

So when you see a new software package you want, the ideal is that you can just grab it with apt-get or yum. The reality is you can only do this if you're running a highly up-to-date system. Debian has become the worst offender. Debian's "Stable" distribution is several years old now. To run Debian reasonably, even just to be able to upgrade to fix bugs in software you use, you have to run the testing distribution, and most probably the unstable one. I run unstable, and it's more stable than the name implies, but ordinary users should not be expected to run an unstable distribution.

To get new software, you are often forced to upgrade, sometimes your whole OS. And that's free to do and often it works, but you can't depend on it. More than once I have lost a day of uptime to major upgrade efforts.

Let's contrast that with Windows. The vast majority of Windows programs will install, in their latest version, on 7-year-old Windows 98, and almost all will install on 5-year-old Windows 2000. This is partly because Windows has fewer milestones to test against, but also because coders know that it's quite a hurdle to insist users pay money to upgrade Windows. (And Windows upgrades are even more of a pain than linux ones.)

The linux approach ends up forcing the user to choose between the risky course of constant incremental upgrades, taking occasional random plunges into major upgrades, or simply not being able to run interesting new software or the latest versions and fixes of older software.

That's a failure. Non-guru users are not able to deal with any of those choices.

Testing with every different version of every dependent package (and every kernel) is not going to happen, but it would be nice if packagers worked hard to figure out what versions of dependencies they really need, even if they can't test them all. Packages might say, "I was tested with 2.1, but I probably work with 1.0." Then they could wait for test reports and gradually report being tested with earlier and earlier dependencies.
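
A sketch of how an installer might act on such dual hints; the field names and version scheme are invented, not anything apt or yum actually supports:

```python
# Sketch of the "tested with X, probably works with Y" idea.  The metadata
# fields and the dotted version format are invented for illustration.
def parse_version(v):
    return tuple(int(x) for x in v.split("."))

def check_dependency(installed, tested_with, probably_works_with):
    """Return 'ok', 'probably-ok', or 'too-old' for one dependency."""
    if parse_version(installed) >= parse_version(tested_with):
        return "ok"
    if parse_version(installed) >= parse_version(probably_works_with):
        return "probably-ok"    # install, but warn and invite a test report
    return "too-old"

print(check_dependency("2.3", tested_with="2.1", probably_works_with="1.0"))  # ok
print(check_dependency("1.4", tested_with="2.1", probably_works_with="1.0"))  # probably-ok
print(check_dependency("0.9", tested_with="2.1", probably_works_with="1.0"))  # too-old
```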

This doesn't mean that sometimes you won't truly need the latest version of a dependency, and shouldn't say so. But it sure would make it easier for the ordinary user to participate in linux if this were the exception, not the rule.

3-D art on machine built wall

In this article about a wall-building robot we see another step towards automatic construction, moving the 3-D printer concept onto the grand scale. This is very interesting and could be expanded quite a bit. It notes that arms could add texture to ceramic walls, but I would go further.

Why not create a texturing head which consists of strong metal pins on high-speed servos? You could drag this over the surface of malleable material, moving the servos back and forth under computer control along raster lines. This would allow the generation of any digital image in 3-D on the wall, to a limited depth.
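
A sketch of the rasterizing step: treat a grayscale image as a depth map and convert each raster line of pixels into pin displacements. The maximum depth and the nested-list image format are assumptions:

```python
# Sketch of driving the texturing head: treat a grayscale image as a depth
# map and convert each raster line of pixels into pin displacements.  The
# maximum depth and the image-as-nested-list representation are assumptions.
MAX_DEPTH_MM = 8.0   # how far a pin may press into the wet material

def line_to_depths(pixel_row):
    """Map one raster line of 0..255 brightness values to pin depths in mm."""
    return [MAX_DEPTH_MM * (255 - p) / 255.0 for p in pixel_row]

def raster_image(image):
    """Yield (row_index, depths) for the head to execute as it is dragged."""
    for y, row in enumerate(image):
        yield y, line_to_depths(row)

tiny_image = [[255, 128, 0],
              [0, 128, 255]]
for y, depths in raster_image(tiny_image):
    print(y, [round(d, 1) for d in depths])
# 0 [0.0, 4.0, 8.0]
# 1 [8.0, 4.0, 0.0]
```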

You could do simple things like textures, or pleasing graphics of plants or nice patterns, but sculptors could also generate interesting forms of art for people to place in 3-D on their walls.

This could also be done on modern drywall. A set of rails could be mounted on a wall. A robot would run on the rails, first applying stucco, then when it is at the right consistency, run the "print head" to place patterns or sculpture into the stucco.

You might be able to do full 3-D printing, though I see that as harder to do on a vertical surface, by having a "stucco-jet" with various coloured ceramics in the pipes and individually controlled pumps to push out the right material at the right time, possibly for further shaping by the servo-pins, though I suspect they would work better with monocolour.