Submitted by brad on Wed, 2007-05-30 11:32.
I wrote recently about the paradox of identity management and how the easier it is to offer information, the more often it will be exchanged.
To address some of these issues, let me propose something different: the creation of an infrastructure that allows people to generate secure (effectively anonymous) pseudonyms in such a manner that each person can have at most one such ID. (There would be various classes of these IDs, so people could have many IDs, but only one of each class.) I’ll call this a QID (the Q “standing” for “unique”).
The value of a unique ID is strong — it allows one to associate a reputation with the ID. Because you can only get one QID, you are motivated to carefully protect the reputation associated with it, just as you are motivated to protect the reputation on your “real” identity. With most anonymous systems, if you develop a negative reputation, you can simply discard the bad ID and get a new one which has no reputation. That’s annoying but better than using a negative ID. (Nobody on eBay keeps an account that gets a truly negative reputation. An account is abandoned as soon as the reputation seems worse than an empty reputation.) In effect, anonymous IDs let you demonstrate a good reputation. Unique IDs let you demonstrate you don’t have a negative reputation. In some cases systems try to stop this by making it cost money or effort to generate a new ID, but it’s a hard problem. Anti-spam efforts don’t really care about who you are, they just want to know that if they ban you for being a spammer, you stay banned. (For this reason many anti-spam crusaders currently desire identification of all mailers, often with an identity tied to a real world ID.)
I propose this because many web sites and services which demand accounts really don’t care who you are or what your E-mail address is. In many cases they care about much simpler things — such as whether you are creating a raft of different accounts to appear as more than one person, or whether you will suffer negative consequences for negative actions. To solve these problems there is no need to provide personal information to use such systems.
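The one-per-class rule is the heart of the proposal. Here is a minimal sketch of just that rule, assuming a trusted issuer; the class and field names are invented for illustration, and a real design would use blinded credentials so the issuer could not link a QID back to the person it verified:

```python
import hashlib
import secrets

class QIDIssuer:
    """Toy issuer enforcing 'at most one ID per person per class'.

    This sketch only demonstrates the uniqueness rule; a real system
    would blind the issuance so the issuer cannot connect the QID to
    the verified person.
    """

    def __init__(self):
        self.issued = set()  # (person_hash, id_class) pairs already served

    def issue(self, verified_person: str, id_class: str) -> str:
        person_hash = hashlib.sha256(verified_person.encode()).hexdigest()
        key = (person_hash, id_class)
        if key in self.issued:
            raise ValueError("one QID per class per person")
        self.issued.add(key)
        # The QID itself is random, so it carries no personal information,
        # yet any reputation attached to it is the holder's only one.
        return f"{id_class}:{secrets.token_hex(16)}"
```

The point of the uniqueness check is exactly the reputation argument above: since a second forum-class QID is refused, abandoning a tarnished one is not an option.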
Submitted by brad on Tue, 2007-05-29 14:02.
I’ve just returned from the 25th reunion of my graduating class in Mathematics at the University of Waterloo. I had always imagined that a 25th reunion would be the “big one,” so I went. In addition, while I found I had little in common with my high school classmates, even having spent 13 years growing up with many of them, like many techie people I found my true community at university, so I wanted to see them again. To top it off, it was the 40th anniversary of the faculty and the 50th anniversary of the university itself.
But what if they had a reunion and nobody came? Or rather, out of a class of several hundred, under 20 came, many of whom I only barely remembered and none of whom I was close to?
Submitted by brad on Fri, 2007-05-18 14:41.
In 2005, John Scalzi burst on the scene with a remarkable first novel, Old Man’s War. It got nominated for a Hugo and won him the Campbell award for best new writer. Many felt it was the sort of novel Heinlein might be writing today. That may be too high praise, but it’s close. The third book in this trilogy has just come out, so it was time to review the set.
It’s hard to review the book without some spoilers, and impossible for me to review the latter two books without spoiling the first, but I’ll warn you when that’s going to happen.
OMW tells the story of John Perry, a 75 year old man living on an Earth only a bit more advanced than our own, though it’s hundreds of years in the future. Earth people know they’re part of a collection of human colonies which does battle with nasty aliens, but they are kept in the dark about the realities. People in the third world are offered one-way trips to join colonies. People in the first world can, when they turn 75, sign up for the colonial military, again a one-way trip. It’s not a hard choice to make, since everybody presumes the military will make them young again, and the alternative is ordinary death by old age.
The protagonist and his wife sign up, but she dies before the enlistment date, so he goes on his own. The first half of the book depicts his learning the reality of the colonial union, and boot camp, and the latter half outlines his experiences fighting against various nasty aliens.
It’s a highly recommended read. If you loved Starship Troopers or The Forever War this is your kind of book.
Now I’ll go into some minor spoilers.
Submitted by brad on Wed, 2007-05-16 16:34.
Since the dawn of the web, there has been a call for a “single sign-on”
facility. The web consists of millions of independently operated web sites,
many of which ask users to create “accounts” and sign-on to use the site.
This is frustrating to users.
Today the general single sign-on concept has morphed into what is now called
“digital identity management” and is considerably more complex. The most recent
project of excitement is OpenID which is a standard which allows users
to log on using an identifier which can be the URL of an identity service,
possibly even one they run themselves.
Many people view OpenID as positive for privacy because of what came before it.
The first major single sign-on project was Microsoft Passport which came
under criticism both because all your data was managed by a single company and
that single company was a fairly notorious monopoly. To counter that, the
Liberty Alliance project was brewed by Sun, AOL and many other companies,
offering a system not run by any single company. OpenID is simpler and even more decentralized.
However, I feel many of the actors in this space are not considering an inherent
paradox that surrounds the entire field of identity management. On the
surface, privacy-conscious identity management puts control over who gets
identity information in the hands of the user. You decide who to give identity
info to, and when. Ideally, you can even revoke access, and push for minimal
disclosure. Kim Cameron summarized a set of laws of identity
outlining many of these principles.
In spite of these laws, one of the goals of most identity management
systems has been ease of use. And who, on the surface, can argue with ease
of use? Managing individual accounts at a thousand web sites is hard.
Creating new accounts for every new web site is hard. We want something easier.
However, here is the contradiction. If you make something easy to do,
it will be done more often. It’s hard to see how this can’t be true.
The easier it is to give somebody ID information, the more often it will
be done. And the easier it is to give ID information, the more palatable
it is to ask for, or demand it.
Submitted by brad on Wed, 2007-05-09 16:05.
In the 1980s, my brother Ty Templeton published his first independent comic book series, Stig’s Inferno. He went on to considerable fame writing and drawing comics for Marvel, D.C. and many others, including favourite characters like Superman, Batman and Spider-Man, as well as a lot of comics associated with TV shows like The Simpsons and Ren and Stimpy. But he’s still at his best doing original stuff.
You may not know it, but years ago I got most of Stig’s Inferno up on the web. Just this week, however, a fan scanned in the final issue and I have converted it into web pages. The fan also scanned the covers and supplemental stories from the issues; those will be put up later.
So if you already enjoyed the other episodes, journey now to Stig’s Inferno #7.
If you’ve never looked, go to the main Stig’s Inferno page. You can also check out small versions of all the issue covers.
I’ll announce when the supplemental stories are added.
The comic tells a variation of Dante’s Inferno, where our hero Stig is killed by the creatures that live in his piano and makes a strange journey through the netherworld. It’s funny stuff, and I’m not just saying it because he’s my brother. Give it a read.
Submitted by brad on Mon, 2007-05-07 18:49.
First, let me introduce a new blog topic, Sysadmin, where I will cover computer system administration and OS design issues, notably in Linux and related systems.
My goal is to reduce the nightmare that is system administration and upgrading.
One step that goes partway in my plan would be a special software system that would build for a user a specialized operating system “package” or set of packages. This magic package would, when applied to a virgin distribution of the operating system, convert it into the customized form that the user likes.
The program would work from a modified system, and a copy of a map (with timestamps and hashes) of the original virgin OS from which the user began. First, it would note what packages the user had installed, and declare dependencies for these packages. Thus, installing this magic package would cause the installation of all the packages the user likes, and all that they depend on.
In order to do this well, it would try to determine which packages the user actually used (with access or file change times) and perhaps consider making two different dependency setups — one for the core packages that are frequently used, and another for packages that were probably just tried and never used. A GUI to help users sort packages into those classes would be handy. It must also determine that those packages are still available, dealing with potential conflicts and name change concerns. Right now, most package managers insist that all dependencies be available or they will abort the entire install. To get around this, many of the packages might well be listed as “recommended” rather than required, or options to allow install of the package with missing first-level (but not second-level) dependencies would be used.
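The dependency-splitting step could be sketched roughly as below. The function and field names are invented for illustration; the output imitates the control-file style that a tool like Debian’s equivs turns into an installable metapackage, with rarely-used packages demoted to Recommends so a missing one won’t abort the whole install:

```python
def build_metapackage_control(usage, name="my-setup", stale_days=90):
    """Partition installed packages into hard and soft dependencies.

    `usage` maps package name -> days since last access (derived,
    say, from atime).  Frequently used packages become Depends;
    packages that look merely tried-and-abandoned become Recommends.
    """
    core = sorted(p for p, days in usage.items() if days <= stale_days)
    extra = sorted(p for p, days in usage.items() if days > stale_days)
    lines = [
        f"Package: {name}",
        "Version: 1.0",
        "Description: rebuild this machine's package selection",
    ]
    if core:
        lines.append("Depends: " + ", ".join(core))
    if extra:
        lines.append("Recommends: " + ", ".join(extra))
    return "\n".join(lines) + "\n"
```

Installing the resulting metapackage on a virgin system would then pull in the whole selection, which is the “magic package” idea in miniature.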
Submitted by brad on Sun, 2007-05-06 23:52.
At our new favourite Indian buffet (Cafe Bombay) they run Bollywood videos on big screens all the time. In Bollywood, as you probably know, everybody is dancing all the time, in wonderful synchronization, like Broadway but far more so. I’ve never been to an Indian dance club to see if people try to do that in real life, but I suspect they want to.
I started musing about a future where brain implants let you give a computer control of your limbs so you could participate in such types of dance, but I realized we might be able to do something much sooner.
Envision either a special suit or a set of cuffs placed around various parts of the arms and legs. The cuffs would be able to send stimuli to the skin, possibly by vibrating or a mild electric current, or even the poke of a small actuator.
With these cuffs, we would develop a language of dance that people could learn. Dancers have long used Dance notation to record dances and communicate them, and more sophisticated systems are used to have computerized figures dance. (Motion capture is also used to record dances, and often to try to distill them to some form of encoding.) In this case, an association would be made between stimuli and moves. If you feel the poke on one part of your left wrist, you move your left arm in a certain way; a different set of pokes commands a different move. There would no doubt have to be chords (multiple stimulators on the same cuff) to signal more complex moves.
Next, people would have to train so that they develop an intuitive response, so that as soon as they feel a stimulus, they make the move. People with even modest dance skill of course learn to make moves as they are told them or as they see them, without having to consciously think about it a great deal. The finest dancers, as we have seen, can watch a choreographer dance and duplicate the moves with great grace due to their refined skill.
I imagine people might learn this language with something like a video game. We’ve already seen the popularity of Dance Dance Revolution (DDR) where people learn to make simple foot moves by seeing arrows on the screen. A more advanced game would send you a stimulus and test how quickly you make the move.
The result would be to become a sort of automaton. As the system fed you a dance, you would dance it. And more to the point, if it fed a room full of people a dance, they would all dance the same dance, in superb synchronization (at least for those of lower skill). This could even be done without music, though normally it would all be coordinated with the music. Dance partners could even be fed complementary moves. Indeed, very complex choreographies could be devised, combined with interesting music, to be done at dance clubs in moves that would go way beyond techno. I can see even simple moves, like getting people to raise and move hands in patterns and syncs, being very interesting, and more to the point, fun to participate in.
In addition, this could be a method to train people in new and interesting dances. Once one danced a dance under remote control several times one would presumably then be able to do it without the cuffs, and perhaps more naturally. Just like learning a piece of music with the sheet music and eventually being able to take the music away.
I suspect the younger people were when they started this, the better they would be at it.
It could also have application in the professional arena, to bring a new member of a troupe up to speed, or for a dance to be communicated quickly. Even modest dancers might be able to perform a complex dance immediately. It could also possibly become a companion to Karaoke.
There are other means besides cuffs to communicate moves to people of course, including spoken commands into earphones (probably cheapest and easiest to put on) and visual commands (like DDR) into an eyeglass heads-up-display once they become cheap. The earphone approach might be good for initial experiments. One advantage of cuffs is the cuffs could contain accelerometers which track how the limb moved, and thus can confirm that the move was done correctly. This would be good in video game training mode. In fact, the cuffs could even provide feedback for the correct move, offering a stimulus if the move is off in time or position.
There have been some “use people as robots” experiments before, but let’s face it, dance is more fun. And an actual Bollywood movie could come to life.
Submitted by brad on Fri, 2007-05-04 18:38.
Self-driving cars are still some ways in the future, but there are some things they will want that human drivers can also make use of.
I think it would be nice if the urban data networks were to broadcast the upcoming schedule for traffic light changes in systems with synchronized traffic lights. Information like “The light at location X will go green westbound at 3:42:15.3, amber at 3:42:45.6 and red at 3:42:47.8” and so on. Data for all directions and for turn arrow lights etc. This could be broadcast on data networks, or actually even in modulations of the light from the LEDs in the traffic lights themselves (though you could not see that around turns and over hills.)
Now a simple device that could go in the car could be a heads-up-display (perhaps even just an audio tone) that tells you whether you are in the “zone” for a green light. As you move through traffic, if you got going so fast that you would reach the intersection too early for it to be green, it could show you in the too-fast zone with a blinking light, or a tone that rises in pitch the faster you go. A green light (no tone) would appear when you were in the zone.
It would arrange for you to arrive at the light after it had been green for a second or two, to avoid the risk of hitting cars running the red light in the other direction. Sometimes when I drive down a street with timed lights I will find myself trusting the timing a bit too much, so I am blowing through the moment the light is green, which actually is a bit risky because of red light runners.
(Perhaps the city puts in a longer all-red gap on such lights to deal with this?)
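The zone logic above is simple arithmetic. Here is a rough sketch under assumed units (a made-up function name, a steady speed, a shared clock, and a fixed arrival margin after the green to dodge red-light runners):

```python
def speed_window(distance_m, now_s, green_s, red_s,
                 margin_s=2.0, speed_limit_mps=13.4):
    """Range of steady speeds that arrive while the light is green.

    distance_m: metres to the intersection
    now_s, green_s, red_s: current time and the broadcast times the
        light turns green and then red (seconds on a shared clock)
    margin_s: arrive at least this long after the green starts
    speed_limit_mps: 13.4 m/s is roughly 30 mph
    Returns (min_speed, max_speed) in m/s, or None if no legal
    steady speed gets you there on green.
    """
    earliest = max(green_s + margin_s - now_s, 0.001)  # soonest allowed arrival
    latest = red_s - now_s                             # must beat the red
    if latest <= 0:
        return None
    fastest = min(distance_m / earliest, speed_limit_mps)
    slowest = distance_m / latest
    if slowest > fastest:
        return None
    return (slowest, fastest)
```

If your current speed falls inside the returned window, the display stays green and silent; above it, the warning tone starts.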
More controversial is the other direction, a tone telling you that you will need to speed up to catch this green before it goes amber. This might encourage people to drive recklessly fast and might be a harder product to legally sell. Though perhaps it could tell you that if you sped up to the limit you would make the light but stop telling you after no legal speed can make it. Of course, people would learn to figure it out.
We figure that out already of course. Many walk/don’t walk signs now have red light countdown timers, and how many of us have not sped up upon seeing the counter getting low? Perhaps this isn’t that dangerous. Just squeaking through a light rarely helps, of course, because the way the timing works you usually are even more likely to miss the next one, and you have to go even faster to make it — to the point that even a daredevil won’t try.
This simple device could be just the start of it. Knowledge of this data for the city (combined with a good GPS map system, of course) could advise you of good alternate routes where you will get better traffic light timing. It could advise you to turn if you’re first at a red light (which it will know thanks to GPS) if your destination is off to the right anyway. Of course it could do better combined with real traffic data and information on construction, gridlock etc.
This is not a cruise control; you would still control the gas. However, if you pressed too hard on the gas your alert would start making the tone, and you would soon learn it is quite unproductive to keep pressing. (You could make this a cruise control, but you need to be able to speed up sometimes to avoid things and change lanes.) People tend more often to speed up and then have to brake for a short while waiting for the green, which doesn’t get you there any faster, and makes for a jerky ride.
The system I describe could be a nice add-on for car GPS systems.
Submitted by brad on Fri, 2007-05-04 14:14.
Most browsers now have a search box in the toolbar, which is great, and like most people’s, mine defaults to Google. I can change the engine with a drop-down menu to other places, like Amazon, Wikipedia, IMDB, eBay, Yahoo and the like. But that switch is a change in the default, rather than a temporary change — and I don’t want that, I want it to snap back to Google.
However, I’ve decided I want something even more. I’ll make a plea to somebody who knows how to do Firefox add-ons to make a plug-in so I can choose my search engine with some text in the query I type. In other words, if I go to the box (which defaults to Google) I could type “w: foobar” to search Wikipedia, and “e: foobar” to search eBay and so on. Google in fact uses a syntax with keyword and colon to trigger special searches, though it tends not to use one letter. If this bothers people, something else like a slash could be used. While it would not be needed, “g: foobar” would search on Google, so “g: w: foobar” would let you search for “w: foobar” on Google. The actual syntax of the prefix string is something the user could set, or it could be offered by the XML that search engine entries are specified with.
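The parsing such a plug-in needs is tiny. A sketch of the idea follows; the engine table and URL templates are illustrative stand-ins, not what a real add-on would ship:

```python
from urllib.parse import quote_plus

# Hypothetical engine table; templates are for illustration only.
ENGINES = {
    "g": "https://www.google.com/search?q={}",
    "w": "https://en.wikipedia.org/wiki/Special:Search?search={}",
    "e": "https://www.ebay.com/sch/i.html?_nkw={}",
}

def route_query(query, default="g"):
    """Pick a search engine from an 'x: terms' prefix.

    Only the first prefix is consumed, so 'g: w: foobar' really does
    search Google for the literal text 'w: foobar'.
    """
    prefix, sep, rest = query.partition(": ")
    if sep and prefix in ENGINES:
        engine, terms = prefix, rest
    else:
        engine, terms = default, query
    return ENGINES[engine].format(quote_plus(terms))
```

The snap-back-to-default behaviour falls out naturally: an unprefixed query always goes to the default engine, with no mode to reset.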
Why is this the right answer? It’s no accident that Google uses this. They know. Whatever your thoughts on the merits of command line interfaces and GUIs, things often get worse when you try to mix them. Once you have me typing on the keyboard, I should be able to set everything from the keyboard. I should not be forced to move back and forth from keyboard to pointing device if I care to learn the keyboard interface. You can have the GUI for people who don’t remember, but don’t make it be the only route.
What’s odd is that you can do this from the Location bar and not the search bar. In Firefox, go to any search engine, and right click on the search box. Select “Add a Keyword for this Search” and this lets you create a magic bookmark which you can stuff anywhere, whose real purpose is not to be a bookmark, but a keyword you can use to turn your URL box into a search box that is keyword driven.
You don’t really even need the search box, which makes me wonder why they did it this way.
Submitted by brad on Thu, 2007-05-03 18:03.
High posting volume today. I just find it remarkable that in the last 2 weeks I’ve seen several incredible breakthrough level stories on health and life extension.
Today sees this story on understanding how caloric restriction works, which will appear in Nature. We’ve been wondering about this for a while; obviously I’m not the sort of person who would have an easy time following caloric restriction. Some people have wondered if Resveratrol might mimic the actions of CR, but this shows we’re coming to a much deeper understanding of it.
Yesterday I learned that we have misunderstood death and in particular how to revive the recently dead. New research suggests that when the blood stops flowing, the cells go into a hibernation that might last for hours. They don’t die after 4 minutes of ischemia the way people have commonly thought. In fact, this theory suggests, the thing that kills patients we attempt to revive is the sudden inflow of oxygen we provide for revival. It seems to trigger a sort of “bug” in the mitochondria, triggering apoptosis. As we learn to restore oxygen in a way that doesn’t do this, especially at cool temperatures, it may be possible to revive the “dead” an hour later, which has all sorts of marvelous potential for both emergency care and cryonics.
Last week we were told of an absolutely astounding new drug which treats all sorts of genetic disorders. A pill curing all those things sounds like a miracle. It works by altering the ribosome so that it ignores certain errors in the DNA which normally make it abort, causing complete absence of an important protein. If the errors are minor, the slightly misconstructed protein is still able to do its job. As an analogy, this is like having parity memory and disabling the parity check in a computer. It turns out parity errors are quite rare, so most of the time this works fine. When a parity check fails the whole computer often aborts, which is the right move in the global scale — you don’t want to risk corrupting data or not knowing of problems — but in a human being, aborting the entire person due to a parity check is a bit extreme from the individualistic point of view.
These weren’t even all the big medical stories of the past week. There have been cancer treatments and more, along with a supercomputer approaching the power of a mouse brain.
Submitted by brad on Thu, 2007-05-03 13:28.
While I was at Tim O’Reilly’s Web 2.0 Expo, I did an interview with an online publication called Web Pro News. I personally prefer written text to video blogging, but for those who like to see video, you can check out:
Video Interview on Privacy and Web 2.0
The video quality is pretty good, if not the lighting.
The main focus was to remind people that as we return to timesharing, which is to say, move our data from desktop applications to web based applications, we must be aware that putting our private data in the hands of 3rd parties gives it less constitutional protection. We’re effectively erasing the 4th Amendment.
I also hint at an essay I am preparing on the evils of user-controlled identity management software, and give my usual rant about thinking about how you would design software if you were living in China or Saudi Arabia.
I also was interviewed some time ago about Google and other issues by a French/German channel. That’s a 90-minute program entitled Faut-il avoir peur de Google ? (Should we fear Google?). It’s also available in German. It was up for free when I watched it, but it may now require payment. (I only appear for a few minutes, my voice dubbed over.)
When I was interviewed for this I offered to, with some help, speak in French. I am told I have a pretty decent accent, though I no longer have the vocabulary to speak conversationally in French. I thought it would be interesting if they helped me translate and then I spoke my words in French (perhaps even dubbing myself later if need be.) They were not interested since they also had to do German.
Another video interview by a young French documentarian producing a show called Mix-Age Beta can be found here. The lighting isn’t good, but this time it’s in English. It’s done under the palm tree in my back yard.
Submitted by brad on Thu, 2007-05-03 12:36.
I wasn’t going to make any special commemoration, but it seems a whole ton of other blogs are linking today to my articles on the history of Spam, so I should blog them as well.
Many years ago I got interested in the origins of the term “spam” to mean net abuse. I mean I had lived through most of its origin and seen most of the early spams myself, but it wasn’t clear why people took the name of the meat product and applied it to junk mail. I knew it came from USENET, so I used the USENET search engines to trace the origins.
This resulted in my article on the origins of word spam to mean net abuse.
In doing the research, I was pointed to what was probably the earliest internet spam, though it far predates the term.
I documented that in Reactions to the first spam.
Four years ago, on the 25th anniversary of that spam, I was interviewed on NPR’s All Things Considered and wrote an article reflecting on the history. For that article I dug up Gary Thuerk, the sender of that first spam, and interviewed him for more details.
You can read that in Reflections on the 25th anniversary of Spam.
Of course, you can find all these and many more in my collection of articles on Spam. Many years ago I wrote a wide variety of essays on the spam problem. Not simply about solutions, but analysis of why the fight was so nasty, and concern over the rights people were willing to give up in the name of fighting spam.
I will probably update them, and do some more research for the 30th anniversary, next year.
Submitted by brad on Wed, 2007-05-02 19:38.
I really wish I could find a really good calendaring tool. I’ve seen many of the features I want scattered in various tools, though some are nowhere to be found. I thought it would be good to itemize some of them. I’m mostly interested in *nix — I know that on Windows, MS Outlook is the most common choice, with Exchange for sharing.
Submitted by brad on Tue, 2007-05-01 14:05.
I’ve been writing a lot about self-driving cars which have automatic accident avoidance and how they will change our cities. I was recently talking again with Robin Chase, whose new company, GoLoco, attempts to set people up for ad-hoc carpools, and got into the issues again. She believes we should use more transit in cities, and there’s a lot of merit to that case.
However, in the wealthy USA, we don’t, outside of New York City. We love our cars, and we can afford their much higher cost, so they still dominate, and even in New York many people of means rely strictly on taxis and car services.
Transit is, at first glance, more energy efficient. When it shares right of way with cars it reduces congestion. Private right of way transit also reduces congestion, but only if you don’t consider the cost of the private right-of-way, where the balance is harder to decide. (The dedicated land carries a many-person vehicle only a small fraction of the time, while ordinary roads carry 1-3 passenger vehicles almost all the time.)
However, my new realization is that transit may not be as energy efficient as we hope. During rush hour, packed transit vehicles are very efficient, especially if they have regenerative braking. But outside those hours it can be quite wasteful to have a large bus or train with minimal ridership. However, in order to give transit users flexibility, good service outside of rush-hour is important.
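The load sensitivity can be shown with a toy calculation. The energy ratios below are illustrative assumptions for the sake of the arithmetic, not measured figures:

```python
def per_passenger_energy(vehicle_energy_per_mile, riders):
    """Energy per passenger-mile: the vehicle burns roughly the
    same amount whether it is full or nearly empty."""
    if riders <= 0:
        raise ValueError("need at least one rider")
    return vehicle_energy_per_mile / riders

# Illustrative assumption: suppose a large bus uses ~6x the
# energy per mile of a small car (units are arbitrary).
BUS, CAR = 6.0, 1.0

rush_hour = per_passenger_energy(BUS, 40)   # packed bus: very efficient
late_night = per_passenger_energy(BUS, 3)   # near-empty bus: worse than a car
solo_car = per_passenger_energy(CAR, 1)
```

Under these assumed numbers the packed bus beats the solo car handily, while the near-empty late-night bus loses to it, which is the off-peak problem in a nutshell.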
Submitted by brad on Mon, 2007-04-30 14:39.
I’ve been remiss in updating my panoramas, so I just did some work on the site and put up a new page full of Alberta panoramas, as well as some others I will point to shortly.
The Alberta Rockies are among the most scenic mountains in the world. Many have called the Icefields Parkway, which goes between Banff and Jasper national parks, the most scenic drive in the world. I’ve taken it several times in both summer and winter and it is not to be missed. I have a wide variety of regular photos I need to sort and put up as well from various trips.
This image is of Moraine Lake, which is close to the famous Lake Louise. All the lakes of these parks glow in incredible colours of teal, blue and green due to glacial silt. In winter they are frozen and the colour is less pronounced, but the mountains are more snow-capped, so it’s hard to say which is the best season. (This photo is available as a jigsaw puzzle from Ratzenberger.)
Enjoy the Panoramics of Alberta. And I recommend you book your own trip up to Calgary or Edmonton to do the drive yourself. I think you’ll find this to be among my best galleries of panoramas.
I also recently rebuilt and improved my shot of Ginza-5-Chome, Tokyo’s most famous street corner. While it was handheld I have been able to remove almost all the ghosts with new software.
Submitted by brad on Wed, 2007-04-25 23:54.
As part of my series on the horrors of modern system administration and upgrading, let me propose the need for a universal API, over all operating systems, for accessing data from, and some control of the package management system.
There have been many efforts in the past to standardize programming APIs within all the unix-like operating systems, some of them extending into MS Windows, such as POSIX. POSIX is a bit small to write very complex programs fully portably, but it’s a start. Any such API can make porting easier, even if it can’t make it trivial the way it’s supposed to.
But there has been little effort to standardize the next level, machine administration and configuration. Today a large part of that is done with the package manager. Indeed, the package manager is the soul (and curse) of most major OS distributions. One of the biggest answers to “what’s the difference between Debian and Fedora” is “dpkg and apt, vs. rpm and yum.” (Yes you can, and I do, use apt with rpm.)
Now the truth is that from a user perspective, these package managers don’t actually look very different. They all install and remove packages by name, perform upgrades, handle dependencies etc. Add-ons like apt and GUI package managers help users search and auto-install all dependencies. To the user, the most common requests are to find and install a package, and to upgrade it or the system.
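To make the idea concrete, here is a hypothetical sketch of such an API as a small facade. None of these names are an existing standard; real backends would shell out to dpkg, rpm and friends, and an in-memory fake makes tools testable against the same interface:

```python
from abc import ABC, abstractmethod

class PackageManager(ABC):
    """Cross-distro facade (invented for illustration).

    Concrete subclasses would wrap dpkg/apt, rpm/yum, etc., so
    admin tools could be written once against this interface.
    """

    @abstractmethod
    def installed(self) -> dict:
        """Map of package name -> installed version."""

    @abstractmethod
    def install(self, name: str) -> None: ...

    @abstractmethod
    def remove(self, name: str) -> None: ...

    def is_installed(self, name: str) -> bool:
        return name in self.installed()

class FakeManager(PackageManager):
    """In-memory backend, useful for testing tools against the API."""
    def __init__(self):
        self.db = {}
    def installed(self):
        return dict(self.db)
    def install(self, name):
        self.db[name] = "1.0"
    def remove(self, name):
        self.db.pop(name, None)
```

The common operations the paragraph lists (install, remove, query, upgrade) are exactly the surface such a standard would need to cover first.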
Submitted by brad on Mon, 2007-04-23 00:00.
Many people accumulate a lot of frequent flyer miles they will never use. Some of the airlines allow you to donate miles to a very limited set of charities. I can see why they limit it — they would much rather have you not use the miles than have the charity use them. Though it’s possible that while the donor does not get any tax credit for donated miles, the airline does.
However, it should be possible for a clever web philanthropist to set up a system to allow people to donate miles to any charity they wish. This is not a violation of the terms of service on flyer miles, which only forbid trading them for some valuable consideration, in particular money.
The site would allow charities to register and donors to promise miles to the charities. A charity could then look at its balance, and go to the airline’s web site before they book travel to see if the flight they want can be purchased with miles. If so, they would enter the exact itinerary into the web site, and a suitable donor would be mailed the itinerary and passenger’s name. They would make the booking, and send the details back to the charity. (Several donors could be mailed, the first to claim would do the booking.) In a few situations, the available seats would vanish before the donor could do the booking, in which case the charity would need to try another airline or paid seat.
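The matching workflow just described can be sketched as a toy in-memory service. All class, method and field names here are invented for illustration:

```python
class MilesExchange:
    """Toy matcher for the proposed donation site.

    Charities post itineraries they found bookable with miles; the
    site notifies every donor with a pledge on that airline, and the
    first donor to claim makes the booking.
    """

    def __init__(self):
        self.pledges = {}   # airline -> list of donor emails
        self.requests = []  # open (charity, airline, itinerary) tuples

    def pledge(self, donor, airline):
        self.pledges.setdefault(airline, []).append(donor)

    def request(self, charity, airline, itinerary):
        """Record a request and return the donors who would be mailed."""
        self.requests.append((charity, airline, itinerary))
        return list(self.pledges.get(airline, []))

    def claim(self, donor, itinerary):
        """First claim wins; later claims find the request gone."""
        for i, (charity, airline, itin) in enumerate(self.requests):
            if itin == itinerary:
                del self.requests[i]
                return True
        return False
```

The first-claim rule handles the race between several mailed donors; a request that nobody claims before seats vanish is simply re-posted against another airline.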
Donors could specify what they would donate, whether they are willing to buy upgrades or business class tickets (probably not) and so on.
Now it turns out that while the donor can’t accept money for the miles, the charity might be able to. Oftentimes non-profit representatives travel for things like speaking engagements where the host has a travel budget. Some hosts would probably be happy to cover something other than airfare, such as other travel expenses, or a speaking honorarium with the money. In this case, the charity would actually gain real money for the donation, a win for all — except the airline. But in the case of the airline, we are talking about revenue it would have lost if the donor had used the miles for a flight for themselves or an associate. So the real question is whether the airline can be indignant about having miles that would have gone unused suddenly find a useful home.
Now it’s true the booking interfaces on the airline sites are not great, but they are improving. And some employee of the non-profit would need to have an account, possibly even one with enough miles, just to test what flights are available — but many charities will have someone who does.
Would the airlines try to stop it? I doubt it, because this would never be that big, and they would be seen as pretty nasty going after something that benefits charities.
Miles could also be used for hotel stays and other travel items.
Submitted by brad on Sat, 2007-04-21 00:38.
An eBay reputation is important if you’re going to sell there. Research shows it adds a decent amount to the price, and it’s very difficult to sell at all with just a few feedbacks. Usually sellers will buy a few items first to get decent feedback — sometimes even scam items bought just for the feedback. Because savvy buyers insist on feedback earned as a seller, that’s harder to fake, though some sellers will sell bogus items just to earn seller feedback. eBay has considered offering a feedback score based on the dollar volume of positive and negative transactions but has not yet done this; some third-party plugins will compute it.
One thing I recommend to low-feedback sellers is to offer to reverse the “normal” payment system. If the seller has little feedback and the buyer has much better feedback, the seller should send the item without payment, and the buyer pay on receipt. Many people find this foreign but in fact it makes perfect sense. In real stores you don’t pay until you get the item, and many big-reputation merchants allow payment on credit for known buyers. Another idea is to offer to pay for escrow. This costs money, but will make it back in higher sale prices.
However, here’s a new idea. Allow high-reputation sellers to “lease out” feedback, effectively acting as a co-signer. This means they vouch for the brand new seller. If the new seller gets a negative feedback on the transaction, it goes on both the new seller’s feedback and the guarantor’s. Positive feedback goes on the seller and possibly into a special bucket on the guarantor’s. The guarantor would also get to be involved in any disputes.
Seems risky, and because of that, guarantors would only do this for people they trusted well, or who paid them a juicy bond, which is the whole point of the idea. Guarantors would probably use bonds to issue refunds to badly treated customers to avoid a negative, though you want to be careful about blackmail risks. It’s possible the breakdown of true and as-guarantor negatives might be visible on a guarantor if you look deep, but the idea is the guarantor should be strongly motivated to keep the new seller in line.
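The accounting rule is simple enough to sketch: negatives land on both the new seller and the guarantor, while positives land on the seller and a separate bucket on the guarantor. A minimal model, with all field and function names made up for illustration:

```python
class Account:
    """Toy eBay-style account; field names are hypothetical."""

    def __init__(self, name):
        self.name = name
        self.positives = 0
        self.negatives = 0
        self.vouched_positives = 0  # special bucket: positives earned by sellers we vouch for
        self.vouched_negatives = 0  # the as-guarantor breakdown, visible if you look deep
        self.guarantor = None       # set when a high-reputation seller co-signs


def record_feedback(seller, positive):
    """Apply one feedback to a seller and, if co-signed, to the guarantor.
    Negatives hit the guarantor's main score; positives only fill a side bucket."""
    if positive:
        seller.positives += 1
        if seller.guarantor:
            seller.guarantor.vouched_positives += 1
    else:
        seller.negatives += 1
        if seller.guarantor:
            seller.guarantor.negatives += 1
            seller.guarantor.vouched_negatives += 1
```

The asymmetry is the point of the design: a guarantor’s headline score can only be hurt by a vouched-for seller, never inflated by one, so the guarantor has every incentive to keep the new seller in line.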
With lendable reputation, new sellers could start pleasing customers and competing from day one.
Submitted by brad on Tue, 2007-04-17 17:36.
Yesterday I attended the online community session of Web2Open, a barcamp-like meeting going on within Tim O’Reilly’s Web 2.0 Expo. (The Expo has a huge number of attendees, it’s doing very well.)
I put forward a number of questions I’ve been considering for later posts, but one I want to raise here is this: Where has the innovation been in online discussion software? Why are most message boards and blog comment systems so hard to use?
I know this is true because huge numbers of people are still using USENET, and not just for downloading binaries. USENET hasn’t seen much technical innovation since the 80s. As such, it’s aging, but it shouldn’t be simply aging, it should have been superseded long ago. We’ve gone through a period of tremendous online innovation in the last few decades, unlike any in history. Other old systems, like the Well, continue to exist and even keep paying customers in spite of minimal innovation. This is like Gopher beating Firefox, or a CD Walkman being superior in some ways to an iPod. It’s crazy. (The users aren’t crazy; what’s crazy is that their choice is the right one.)
Submitted by brad on Sun, 2007-04-15 16:45.
The use of virtual machines is getting very popular in the web hosting world. Particularly exciting to many people is Amazon.com’s EC2, which stands for Elastic Compute Cloud. It’s a large pool of virtual machines that you can rent by the hour. I know people planning to base whole companies on this system, because they can build an application that scales up by adding more virtual machines on demand. It’s decently priced, and in most cases a lot cheaper than building it yourself.
In many ways, something like EC2 would be great for all those web sites which deal with the “slashdot” effect. I hope to see web hosters, servers and web applications just naturally allow scaling through the addition of extra machines. This typically means either some round-robin-DNS, or a master server that does redirects to a pool of servers, or a master cache that processes the data from a pool of servers, or a few other methods. Dealing with persistent state that can’t be kept in cookies requires a shared database among all the servers, which may make the database the limiting factor. Rumours suggest Amazon will release an SQL interface to their internal storage system which presumably is highly scalable, solving that problem.
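The simplest of the load-spreading methods above — a master that sends each request to the next server in the pool — is just round-robin selection. A sketch, with made-up server names (a real deployment would issue HTTP 302 redirects or rotate DNS A records rather than build URLs like this):

```python
import itertools


class RedirectMaster:
    """Toy master server that spreads requests round-robin over a pool.
    Hostnames are hypothetical; this only shows the selection logic."""

    def __init__(self, pool):
        # cycle() endlessly repeats the pool in order: web1, web2, web1, ...
        self._cycle = itertools.cycle(pool)

    def pick(self):
        return next(self._cycle)

    def redirect_url(self, path):
        # In practice this URL would be returned as an HTTP 302 Location header.
        return "http://%s%s" % (self.pick(), path)
```

Note this sketch is stateless on purpose: as the text says, any persistent state that can’t live in cookies has to sit in a database shared by the whole pool, which is why the database becomes the limiting factor.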
As noted, this would be great for small to medium web sites. They can mostly run on a single server, but if they ever see a giant burst of traffic, for example by being linked to from a highly popular site, they can in minutes bring up extra servers to share the load. I’ve suggested this approach for the Battlestar Galactica Wiki I’ve been using — normally their load is modest, but while the show is on, each week, predictably, they get such a huge load of traffic when the show actually airs that they have to lock the wiki down. They have tried to solve this the old-fashioned way — buying bigger servers — but that’s a waste when they really just need one day a week, 22 weeks a year, of high capacity.
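A back-of-the-envelope calculation shows why renting burst capacity wins here. EC2’s launch price was $0.10 per instance-hour; the other figures below (instance count, burst length) are illustrative assumptions, not the wiki’s real numbers:

```python
# Hypothetical burst-capacity cost, for illustration only.
ec2_hourly = 0.10      # EC2 small-instance launch price, USD per hour
extra_instances = 5    # assumed burst servers brought up around airtime
burst_hours = 24       # one day of high capacity per episode
episodes_per_year = 22

burst_cost = ec2_hourly * extra_instances * burst_hours * episodes_per_year
print("Yearly burst cost: $%.2f" % burst_cost)
```

Even with generous assumptions that comes to a few hundred dollars a year — far less than buying and hosting a bigger server that sits mostly idle.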
However, I digress. What I really want to talk about is using such systems to get access to all sorts of platforms. As I’ve noted before, Linux is a huge mishmash of platforms. There are many revisions of Ubuntu, Fedora, SuSE, Debian, Gentoo and many others out there: not just the current release, but all the past releases, in stable, testing and unstable branches. On top of that there are many versions of the BSD variants.