Submitted by brad on Mon, 2007-06-11 14:39.
Yesterday, I wrote about election goals. Today I want to talk about one of the sub-goals, the non-provable ballot, because I am running into more people who argue it should be abandoned in favour of other goals. Indeed, they argue, it has already been abandoned.
As I noted, our primary goal is that voters cast their true desire, independent of outside pressure. If voters can’t demonstrate convincingly how they voted (or indeed if it’s easy to lie) then they can say one thing to those pressuring them and vote another way without fear of consequences. This is sometimes called “secret ballot” but in fact that consists of two different types of secrecy.
The call to give this up is compelling. We can publish, to everybody, copies of all the ballots cast — for example, on the net. Thus anybody can add up the ballots and feel convinced the counts are correct, and anybody can look and find their own ballot in the pool and be sure their vote was counted. If only a modest number of random people take the time to find their ballot in the published pool, we can be highly confident that no significant number of ballots have gone uncounted, altered or miscounted. It becomes impossible to steal a ballot box or program a machine not to count a vote. It’s still possible to add extra ballots — such as the classic Chicago dead voters, though with enough checking even this can be noticed by the public if it’s done in one place.
The result is a very well verified election, and one the public feels good about. No voter need have any doubt their vote was counted, or that any votes were altered, miscounted, lost or stolen. This concept of “transparency” has much to recommend it.
Further, it is argued, many jurisdictions long ago gave up on unprovable ballots when they allowed vote by mail. The state of Oregon votes entirely by mail, making it trivial to sell your ballot or be pushed into showing it to your spouse. While some jurisdictions only allow limited vote by mail for people who really can’t get to the polls, some allow it upon request. In California, up to 40% of voters are taking advantage of this.
Having given up the unprovable ballot, why should we not claim all the advantages the published ballot can give us? Note that the published ballots need not have names on them. One can give voters a receipt that will let them find their true ballot but not let anybody who hasn’t seen the receipt look up any individual’s vote. So disclosure can still be optional.
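To make the receipt idea concrete, here is a toy sketch in Python of a published pool with private lookup. The scheme and all the names in it are illustrative assumptions, not a real voting protocol, and note that it is provable in exactly the way discussed above: a voter who shows the receipt proves their vote.

```python
import hashlib
import secrets

def cast_ballot(choice):
    """Record a ballot; return (public record, private receipt).
    The receipt is a random salt known only to the voter."""
    salt = secrets.token_hex(16)
    tag = hashlib.sha256((salt + choice).encode()).hexdigest()
    return (tag, choice), salt

def find_my_ballot(pool, receipt, choice):
    """A voter re-derives their tag and checks the published pool."""
    tag = hashlib.sha256((receipt + choice).encode()).hexdigest()
    return (tag, choice) in pool

def tally(pool):
    """Anybody can add up the published ballots."""
    counts = {}
    for _, choice in pool:
        counts[choice] = counts.get(choice, 0) + 1
    return counts
```

Anybody can verify the tally from the public pool, but only a holder of the receipt can find a particular ballot — which also means anybody the voter shows the receipt to can verify the vote, the provability problem in a nutshell.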
Submitted by brad on Sun, 2007-06-10 11:02.
This week I was approached by two different groups seeking to build better voting systems, something I talk about here in my new democracy topic. The discussions quickly got into all the various goals we have for voting systems, and I did some more thinking I want to express here, but I want to start by talking about the goals. Then shortly I will talk about the one goal both systems wanted to abandon, namely the inability to prove how you voted.
Many of the goals we talk about are actually sub-goals of the core high-level goals I will outline here. The challenge comes because every system yet proposed has to trade off one goal against another. This forces us to examine these goals and see which ones we care about more.
The main goals, as I break them out, are: Accuracy, Independence, Enfranchisement, Confidence and Cost. I seek input on refining these goals, though I realize there will be some overlap.
Submitted by brad on Fri, 2007-06-08 14:43.
For many of us, E-mail has become our most fundamental tool. It is not just the way we communicate with friends and colleagues, it is the way that a large chunk of the tasks on our “to do” lists and calendars arrive. Of course, many E-mail programs like Outlook come integrated with a calendar program and a to-do list, but the integration is marginal at best. (Integration with the contact manager/address book is usually the top priority.)
If you’re like me you have a nasty habit. You leave messages in your inbox that you need to deal with if you can’t resolve them with a quick reply when you read them. And then those messages often drift down in the box, off the first screen. As a result, they are dealt with much later or not at all. With luck the person mails you again to remind you of the pending task.
There are many time management systems and philosophies out there, of course. A common theme is to manage your to-do list and calendar well, and to understand what you will do and not do, and when you will do it if not right away.
I think it’s time to integrate our time management concepts with our E-mail. To realize that a large number of emails or threads are also a task, and should be bound together with the time manager’s concept of a task.
For example, one way to “file” an E-mail would be to the calendar or a day oriented to-do list. You might take an E-mail and say, “I need 20 minutes to do this by Friday” or “I’ll do this after my meeting with the boss tomorrow.” The task would be tied to the E-mail. Most often, the tasks would not be tied to a specific time the way calendar entries are, but would just be given a rough block of time within a rough window of hours or days.
It would be useful to add these “when to do it” attributes to E-mails, because now delegating a task to somebody else can be as simple as forwarding the E-mail-message-as-task to them.
In fact, because, as I have noted, I like calendars with free-form input (ie. saying “Lunch with Peter 1pm tomorrow” and having the calender understand exactly what to do with it) it makes sense to consider the E-mail window as a primary means of input to the calendar. For example, one might add calendar entries by emailing them to a special address that is processed by the calendar. (That’s a useful idea for any calendar, even one not tied at all to the E-mail program.)
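As a sketch of how little machinery free-form input needs for the easy cases, here is a toy parser for entries like the one above. The tiny grammar it accepts is my own invention for illustration:

```python
import re

def parse_entry(text):
    """Pull a time and relative day out of free-form calendar text.
    Only a tiny grammar is handled: '<title> <H>(am|pm) (today|tomorrow)'."""
    m = re.match(r"(.+?)\s+(\d{1,2})(am|pm)\s+(today|tomorrow)$",
                 text.strip(), re.I)
    if not m:
        return None  # a real system would fall through to richer parsing
    title, hour, ampm, day = m.groups()
    hour = int(hour) % 12 + (12 if ampm.lower() == "pm" else 0)
    return {"title": title, "hour": hour, "day": day.lower()}
```

A real free-form parser would of course handle dates, durations and locations, but even this much covers the "Lunch with Peter 1pm tomorrow" case.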
One should also be able to assign tasks to places (a concept from the “Getting Things Done” book I have had recommended to me.) In this case, items that will be done when one is shopping, or going out to a specific meeting, could be synced or sent appropriately to one’s mobile device, but all with the E-mail metaphor.
Because there are different philosophies of time management, all with their fans, one monolithic e-mail/time/calendar/todo program may not be the perfect answer. A plug-in architecture that lets time managers integrate nicely with E-mail could be a better way to do it.
Some of these concepts apply to the shared calendar concepts I wrote about last month.
Submitted by brad on Wed, 2007-06-06 20:19.
Even people outside of California have heard about proposition 13, the tax-revolt referendum which, exactly 29 years ago, changed the property tax law so that your property taxes only go up marginally while you own a property. Your tax basis remains fixed at the price you paid for your house, with minor increments. If you sell and buy a house of similar value (or inherit, in many cases) your tax basis and tax bill can jump alarmingly.
The goal of Prop 13 was that people would not find themselves with a tax bill they couldn’t handle just because soaring real estate values doubled or tripled the price of their home, as has often taken place in California. (Yes, I can hear your tears of sympathy.) In particular older people living off savings were sometimes forced to leave, always unpopular.
However, there have been negative consequences. One, it has stopped tax revenues from rising as fast as the counties like, resulting in underfunding of schools and other public programs. (This could be fixed by jacking up the rates even more on more recent buyers of homes but that has its own problems.)
Two, it generates a highly inequitable situation. Two identical families living in two identical houses — but one has a tax bill of $4,000 per year and the other has a tax bill of $15,000 per year, based entirely on when they bought or inherited their house. I would think this is unconstitutional but the courts said it is not.
Three, it’s an impediment to moving (as if the realtor monopoly’s 6% scam were not enough.) There are exemptions in most counties for moves within California by seniors.
Here’s my fix: Each house would, as in most jurisdictions, be fairly appraised, and receive a tax bill based on that. Two identical houses — same tax bill. However, those who had a low basis value in their home could elect to defer some of that bill (ie. the difference between the real bill and their base bill derived from the price they paid for their home) until they sold the home. There would be interest on this unpaid amount, in effect they would be borrowing against the future equity of the home in order to have a lower tax bill.
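Using the $4,000 and $15,000 figures from above, a toy calculation of the deferred balance might look like this. The interest rate and annual compounding scheme are illustrative assumptions, not part of the proposal:

```python
def deferred_balance(full_bill, base_bill, rate, years):
    """Accumulate the deferred portion of the tax bill with compound
    interest, to be settled when the home is sold.
    full_bill: tax at fair appraisal; base_bill: tax at purchase basis."""
    deferred = full_bill - base_bill
    balance = 0.0
    for _ in range(years):
        balance = (balance + deferred) * (1 + rate)
    return balance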
Submitted by brad on Mon, 2007-06-04 11:01.
Here’s a new approach to linux adoption. Create a linux distro which converts a Windows machine to linux, marketed as a way to solve many of your virus/malware/phishing woes.
Yes, for a long time linux distros have installed themselves on top of a Windows machine dual-boot. And there are distros that can run in a VM on Windows, or look Windows-like, but here’s a set of steps to go much further, thanks to how cheap disk space is today.
- Yes, the distro keeps the Windows install around dual boot, but it also builds a virtual machine so it can be run under linux. Of course hardware drivers differ when running under a VM, so this is non-trivial, and Windows XP and later will claim they are stolen if they wake up in different hardware. You may have to call Microsoft, which they may eventually try to stop.
- Look through the Windows copy and see what apps are installed. For apps that migrate well to linux, either because they have equivalents or run at silver or gold level under Wine, move them into linux. Extract their settings and files and move those into the linux environment. Of course this is easiest to do when you have something like Firefox as the browser, but IE settings and bookmarks can also be imported.
- Examine the windows registry for other OS settings, desktop behaviours etc. Import them into a windows-like linux desktop. Ideally when it boots up, the user will see it looking and feeling a lot like their windows environment.
- Using remote window protocols, it’s possible to run windows programs in a virtual machine with their window on the X desktop. Try this for some apps, though understand some things like inter-program communication may not do as well.
- Next, offer programs directly in the virtual machine as another desktop. Put the windows programs on the windows-like “start” menu, but have them fire up the program in the virtual machine, or possibly even fire up the VM as needed. Again, memory is getting very cheap.
- Strongly encourage the Windows VM be operated in a checkpointing manner, where it is regularly reverted to a base state, if this is possible.
- The linux box, sitting outside the windows VM, can examine its TCP traffic to check for possible infections or strange traffic to unusual sites. A database like the siteadvisor one can help spot these unusual things, and encourage restoring the windows box back to a safe checkpoint.
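The app-migration step in the list above could start from a simple classification pass. Here is a toy sketch, with a made-up compatibility table standing in for a real database of Wine ratings and linux equivalents:

```python
# Illustrative app lists -- not a real compatibility database.
NATIVE_EQUIVALENT = {"Firefox": "firefox",
                     "Internet Explorer": "firefox",
                     "Microsoft Office": "openoffice.org"}
RUNS_WELL_IN_WINE = {"Winamp"}

def plan_migration(installed_apps):
    """Decide, for each Windows app found, whether to use a native
    linux equivalent, run it under Wine, or leave it in the VM."""
    plan = {}
    for app in installed_apps:
        if app in NATIVE_EQUIVALENT:
            plan[app] = ("native", NATIVE_EQUIVALENT[app])
        elif app in RUNS_WELL_IN_WINE:
            plan[app] = ("wine", app)
        else:
            plan[app] = ("vm", app)  # fall back to the Windows VM
    return plan
```

The interesting work, of course, is in the settings extraction for each migrated app; this sketch only shows the triage.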
Submitted by brad on Mon, 2007-06-04 00:20.
You’ve all seen it many times. You hit the ‘back’ button and the browser tells you it has to resubmit a form, which may be dangerous, in order to go back. A lot of the blame for this, I presume, lies with sites not setting suitable cache TTLs on pages served by forms, but I think we could be providing more information here, even with an accurate cache note.
I suggest that when responding to a form POST, the HTTP response should be able to indicate how safe it is to re-post the form, effectively based on what side-effects (other than returning a web page) posting the form had. There are forms that are totally safe to re-POST, and the browser need not ask the user about it, instead treating them more like they do a GET.
(Truth be told, the browser should not really treat GET and POST differently, my proposed header would be a better way to do it on both of them.)
The page could report that the side effects are major (like completing a purchase, or launching an ICBM) and thus that re-posting should be strongly warned against. The best way to do this would be a string, contained in the header or in the HTML, so the browser can say, for example, “This requires resubmitting the form, which will <the stated side effect>.”
This is, as noted, independent of whether the results will be the same, which is what the cache is for. A form that loads a webcam has no side effects, but returns a different result every time that should not be cached.
We could also add some information on the Request, telling the form that it has been re-posted from saved values rather than explicit user input. It might then decide what to do. This becomes important when the user has re-posted without having received a full response from the server due to an interruption or re-load. That way the server can know this happened and possibly get a pointer to the prior attempt.
In addition, I would not mind if the query on the back button about form repost offered me the ability to just see the expired cache material, since I may not want the delay of a re-post.
With this strategy in mind, it also becomes easier to create the deep bookmarks I wrote of earlier, with less chance for error.
Some possible levels of side-effects could be None, Minor, Major and Forbidden. The tag could also appear as an HTML attribute to the form itself, but then it can’t reveal things that can only be calculated after posting, such as certain side effects.
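Here is a sketch of what the proposal might look like in practice. The header name, level names and browser policies are all illustrative; nothing here is an existing standard:

```python
def side_effect_header(level, description=""):
    """Build the hypothetical response header a form handler would emit."""
    value = level
    if description:
        value += '; msg="%s"' % description
    return ("Form-Side-Effects", value)

def browser_repost_policy(header_value):
    """What a browser might do on 'back' given the header."""
    level = header_value.split(";")[0].strip()
    return {"none": "repost silently",
            "minor": "repost silently",
            "major": "warn the user",
            "forbidden": "refuse and show cached page"}.get(level,
                                                            "ask the user")
```

A browser could splice the `msg` string into its warning dialog, so the user sees what resubmitting will actually do rather than a generic scare.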
Submitted by brad on Sun, 2007-06-03 20:27.
In a chat I had recently with another communications geek, we talked about the well known problem of videoconferencing systems. You look at a person on the screen, and the camera is not where you are looking, so eye contact is not possible.
There have been a few solutions tried for this. You can have a display with a beam-splitting mirror that allows a camera to see a well lit subject, at some cost of quality of the image. You still need to keep the camera on the eyes. There has been some experimentation with software that would have cameras at the left and right of the screen and combine the two images to make one from a virtual camera at the eye point, or sometimes more simply to rewrite the image of the eye to move the pupil to the right place. That turns out to be hard to do because we are very discerning about eyes looking “natural” though it may become possible.
Another approach has been semi-transparent displays a camera can look through, but we like our displays to be crisp and bright. A decade ago I saw guys claiming they could build a display that could focus light without a lens, so each cell could have a sensor, but I have not seen anything come of this. In the end, most people try to place the camera near the top of the screen, and the image right under it.
Having the image under the camera makes the person look like they are looking down. Some women perceive this as something else they frequently see: men staring at their chests when they talk to them. Yes, we’re pretty much all guilty of this.
So I came up with an amusing, not entirely serious answer, namely to put the camera below the image and then, for men at least, stare at her chest, or an imaginary one below the edge of the screen. Then you would be looking at the camera and thus at the other person.
Amusingly, when videophones are shown on TV, we almost always see the people staring right into them, because they are TV actors who know how to find their camera.
Submitted by brad on Sat, 2007-06-02 11:34.
Ok, I couldn’t resist. If this makes no sense to you, sorry, explaining isn’t going to make it funny. Look up lolcats.
Thanks to David Farrar for the original ICANN board picture.
Submitted by brad on Wed, 2007-05-30 11:32.
I wrote recently about the paradox of identity management and how the easier it is to offer information, the more often it will be exchanged.
To address some of these issues, let me propose something different: The creation of an infrastructure that allows people to generate secure (effectively anonymous) pseudonyms in a manner that each person can have at most one such ID. (There would be various classes of these IDs, so people could have many IDs, but only one of each class.) I’ll call this a QID (the Q “standing” for “unique.”)
The value of a unique ID is strong — it allows one to associate a reputation with the ID. Because you can only get one QID, you are motivated to carefully protect the reputation associated with it, just as you are motivated to protect the reputation on your “real” identity. With most anonymous systems, if you develop a negative reputation, you can simply discard the bad ID and get a new one which has no reputation. That’s annoying but better than using a negative ID. (Nobody on eBay keeps an account that gets a truly negative reputation. An account is abandoned as soon as the reputation seems worse than an empty reputation.) In effect, anonymous IDs let you demonstrate a good reputation. Unique IDs let you demonstrate you don’t have a negative reputation. In some cases systems try to stop this by making it cost money or effort to generate a new ID, but it’s a hard problem. Anti-spam efforts don’t really care about who you are, they just want to know that if they ban you for being a spammer, you stay banned. (For this reason many anti-spam crusaders currently desire identification of all mailers, often with an identity tied to a real world ID.)
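As a toy illustration of the “at most one ID per class” property, one could derive each QID deterministically from a single registered secret; since the derivation has no randomness, a person can only ever produce one QID per class. This is only a sketch of the property, not a workable design: a real system would need blind issuance so the registrar cannot link QIDs back to the person, which this skips entirely.

```python
import hashlib
import hmac

def qid(master_secret, id_class):
    """Derive a deterministic pseudonym for one class of use.
    master_secret: the one secret a person registers (bytes).
    Deterministic derivation means one QID per class, ever."""
    return hmac.new(master_secret, id_class.encode(),
                    hashlib.sha256).hexdigest()
```

Because the QID is stable, any reputation attached to it sticks; you cannot shed a bad forum reputation by re-deriving, only by somehow obtaining a second master secret, which the registration step would prevent.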
I propose this because many web sites and services which demand accounts really don’t care who you are or what your E-mail address is. In many cases they care about much simpler things — such as whether you are creating a raft of different accounts to appear as more than one person, or whether you will suffer negative consequences for negative actions. To solve these problems there is no need to provide personal information to use such systems.
Submitted by brad on Tue, 2007-05-29 14:02.
I’ve just returned from the 25th reunion of my graduating class in Mathematics at the University of Waterloo. I had always imagined that a 25th reunion would be the “big one” so I went. In addition, while I found myself to have little in common with my high school classmates, even having spent 13 years growing up with many of them, like many techie people I found my true community at university, so I wanted to see them again. To top it off, it was the 40th anniversary of the faculty and the 50th anniversary of the university itself.
But what if they had a reunion and nobody came? Or rather, out of a class of several hundred, under 20 came, many of whom I only barely remembered and none of whom I was close to?
Submitted by brad on Fri, 2007-05-18 14:41.
In 2005, John Scalzi burst on the scene with a remarkable first novel, Old Man’s War. It got nominated for a Hugo and won him the Campbell award for best new writer. Many felt it was the sort of novel Heinlein might be writing today. That might be too high praise, but it’s close. The third book in this trilogy has just come out, so it was time to review the set.
It’s hard to review the book without some spoilers, and impossible for me to review the latter two books without spoiling the first, but I’ll warn you when that’s going to happen.
OMW tells the story of John Perry, a 75 year old man living on an Earth only a bit more advanced than our own, but it’s hundreds of years in the future. Earth people know they’re part of a collection of human colonies which does battle with nasty aliens, but they are kept in the dark about the realities. People in the third world are offered one-way trips to join colonies. People in the first world can, when they turn 75, sign up for the colonial military, again a one-way trip. It’s not a hard choice to make since everybody presumes the military will make them young again, and the alternative is ordinary death by old age.
The protagonist and his wife sign up, but she dies before the enlistment date, so he goes on his own. The first half of the book depicts his learning the reality of the colonial union, and boot camp, and the latter half outlines his experiences fighting against various nasty aliens.
It’s a highly recommended read. If you loved Starship Troopers or The Forever War this is your kind of book.
Now I’ll go into some minor spoilers.
Submitted by brad on Wed, 2007-05-16 16:34.
Since the dawn of the web, there has been a call for a “single sign-on” facility. The web consists of millions of independently operated web sites, many of which ask users to create “accounts” and sign-on to use the site. This is frustrating to users.
Today the general single sign-on concept has morphed into what is now called “digital identity management” and is considerably more complex. The most recent project of excitement is OpenID, which is a standard which allows users to log on using an identifier which can be the URL of an identity service, possibly even one they run themselves.
Many people view OpenID as positive for privacy because of what came before it. The first major single sign-on project was Microsoft Passport, which came under criticism both because all your data was managed by a single company and that single company was a fairly notorious monopoly. To counter that, the Liberty Alliance project was brewed by Sun, AOL and many other companies, offering a system not run by any single company. OpenID is simpler and even more decentralized, since the identity service can be run by anyone, including the user.
However, I feel many of the actors in this space are not considering an inherent paradox that surrounds the entire field of identity management. On the surface, privacy-conscious identity management puts control over who gets identity information in the hands of the user. You decide who to give identity info to, and when. Ideally, you can even revoke access, and push for minimal disclosure. Kim Cameron summarized a set of laws of identity outlining many of these principles.
In spite of these laws, one of the goals of most identity management systems has been ease of use. And who, on the surface, can argue with ease of use? Managing individual accounts at a thousand web sites is hard. Creating new accounts for every new web site is hard. We want something easier.
However, here is the contradiction. If you make something easy to do, it will be done more often. It’s hard to see how this could be otherwise. The easier it is to give somebody ID information, the more often it will be done. And the easier it is to give ID information, the more palatable it is to ask for, or demand it.
Submitted by brad on Wed, 2007-05-09 16:05.
In the 1980s, my brother Ty Templeton published his first independent comic book series, Stig’s Inferno. He went on to considerable fame writing and drawing comics for Marvel, D.C. and many others, including favourite characters like Superman, Batman and Spider-Man, as well as a lot of comics associated with TV shows like The Simpsons and Ren and Stimpy. But he’s still at his best doing original stuff.
You may not know it, but years ago I got most of Stig’s Inferno up on the web. Just this week however, a fan scanned in the final issue and I have converted it into web pages. The fan also scanned the covers and supplemental stories from the issues; those will be put up later.
So if you already enjoyed the other episodes journey now to Stig’s Inferno #7.
If you never looked, go to the main Stig’s Inferno page. You can also check out small versions of all the issue covers.
I’ll announce when the supplemental stories are added.
The comic tells a variation of Dante’s Inferno, where our hero Stig is killed by the creatures that live in his piano and makes a strange journey through the netherworld. It’s funny stuff, and I’m not just saying it because he’s my brother. Give it a read.
Submitted by brad on Mon, 2007-05-07 18:49.
First, let me introduce a new blog topic, Sysadmin where I will cover computer system administration and OS design issues, notably in Linux and related systems.
My goal is to reduce the nightmare that is system administration and upgrading.
One step that goes partway in my plan would be a special software system that would build for a user a specialized operating system “package” or set of packages. This magic package would, when applied to a virgin distribution of the operating system, convert it into the customized form that the user likes.
The program would work from a modified system, and a copy of a map (with timestamps and hashes) of the original virgin OS from which the user began. First, it would note what packages the user had installed, and declare dependencies for these packages. Thus, installing this magic package would cause the installation of all the packages the user likes, and all that they depend on.
In order to do this well, it would try to determine which packages the user actually used (with access or file change times) and perhaps consider making two different dependency setups — one for the core packages that are frequently used, and another for packages that were probably just tried and never used. A GUI to help users sort packages into those classes would be handy. It must also determine that those packages are still available, dealing with potential conflicts and name change concerns. Right now, most package managers insist that all dependencies be available or they will abort the entire install. To get around this, many of the packages might well be listed as “recommended” rather than required, or options to allow install of the package with missing 1st level (but not 2nd level) dependencies would be used.
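The split between required and “recommended” dependencies described above could be sketched like this. The package names and the used/unused test are illustrative; a real tool would read the package database and file timestamps:

```python
def build_magic_package(virgin_packages, current_packages, used_recently):
    """Split the user's additions into hard dependencies (actually used)
    and 'recommended' ones (installed but apparently never tried).
    All inputs are sets of package names."""
    added = current_packages - virgin_packages
    depends = sorted(added & used_recently)
    recommends = sorted(added - used_recently)
    return {"Depends": depends, "Recommends": recommends}
```

The output maps naturally onto a metapackage’s dependency fields, so installing the one magic package on a virgin system pulls in everything the user relied on, without aborting over a missing seldom-used extra.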
Submitted by brad on Sun, 2007-05-06 23:52.
At our new favourite Indian buffet (Cafe Bombay) they run Bollywood videos on big screens all the time. In Bollywood, as you probably know, everybody is dancing all the time, in wonderful synchronization, like Broadway but far more. I’ve never been to an Indian dance club to see if people try to do that in real life, but I suspect they want to.
I started musing about a future where brain implants let you give a computer control of your limbs so you could participate in such types of dance, but I realized we might be able to do something much sooner.
Envision either a special suit or a set of cuffs placed around various parts of the arms and legs. The cuffs would be able to send stimuli to the skin, possibly by vibrating or a mild electric current, or even the poke of a small actuator.
With these cuffs, we would develop a language of dance that people could learn. Dancers have long used Dance notation to record dances and communicate them, and more sophisticated systems are used to have computerized figures dance. (Motion capture is also used to record dances, and often to try to distill them to some form of encoding.) In this case, an association would be made between stimuli and moves. If you feel the poke on one part of your left wrist, you move your left arm in a certain way; a different set of pokes commands a different move. There would no doubt have to be chords (multiple stimulators on the same cuff) to signal more complex moves.
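A toy version of the stimulus-to-move mapping, including a chord, might look like this; all the codes and move names are made up for illustration:

```python
# Each cuff has a few stimulators; the set fired together (a "chord")
# selects a move. Codes and move names are invented for the sketch.
LEFT_WRIST_MOVES = {
    frozenset({1}): "raise left arm",
    frozenset({2}): "sweep left arm out",
    frozenset({1, 2}): "circle left arm overhead",  # a chord
}

def decode(cuff_moves, stimulators_fired):
    """Map the set of stimulators that fired to a move, else hold."""
    return cuff_moves.get(frozenset(stimulators_fired), "hold")
```

A full language would have one table per cuff, plus timing, but the principle is just this lookup done reflexively by a trained dancer.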
Next, people would have to train so that they develop an intuitive response, so that as soon as they feel a stimulus, they make the move. People with even modest dance skill of course learn to make moves as they are told them or as they see them, without having to consciously think about it a great deal. The finest dancers, as we have seen, can watch a choreographer dance and duplicate the moves with great grace due to their refined skill.
I imagine people might learn this language with something like a video game. We’ve already seen the popularity of Dance Dance Revolution (DDR) where people learn to make simple foot moves by seeing arrows on the screen. A more advanced game would send you a stimulus and test how quickly you make the move.
The result would be to become a sort of automaton. As the system fed you a dance, you would dance it. And more to the point, if it fed a room full of people a dance, they would all dance the same dance, in superb synchronization (at least for those of lower skill.) Normally this would all be coordinated with the music, though it could be done even without it. Dance partners could even be fed complementary moves. Indeed, very complex choreographies could be devised, combined with interesting music, to be done at dance clubs in moves that would go way beyond techno. I can see even simple moves, getting people to raise and move hands in patterns and syncs, being very interesting, and more to the point, fun to participate in.
In addition, this could be a method to train people in new and interesting dances. Once one danced a dance under remote control several times one would presumably then be able to do it without the cuffs, and perhaps more naturally. Just like learning a piece of music with the sheet music and eventually being able to take the music away.
I suspect the younger people were when they started this, the better they would be at it.
It could also have application in the professional arena, to bring a new member of a troupe up to speed, or for a dance to be communicated quickly. Even modest dancers might be able to perform a complex dance immediately. It could also possibly become a companion to Karaoke.
There are other means besides cuffs to communicate moves to people of course, including spoken commands into earphones (probably cheapest and easiest to put on) and visual commands (like DDR) into an eyeglass heads-up-display once they become cheap. The earphone approach might be good for initial experiments. One advantage of cuffs is the cuffs could contain accelerometers which track how the limb moved, and thus can confirm that the move was done correctly. This would be good in video game training mode. In fact, the cuffs could even provide feedback for the correct move, offering a stimulus if the move is off in time or position.
There have been some “use people as robots” experiments before, but let’s face it, dance is more fun. And an actual Bollywood movie could come to life.
Submitted by brad on Fri, 2007-05-04 18:38.
Self-driving cars are still some ways in the future, but there are some things they will want that human drivers can also make use of.
I think it would be nice if the urban data networks were to broadcast the upcoming schedule for traffic light changes in systems with synchronized traffic lights. Information like “The light at location X will go green westbound at 3:42:15.3, amber at 3:42:45.6 and red at 3:42:47.8” and so on. Data for all directions and for turn arrow lights etc. This could be broadcast on data networks, or actually even in modulations of the light from the LEDs in the traffic lights themselves (though you could not see that around turns and over hills.)
Now a simple device that could go in the car could be a heads-up-display (perhaps even just an audio tone) that tells you whether you are in the “zone” for a green light. As you move through the flow, if you were going so fast that you would get to the intersection too early for it to be green, it could show you in the too-fast zone with a blinking light or a tone that rises in pitch the faster you go. A green light (no tone) would appear when you were in the zone.
It would arrange for you to arrive at the light after it had been green for a second or two, to avoid the risk of hitting cars running the red light in the other direction. Sometimes when I drive down a street with timed lights I will find myself trusting the timing a bit too much, so I am blowing through the moment the light is green, which actually is a bit risky because of red light runners.
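A sketch of the “zone” computation, using the arrival margin just described. The units (metres, seconds) and the two-second margin are my assumptions:

```python
def green_zone(distance_m, now_s, green_start_s, green_end_s, margin_s=2.0):
    """Return the (min, max) speed in m/s that gets you to the light
    while it is green, arriving at least margin_s after it turns,
    to dodge red-light runners. None if no speed can make it."""
    earliest = green_start_s + margin_s
    if green_end_s <= earliest or green_end_s <= now_s:
        return None
    v_max = distance_m / max(earliest - now_s, 1e-9)  # no sooner than margin
    v_min = distance_m / (green_end_s - now_s)        # before green ends
    return (v_min, v_max)

def in_zone(speed, zone):
    """Is the current speed inside the green zone?"""
    return zone is not None and zone[0] <= speed <= zone[1]
```

The tone generator would simply compare your current speed against this range, rising in pitch as you push above `v_max`.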
(Perhaps the city puts in a longer all-red gap on such lights to deal with this?)
More controversial is the other direction: a tone telling you that you will need to speed up to catch this green before it goes amber. This might encourage people to drive recklessly fast, and might be a harder product to sell legally. Though perhaps it could tell you that speeding up to the limit would make the light, and stop telling you once no legal speed could make it. Of course, people would learn to figure it out.
We figure that out already of course. Many walk/don’t walk signs now have red light countdown timers, and how many of us have not sped up upon seeing the counter getting low? Perhaps this isn’t that dangerous. Just squeaking through a light rarely helps, of course, because the way the timing works you usually are even more likely to miss the next one, and you have to go even faster to make it — to the point that even a daredevil won’t try.
This simple device could be just the start. Knowledge of this data for the whole city (combined with a good GPS map system, of course) could advise you of alternate routes with better traffic light timing. It could advise you to turn if you’re first at a red light (which it will know thanks to GPS) and your destination is off to the right anyway. Of course it could do better still combined with real traffic data and information on construction, gridlock and so on.
This is not a cruise control; you would still control the gas. However, if you pressed too hard your alert would start making the tone, and you would soon learn it is quite unproductive to keep pressing. (You could make this a cruise control, but you need to be able to speed up sometimes to avoid things and change lanes.) More often people speed up and then have to brake and wait a short while for the green, which doesn’t get them there any faster, and makes for a jerky ride.
The system I describe could be a nice add-on for car GPS systems.
Submitted by brad on Fri, 2007-05-04 14:14.
Most browsers now have a search box in the toolbar, which is great, and like most people’s, mine defaults to Google. I can change the engine with a drop-down menu to other places, like Amazon, Wikipedia, IMDB, eBay, Yahoo and the like. But that switch changes the default, rather than making a temporary change — and I don’t want that; I want it to snap back to Google.
However, I’ve decided I want something even more. I’ll make a plea to somebody who knows how to do Firefox add-ons to make a plug-in so I can choose my search engine with some text in the query I type. In other words, if I go to the box (which defaults to Google) I could type “w: foobar” to search Wikipedia, “e: foobar” to search eBay and so on. Google in fact uses a keyword-and-colon syntax to trigger special searches, though it tends not to use one letter. If this bothers people, something else like a slash could be used. While it would not be needed, “g: foobar” would search on Google, so “g: w: foobar” would let you search for “w: foobar” on Google. The actual prefix strings could be set by the user, or offered by the XML that search engine entries are specified with.
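The dispatch logic such an add-on would need is tiny. A minimal sketch (in Python rather than an actual extension, and with an illustrative engine table and URL templates, not any real add-on's configuration):

```python
# Hypothetical prefix-dispatch sketch: split a leading "x: " prefix off
# the typed query and build the matching search URL, defaulting to Google.
from urllib.parse import quote_plus

ENGINES = {
    "w": "https://en.wikipedia.org/wiki/Special:Search?search={}",
    "e": "https://www.ebay.com/sch/i.html?_nkw={}",
    "g": "https://www.google.com/search?q={}",
}
DEFAULT = "g"

def dispatch(query):
    """Return the search URL for a query with an optional engine prefix."""
    prefix, sep, rest = query.partition(": ")
    if sep and prefix in ENGINES:
        return ENGINES[prefix].format(quote_plus(rest))
    return ENGINES[DEFAULT].format(quote_plus(query))
```

Note that “g: w: foobar” falls out naturally: only the first prefix is stripped, so the rest, “w: foobar”, goes to Google as a literal query.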
Why is this the right answer? It’s no accident that Google uses this. They know. Whatever your thoughts on the merits of command line interfaces and GUIs, things often get worse when you try to mix them. Once you have me typing on the keyboard, I should be able to set everything from the keyboard. I should not be forced to move back and forth from keyboard to pointing device if I care to learn the keyboard interface. You can have the GUI for people who don’t remember, but don’t make it be the only route.
What’s odd is that you can do this from the Location bar and not the search bar. In Firefox, go to any search engine, and right click on the search box. Select “Add a Keyword for this Search” and this lets you create a magic bookmark which you can stuff anywhere, whose real purpose is not to be a bookmark, but a keyword you can use to turn your URL box into a search box that is keyword driven.
You don’t really even need the search box, which makes me wonder why they did it this way.
Submitted by brad on Thu, 2007-05-03 18:03.
High posting volume today. I just find it remarkable that in the last 2 weeks I’ve seen several incredible breakthrough level stories on health and life extension.
Today sees this story on understanding how caloric restriction works, which will appear in Nature. We’ve been wondering about this for a while; obviously I’m not the sort of person who would have an easy time following caloric restriction myself. Some people have wondered if resveratrol might mimic the actions of CR, but this shows we’re coming to a much deeper understanding of it.
Yesterday I learned that we have misunderstood death, and in particular how to revive the recently dead. New research suggests that when the blood stops flowing, the cells go into a hibernation that might last for hours. They don’t die after 4 minutes of ischemia the way people have commonly thought. In fact, this theory suggests, the thing that kills patients we attempt to revive is the sudden inflow of oxygen we provide for revival. It seems to trigger a sort of “bug” in the mitochondria that sets off apoptosis. As we learn to restore oxygen in a way that doesn’t do this, especially at cool temperatures, it may be possible to revive the “dead” an hour later, which has all sorts of marvelous potential for both emergency care and cryonics.
Last week we were told of an absolutely astounding new drug which treats all sorts of genetic disorders. A pill curing all those things sounds like a miracle. It works by altering the ribosome so that it ignores certain errors in the DNA which normally make it abort, causing complete absence of an important protein. If the errors are minor, the slightly misconstructed protein is still able to do its job. As an analogy, this is like having parity memory and disabling the parity check in a computer. It turns out parity errors are quite rare, so most of the time this works fine. When a parity check fails, the whole computer often aborts, which is the right move at the global scale — you don’t want to risk corrupting data or not knowing of problems — but in a human being, aborting the entire person over a parity error is a bit extreme from the individual’s point of view.
These weren’t even all the big medical stories of the past week. There have been cancer treatments and more, along with a supercomputer approaching the power of a mouse brain.
Submitted by brad on Thu, 2007-05-03 13:28.
While I was at Tim O’Reilly’s Web 2.0 Expo, I did an interview with an online publication called Web Pro News. I personally prefer written text to video blogging, but for those who like to see video, you can check out:
Video Interview on Privacy and Web 2.0
The video quality is pretty good, if not the lighting.
The main focus was to remind people that as we return to timesharing, which is to say, move our data from desktop applications to web-based applications, we must be aware that putting our private data in the hands of third parties gives it less constitutional protection. We’re effectively erasing the 4th Amendment.
I also hint at an essay I am preparing on the evils of user-controlled identity management software. And I give my usual rant about thinking how you would design software if you were living in China or Saudi Arabia.
I also was interviewed some time ago about Google and other issues by a French/German channel. That’s a 90-minute program entitled Faut-il avoir peur de Google ? (Should we fear Google?). It’s also available in German. It was up for free when I watched it, but it may now require payment. (I only appear for a few minutes, my voice dubbed over.)
When I was interviewed for this I offered to, with some help, speak in French. I am told I have a pretty decent accent, though I no longer have the vocabulary to speak conversationally in French. I thought it would be interesting if they helped me translate and then I spoke my words in French (perhaps even dubbing myself later if need be.) They were not interested since they also had to do German.
Another video interview by a young French documentarian producing a show called Mix-Age Beta can be found here. The lighting isn’t good, but this time it’s in English. It’s done under the palm tree in my back yard.
Submitted by brad on Thu, 2007-05-03 12:36.
I wasn’t going to make any special commemoration, but it seems a whole ton of other blogs are linking today to my articles on the history of Spam, so I should blog them as well.
Many years ago I got interested in the origins of the term “spam” to mean net abuse. I mean I had lived through most of its origin and seen most of the early spams myself, but it wasn’t clear why people took the name of the meat product and applied it to junk mail. I knew it came from USENET, so I used the USENET search engines to trace the origins.
This resulted in my article on the origins of the word spam to mean net abuse.
In doing the research, I was pointed to what was probably the earliest internet spam, though it far predates the term.
I documented that in Reactions to the first spam.
Four years ago, on the 25th anniversary of that spam, I was interviewed on NPR’s All Things Considered and wrote an article reflecting on the history. For that article I tracked down Gary Thuerk, the sender of that first spam, and interviewed him for more details.
You can read that in Reflections on the 25th anniversary of Spam.
Of course, you can find all these and many more in my collection of articles on Spam. Many years ago I wrote a wide variety of essays on the spam problem. Not simply about solutions, but analysis of why the fight was so nasty, and concern over the rights people were willing to give up in the name of fighting spam.
I will probably update them, and do some more research for the 30th anniversary, next year.