Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.
This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see menu on right), and some are sub-blogs, like Robocars, photography and Going Green. Try my home page for more info and contact data.
Submitted by brad on Thu, 2005-05-19 09:30.
I shoot with an SLR, and all lenses need a rear lens cap when not on the camera. Every SLR shooter knows the three-handed ritual. (Four-handed if the camera's not on a strap.) You take one lens off the camera. You pick another lens and remove the rear cap from it. Holding the old lens, new lens, rear cap and camera, you put the new lens on the camera, then put the rear cap on the old lens. (Or you put the cap on the old lens first, set it down and put the new lens on the camera.)
Anyway, a simple invention I have already built is a doubleheaded rear lens cap, namely two lens caps glued together. Custom-built, it would be a lot smaller and would solve some of the problems I have experienced.
With the doubleheader, you can take your lens off the camera and put it immediately onto the open end of the doubleheader cap on the new lens. Then with a twist you remove the new lens from the resulting docked lens pair, and put it on the camera. In theory one less hand or less dexterity.
However, the catch is that the docked configuration tightens both lenses as you twist one way and loosens both as you twist the other. So you must master the art of making sure the lens you want comes loose.
How this works varies from lens to lens and how well it fits the rear cap. Sometimes pressing them both together causes one to undo reliably. The most reliable trick is to grab the old lens around the rear neck so you can get a finger on the cap, and then pull the new lens off.
It seems one might be able to design ways to make this more reliable, such as a small flange on the cap to hold with your finger to make sure of what twists off, or a ratcheting twist-off that requires a release button.
If both become equally loose when you untwist, then gravity will help you in that the cap will stay on the lower lens. You must later twist it back on tight. I think the ideal motion would be to twist on so both are tight, then either hold the cap or release a ratchet so only the lens you want comes off without loosening the old lens.
Submitted by brad on Thu, 2005-05-12 05:50.
There have been many efforts at internet "identity" systems, such as Microsoft Passport, Liberty Alliance, and a variety of others. A recent conference on the subject was held in SF; I didn't go, but I thought it was time to put forward one important idea.
Also, sometimes something goes into a server because business rules demand it. You can only make money from it as a service you sell, so you build it that way.
Submitted by brad on Mon, 2005-05-09 07:48.
I've written before about the dichotomies between serial and browseable, between writer-friendly and reader-friendly.
One idea that now seems obvious is to integrate wiki functions into a mailing list manager (particularly one that does a web interface to the mailing list.)
In particular, one should be able to "cc" a message to sections of the wiki and have it added. For example, to an FAQ section. In addition, readers of a message should be able to promote it into sections of the wiki either by clicking links in the HTML version of the message, or by forwarding the message back to some magic addresses at the mailing list manager.
Thus when somebody on a mailing list makes a useful answer to a question, it could go quickly into a wiki style knowledge base, for easier browsing and searching. Many mailing lists today allow you to search the list archives, but unless you know your vocabulary, you may not find the answer to problems you are trying to solve, even though they exist there.
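A minimal sketch of the "cc to the wiki" routing described above. Everything here is hypothetical: the section names, the address scheme, and the in-memory wiki store. A real list manager would also verify the sender is a list member before filing anything.

```python
# Sketch: a message cc'd to a section address (e.g. faq@wiki.example.org)
# gets filed into that wiki section. All names here are invented.

WIKI_SECTIONS = {"faq", "howto", "glossary"}  # hypothetical section list

def route_message(to_addrs, subject, body, wiki):
    """File the message under every wiki section named in the address list.

    wiki is a dict mapping section name -> list of (subject, body) entries.
    Returns the list of sections the message was filed under.
    """
    filed = []
    for addr in to_addrs:
        local = addr.split("@", 1)[0].lower()  # part before the @
        if local in WIKI_SECTIONS:
            wiki.setdefault(local, []).append((subject, body))
            filed.append(local)
    return filed

wiki = {}
filed = route_message(
    ["list@example.org", "faq@wiki.example.org"],
    "Re: How do I reset the widget?",
    "Hold the button for ten seconds.",
    wiki,
)
```

The same `route_message` call could serve the "forward back to a magic address" promotion path: the reader forwards the answer, and the list manager treats the magic address like a cc.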
Submitted by brad on Fri, 2005-05-06 03:55.
On both a personal and professional note, I am happy to report that the federal courts have unanimously ruled to strike down the FCC's broadcast flag (that's a PDF) due to our lawsuit against them.
I participated directly in this lawsuit, filing an affidavit on how, as a builder of a MythTV system and writer of software for MythTV, I would be personally harmed if the flag rule went into effect. The thrust of the case was that the FCC, which is empowered to regulate interstate communications, had no authority to regulate what goes on inside your PC. The court bought that, but we had to show that the actual plaintiffs in the case would be harmed, not simply the general public, thus the declarations by myself and various other members of EFF and other plaintiffs.
The broadcast flag was an insidious rule because, as I like to put it, it didn't prohibit Tivo from making a Tivo (as long as they got it certified as having pledged allegiance to the flag.) It stopped somebody from designing the next Tivo, the metaphorical Tivo, meaning bold new innovation in recording TV.
I would like to particularly thank Public Knowledge, which spearheaded this effort and funded most of it.
Here's an AP Interview with me on the issue.
Submitted by brad on Wed, 2005-05-04 05:21.
Update: A more active thread on how this relates to Goodmail and other attempts at sender-pays traffic
There is much talk these days of “who invented the internet?” Most of the talk is done wearing a network engineer’s hat, defining the internet in terms of routing IP datagrams, and TCP. Some relates to the end to end principle with a stupid network in the middle and smart endpoints. These two are valid and vital contributions, and recognition for those who built them is important.
But that’s not what the public thinks of when it hears “the internet.” They think of the collection of cool applications they use to interact with other people and distant computers. Web sites and mailing lists and newsgroups and filesharing and VoIP and downloading and chat and much more. Why did these spring into being in this way rather than on other networks?
I believe a large and necessary ingredient for “the internet” wasn’t a technological invention at all, but a billing system. The internet is based on what I call the “internet cost contract.” That contract says that each person pays for their own pipe to the center, and we don’t account for the individual traffic.
“I pay for my half, you pay for yours.”
While the end-to-end design allowed innovation and experimentation, the billing design really made it possible. In the early days of the internet, people dreamed up all sorts of bizarre applications, some serious, some entirely frivolous. They put them out there and people played with them and the most interesting thrived.
Many other networks had users paying not by the pipe, but based on traffic. In that world, had you decided to host a mailing list, or famously put a webcam up in front of your company fishtank, the next day the company beancounter would have called you into the office to ask why the company got a big bandwidth bill in order to show off the fishtank. The webcam — or FTP site or mailing list — would have been shut down immediately, and for perfectly valid reasons.
Pay-based-on-usage demands that applications be financially justifiable to live. Pay-per-pipe allowed mailing lists, ftp sites, usenet, archie, gopher and the web to explode.
Submitted by brad on Mon, 2005-05-02 06:51.
While for various reasons I believe that the efforts to enforce E911 requirements on Voice over IP phones are bogus and largely designed to make it harder for smaller players to compete with established companies, there is a legitimate need for ways to give your location to emergency services.
To protect privacy, I suggest that this be done in the endpoints. To assist this, I would propose a set of option extensions to the DHCP protocol to tell an endpoint what the server knows about its location, including address, zip and even what emergency contact center to use. This would start with RFC3825 for geolocation, and move on to other features. The endpoint device, when calling 911 or other emergency services, could include this information in the SIP invite, or provide it on request.
For those who don't know, DHCP is the system which lets a computer connect to an ethernet and ask for an IP address as well as important local network information (such as the addresses of routers, name servers, domain names etc.) Some DHCP servers know exactly who the client device is and effectively act as the client's memory. Some just give the next available address and return information about the local network area.
For example, most people with home networks, and almost all of them who use Voice over IP services like Vonage have a local network with its own DHCP server, built into the home-router they use. That home router could be told the address of the home, and all devices, including VoIP phones, could learn it. For companies, it is the same.
DHCP is also used for ISPs to give addresses to DSL and Cable modem customers who hook up to the internet without a home gateway because they have only one computer. That's pretty rare for VoIP users. In these cases they may or may not know the street address of the computer. DHCP is also very common for people who connect to wireless access points. The AP in a Starbucks could easily tell your device the address of the Starbucks.
As noted, we could start by the device fetching this address and forwarding it on with emergency calls, but not doing so for regular calls. This puts privacy control in the hands of the user, where it should be.
However, we could do even more than just give location as in rfc3825. The DHCP server could publish the direct contact information for the local area for police, fire, ambulance or general emergencies. They could simply include the contact number of a PSAP (Public Service Access Point, the gateway to emergency services) for the location, or in a corporate setting, might direct emergency calls to the corporate security desk, with the PSAP/911 as a fall-back. (There should be laws however about use of such features and protection of privacy. Network owners can already reroute any traffic but we want it to be clear how this might be done.)
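For the curious, RFC 3825 packs latitude and longitude into a 16-octet DHCP option as 34-bit two's-complement fixed-point numbers with 25 fractional bits, each preceded by a 6-bit resolution field. A simplified decoder sketch follows; it ignores the altitude fields, and while the bit offsets follow the RFC, treat it as an illustration rather than a conformant implementation.

```python
def _signed(value, bits):
    """Interpret `value` as a two's-complement number of width `bits`."""
    if value >= 1 << (bits - 1):
        value -= 1 << bits
    return value

def decode_geoconf(option):
    """Decode the 16-octet RFC 3825 GeoConf DHCP option (option 123).

    Layout (big-endian bit order): 6-bit LaRes, 34-bit latitude,
    6-bit LoRes, 34-bit longitude, 4-bit AT, 6-bit AltRes,
    30-bit altitude, 8-bit datum. Altitude fields are skipped here.
    """
    assert len(option) == 16
    n = int.from_bytes(option, "big")           # treat as one 128-bit word
    mask34 = (1 << 34) - 1
    lat_res = (n >> 122) & 0x3F
    lat = _signed((n >> 88) & mask34, 34) / (1 << 25)   # degrees
    lon_res = (n >> 82) & 0x3F
    lon = _signed((n >> 48) & mask34, 34) / (1 << 25)   # degrees
    datum = n & 0xFF
    return {"lat": lat, "lat_res": lat_res,
            "lon": lon, "lon_res": lon_res, "datum": datum}
```

An endpoint that decodes this from its DHCP lease could then attach the coordinates (or the configured street address) to the SIP invite on an emergency call, and omit them otherwise.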
Submitted by brad on Thu, 2005-04-28 08:01.
George W. Bush names Jesus as the philosopher he admires the most. The most central of the teachings of Jesus can be found in the Sermon on the Mount.
I have come upon Bush's edited version of the sermon, amended to make the dictates of his Saviour easier to follow in these modern times.
Enjoy here in the Sermon on the Mount (George Bush Version)
Submitted by brad on Tue, 2005-04-19 14:05.
During the 1990s, the US Government made a major effort to block the deployment of encryption by banning its export. We won that fight, but during the formative years of most internet protocols, they made it hard to add good authentication and privacy to internet tools. They forced vendors to jump through hoops, made users download special "encryption packs" and made encryption the exception rather than the norm in online work.
This, combined with bad design decisions made even without the help of the government, has caused some of the security windows that are bugging people today.
A recent issue is DNS poisoning, now going by the name of pharming. The scammers send fake DNS answers in advance to buggy DNS servers running on MS Windows Service Pack 2 or earlier, or very old *nix copies of bind. They tell the server that www.yourbank.com should really go to their address, which hosts a fake version of the site.
Now of course we should have made DNS reliable and secure to stop this, or at least done the very basic things found in the most up to date DNS servers, but even so, this attack should not have been enough.
That's because SSL certificates were supposed to assure that you were really talking to yourbank.com when the browser said you were, even if somebody hijacked the connection like this. And they will. The phisher can't pretend to be yourbank.com with the little "lock" icon on the status bar of your browser set to locked. But they can pretend when the icon says unlocked.
And surprise, surprise, people forget to look at the icon. A lot. They turn off the warnings about transitions to insecure pages because they go off all the time, and nobody pays attention to an alarm that's always going off. Encryption and SSL are rare, special things limited to login screens. We tolerate all the rest of life being unencrypted and in the clear -- and vulnerable, just like the USDoJ wanted it.
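The reason the pharmer can't light the lock icon is that the names in the server's certificate must match the hostname the browser asked for, and no certificate authority should sign yourbank.com's name for the pharmer. A simplified sketch of that matching step (real browsers follow stricter rules; this one only allows a wildcard as a whole leftmost-style label):

```python
def cert_matches(hostname, cert_names):
    """Check a hostname against the DNS names in a certificate.

    A name like '*.yourbank.com' matches any single label in the
    wildcard position. This is a simplification of the real rules.
    """
    host = hostname.lower().split(".")
    for name in cert_names:
        labels = name.lower().split(".")
        if len(labels) != len(host):
            continue  # label counts must agree; no multi-label wildcards
        if all(c == "*" or c == h for c, h in zip(labels, host)):
            return True
    return False
```

So even with a poisoned DNS answer sending www.yourbank.com to the pharmer's IP, the check fails unless the user ignores the unlocked icon, which, as noted, they do.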
Submitted by brad on Sun, 2005-04-17 13:20.
When people watch TV with a hard disk video recorder, they always watch the show delayed, often by hours or many days. They all watch it at a different time.
It occurs to me it would be amusing to generate a system to allow the collaborative annotation of TV programs and DVD movies using the net, and DVRs like the open source MythTV, which would be a natural initial platform. Users watching a show would be able to make comments at various points in it. Either text comments, along the lines of "Pop-up Video" or even voice comments and jokes, along the lines of "Mystery Science Theatre 3000."
And indeed, people already do this in real time. Just about every popular show generates a chat room for people who watch it live near a computer. However, these are usually quite inane, as they are done in real time with no filtering.
Thanks to delayed watching, we could change that. Each suggested annotation would be uploaded quickly to a server handling the particular TV show or movie. This would come with a pseudonym for the author, which would be tied to a reputation. All annotations would be sent out for viewing by a limited audience. For low-reputation contributors, a very limited audience. If that audience hits an "approve" button on their remote when they see the annotation, it would improve the score, and more and more early watchers would get to see and approve/disapprove of the annotation.
Eventually things would build up and you would have a series of highly approved comments for those who want to see a show with comments. I expect most comments would be jokes, but some would also be pointers to useful information or reasoned criticism. Authors might indicate what their goal is so that viewers could tune what sort of annotations they want to see. Viewers could also tune a threshold for how good the annotations have to be to see them.
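A sketch of how that staged rollout might score annotations. The seed score, the vote weights, and the audience-doubling rule are all invented illustrative numbers, not a worked-out design:

```python
# Sketch: an annotation starts with a tiny test audience seeded by its
# author's reputation; each approval widens the audience, each
# disapproval shrinks it faster.

class Annotation:
    def __init__(self, author_reputation):
        self.score = author_reputation  # seed from the author's standing
        self.votes = 0

    def record_vote(self, approved):
        self.votes += 1
        self.score += 1 if approved else -2  # disapprovals weigh more

    def audience_fraction(self):
        """Fraction of early viewers who will be shown this annotation."""
        if self.score <= 0:
            return 0.0  # effectively withdrawn
        # 1% of viewers at score 0, doubling with each point, capped at all
        return min(1.0, 0.01 * (2 ** min(self.score, 7)))
```

The cap on the exponent just keeps a runaway hit from overflowing anything; a real system would also decay scores over time and let viewers set their own threshold.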
Authors would indicate if their pop-up should show in a particular place on the screen (so that, like Pop-up Video, it doesn't block things.) Some viewers, especially those with big screen TVs, would shrink the image and redirect pop-ups outside the show.
However, there are some interesting problems to solve...
Submitted by brad on Fri, 2005-04-15 12:45.
Dear [[blog-reader's name]]:
When it first arose, in the 60s and 70s, everybody thought it was so cute and clever that computers could call us by name. Some programs even started by asking for your name, only to print "Hi, Bob!" to seem friendly in some way.
And of course a million companies were sold mailing list management tools to print form letters, filling in the name of the recipient and other attributes in various places to make the letter seem personal. And again, it was cute in its way.
But not any more. We've all figured it out. Nobody says, "Wow, this letter has 'Dear Brad' in it, it must have been written personally for me." Nobody is fooled any more. In fact, the reverse is now true. It's bordering on offensive. If an E-mail starts with "Dear Brad" it is more likely than not to be spam.
Sometimes though, I get form letters from real companies I deal with, and they still like to put my name in it, like they used to on paper. As you probably know, in E-mail today, you don't put in salutations any more unless it's a mail to a stranger.
So let's get the word out. Stop it. No more form letters where the computer oh-so-cleverly manages to fill in a field with our name. (Unless it's amusing, and they are writing to "Dear Mr. Association") If it's legitimate bulk mail, don't try to pretend you're not bulk mail. That's what spammers do. Be honest that you're bulk mail.
If you have actual relevant data to fill in, fill it in, but put it in a table so I can skip the form letter garbage and get to the actual data about me you're trying to tell me. Put my name at the top in a nice computer-style box, "Prepared for: Brad Templeton."
Leave the use of my name to people writing messages for me. You're not fooling anybody.
[[Insert name here]]
Submitted by brad on Wed, 2005-04-13 07:13.
It seems that whenever you have a popular event, notably concerts in smaller venues and certain plays, the venue sells out their tickets quickly, and then ticket speculators leap in and sell the tickets at high margins. Ticket speculating (aka scalping) is legal in some areas and illegal in others. I don't think it should be illegal, but I wonder why the venues and performers tolerate so much of the revenue going to the speculators.
Or am I wrong, and this is not happening? Is it the case that often the speculators miscalculate and lose money so they only make a modest income? It doesn't seem that way to me. Now, there are many ticket brokers with large web presences (including some who sponsor my joke site) and tickets are commonly auctioned on eBay.
So why don't the venues or ticket companies create their own auction sites to auction tickets, with some fair system like a dutch auction, and keep all the money from high-demand events for themselves? Is it simply because this seems elitist and they feel it will annoy fans?
Currently, fans are annoyed because speculators scoop up tickets to high-demand events as soon as sales open, and such events sell out quickly, before actual fans can get them. That seems far worse to me. An auction system would actually allow lesser tickets to sell for less money and generate the same revenue for the event.
This seems so obvious, why isn't it taking place? Is it simply inertia, or a fear of requiring computer access in order to get tickets? While just about anybody can get computer access these days, dutch auctions can be done by phone if you trust the 3rd party managing the auction. Call in once, set your maximum bid for the various ticket classes you will accept, then find out the resulting price later. People at computers would have a small advantage, but not that much. The venue could set a floor/reserve price if they don't want to cheapen the value of their product.
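A sketch of the clearing step for such an auction. I use uniform pricing, where every winner pays the lowest winning bid, which is what ticket auctions usually mean by a "dutch auction" and which is exactly why phone bidders who just set a maximum aren't much disadvantaged against people at computers:

```python
def clear_dutch_auction(bids, seats, reserve=0):
    """Uniform-price clearing: the top `seats` bidders win, and every
    winner pays the same price: the lowest winning bid.

    bids: list of (bidder, max_bid) pairs. reserve: the floor price the
    venue sets so as not to cheapen the product. Returns (winners, price).
    """
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)
    winners = [(name, bid) for name, bid in ranked[:seats] if bid >= reserve]
    if not winners:
        return [], None  # nothing sold above the reserve
    price = winners[-1][1]  # everyone pays the lowest winning bid
    return [name for name, _ in winners], price
```

Because you only ever pay the clearing price, not your own maximum, bidding your true maximum is the sensible strategy, and speculators lose the gap they currently pocket.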
Or is this a business opportunity for some company (or for Ticketmaster?)
Submitted by brad on Tue, 2005-04-12 05:07.
Linux distributions with package managers like apt promise an easy world of installing lots of great software. But they've fallen down in one respect here. There are thousands of packages for the major distributions (I run three of them: Debian, Fedora Core and Gentoo), but most packages depend on several other packages.
The developers and packagers tend to run recent, even bleeding-edge versions of their systems. So when they package, the software claims it depends on very recent versions of other programs, even if it doesn't. This is not surprising -- testing on lots of old systems is drudgework nobody relishes doing.
So when you see a new software package you want, the ideal is you can just grab it with apt-get or yum. The reality is you can only do this if you're running a highly up-to-date system. Debian has become the worst offender. Debian's "Stable" distribution is several years old now. To run debian reasonably, even to just be able to upgrade to fix bugs in software you use, you have to run the testing distribution, and most probably the unstable one. I run the unstable, and it's more stable than the name implies, but ordinary users should not be expected to run an unstable distribution.
To get new software, you are often forced to upgrade, sometimes your whole OS. And that's free to do and often it works, but you can't depend on it. More than once I have lost a day of uptime to major upgrade efforts.
Let's contrast that with Windows. The vast majority of Windows programs will install, in their latest version, on 7 year old Windows 98, and almost all will install on 5 year old Windows 2000. This is partly because Windows has fewer milestones to test to, but also because coders know that it's quite a hurdle to insist users pay money to upgrade Windows. (And Windows upgrades are even more of a pain than linux ones.)
The linux approach ends up forcing the user to choose between the risky course of constant incremental upgrades, taking occasional random plunges into major upgrades, or simply not being able to run interesting new software or the latest versions and fixes of older software.
That's a failure. Non-guru users are not able to deal with any of those choices.
Testing with every different version of every dependent package (and every kernel) is not going to happen, but it would be nice if packagers worked hard to figure out what versions of dependencies they really need, even if they don't test it enough. Packages might say, "I was tested with 2.1; I probably work with 1.0 though." Then wait for test reports and possibly report being tested with earlier and earlier dependencies.
This doesn't mean that sometimes you won't truly need the latest version of a dependency, and shouldn't say so. But it sure would make it easier for the ordinary user to participate in linux if this were the exception, not the rule.
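A sketch of what such a two-floor dependency check might look like. The field names and the simplistic dotted-version comparison are my own invention, not any real package manager's scheme:

```python
# Sketch: a dependency carries two version floors, "tested with" and
# "probably works with", and the resolver distinguishes known-good
# from untested-but-plausible instead of flatly refusing.

def vtuple(v):
    """Naive dotted-version parse: '2.1' -> (2, 1). Numeric parts only."""
    return tuple(int(x) for x in v.split("."))

def check_dep(installed, tested_with, probably_min):
    """Classify an installed dependency against the two floors."""
    if vtuple(installed) >= vtuple(tested_with):
        return "ok"
    if vtuple(installed) >= vtuple(probably_min):
        return "untested"   # install anyway, but ask the user to report back
    return "too-old"
```

The "untested" verdict is the interesting one: the package installs without forcing a distribution upgrade, and the test reports that flow back let the packager lower (or raise) the floors with evidence.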
Submitted by brad on Sat, 2005-04-09 18:51.
In this article about a wall-building robot we see another step towards automatic construction, moving the 3-D printer concept onto the grand scale. This is very interesting and could be expanded quite a bit. It notes that arms could add texture to ceramic walls, but I would go further.
Why not create a texturing head which consists of strong metal pins on high-speed servos? You could drag this over the surface of malleable material, moving the servos back and forth under computer control along raster lines. This would allow the generation of any digital image in 3-D on the wall, to a limited amount of depth.
You could do simple things like textures, or pleasing graphics of plants or nice patterns, but sculptors could also generate interesting forms of art for people to place in 3-D on their walls.
This could also be done on modern drywall. A set of rails could be mounted on a wall. A robot would run on the rails, first applying stucco, then when it is at the right consistency, run the "print head" to place patterns or sculpture into the stucco.
You might be able to do full 3-D printing, though I see that as harder to do on a vertical surface, by having a "stucco-jet" with various coloured ceramics in the pipes and individually controlled pumps to push out the right material at the right time, possibly for further shaping by the servo-pins, though I suspect those would be better with monocolour.
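The first step of driving such a head is easy to sketch: turn a greyscale image into per-pin depths along each raster line. The depth limit is an assumed number; a real head would also have to account for pin spacing and how the material flows:

```python
# Sketch: map image brightness to pin depth, one output line per raster
# line, darker pixels pressing deeper into the malleable surface.

def raster_depths(image_rows, max_depth_mm=5.0):
    """image_rows: rows of 0-255 greyscale values; darker = deeper.

    Returns one list of pin depths (mm) per raster line, ready to feed
    to the servos as the head sweeps across the wall.
    """
    lines = []
    for row in image_rows:
        lines.append([(255 - px) / 255.0 * max_depth_mm for px in row])
    return lines
```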
Submitted by brad on Thu, 2005-04-07 11:36.
Earlier I reported on Peerflix, which is implementing a P2P DVD sharing system with similarities to some of my own ideas. I have now tried it out a bit, and learned a bit more.
The web site is marked beta and still very buggy, which is bad, but my first try on the service was first-rate. I mailed off my first DVD, Eternal Sunshine of the Spotless Mind, on Wednesday to somebody in San Jose (who almost surely got it today) and got the replacement for it — by strange coincidence another memory-related movie called Memento in the mail today. That is faster than most of the services, though people like Netflix could be this fast if they decided to take the same step and trust you when you said you mailed a disk, rather than waiting for it to arrive.
All this is good, but there’s still a killer flaw in the idea of actually selling the DVDs. All DVDs will have a limited lifetime of high demand. As demand drops below supply, somebody holding the DVD at that time will get “stuck” with it, though you can fix that by being fast on the draw in agreeing to be the one to mail any new requesters that do come along.
Submitted by brad on Mon, 2005-04-04 14:38.
Perhaps this is one of those ideas that some car has implemented and I haven't yet seen. As many people know, several years ago a number of cars were arranged so that their interior lights would not go off immediately when you closed up the car. This gives you the ability to still see shortly after closing up the car and walking away.
Of course this also drives people nuts, because in many cases you can't tell if the lights stayed on because you didn't close a door properly, and you would end up waiting around to see if they would go off.
Some cars fixed this by having the light fade out, but that's still pretty slow and of course eliminates the light you were hoping for.
I would suggest that cars develop some more overt signal, triggered immediately when the car has decided that all doors are closed, the car is off, and the lights will be going off in 20 seconds: a quick blink pattern when you close the door, a flash of the headlights, a quiet sound or a bright internal LED.
Seeing this blink pattern, you would be 100% confident the car is closed and you haven't left the lights on, and could walk away, lit for a few seconds like you want.
Submitted by brad on Tue, 2005-03-29 13:10.
Death Valley normally gets 1.5" of rain a year, but this year it got over six inches, so we headed down to the greatest spring wildflower show in 50 years and were not disappointed.
My preliminary gallery of Death Valley Wildflower Photos is now up. Of course I also shot many panoramas but have not yet assembled them. (I've been barely using Windows of late so I need to get a box rebuilt.) I will announce when the panoramas are available.
Submitted by brad on Fri, 2005-03-25 07:10.
Here's a business idea for both mobile phone companies and people who operate those giant digital signs in public places (such as malls and the Times Square jumbotron.)
Let people text a message to the sign for a lucrative but affordable fee. It would then display ASAP, though possibly a human would have to check for "offensive" messages, whatever that means. You could see people putting up love notes to their valentines as they both go by the sign, rivals having battles and debates in their messages, etc. It could be both entertaining and lucrative. The texted billboards (or ones submitted from a web form, with graphics) would contain a bar with the texting number or URL so people can enter their own. If it were cheap enough you might see crowds stopping to enjoy the battles on the jumbotron.
Submitted by brad on Tue, 2005-03-22 06:12.
For the past couple of years, I've been mulling over an idea for a different kind of DVD "rental" company, similar in ways to the popular NetFlix. Now I have encountered a new company called Peerflix which is doing something similar. Is it annoying or vindicating to see somebody else run with something? :-)
So instead I will comment on Peerflix, which I am going to try out, and what I planned to do differently.
The rough idea is a movie network that doesn't own the movies. The members do. The members declare what disks they have that are available to go out (key in or scan UPC codes or just put disks in drives) and, just like netflix, they also browse the list of DVDs and pick what they would like to rent. For each disk you have out, you are entitled to one in (approximately), and somebody close to you, who has the disk you want, is told to mail it to you.
Once scaled up, it's faster than netflix (the disk is mailed to you directly from the last person to have it, rather than going through the warehouse), but mainly it's vastly cheaper. In theory it could even run for free, with postage and mailers being the only cost -- plus of course the initial disks you introduce into the system. Netflix's 3-at-a-time plan is $216/year; one-at-a-time is $120/year.
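A sketch of the matching step such a network needs: pair a requester with the nearest member holding the disc, and take the disc out of circulation while it's in the mail. The data structures and the abstract distance function are my guesses for illustration, not Peerflix's actual design:

```python
# Sketch: one-out-one-in matching. holders maps each member to the set
# of titles they have available to send; distance stands in for
# zip-code proximity.

def match_request(title, requester, holders, distance):
    """Pick who should mail `title` to `requester`.

    Returns the chosen sender, or None if nobody else has the disc.
    The sender's copy is marked unavailable (it's in the mail).
    """
    candidates = [m for m, titles in holders.items()
                  if title in titles and m != requester]
    if not candidates:
        return None
    sender = min(candidates, key=lambda m: distance(requester, m))
    holders[sender].discard(title)  # the disc is now in the mail
    return sender
```

The "for each disk you have out, you are entitled to one in" rule would sit on top of this as a credit counter, incremented when you mail and decremented when you receive.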
There are, however, a number of interesting problems to solve in doing this, and some special factors you may not know about Netflix.
Submitted by brad on Mon, 2005-03-21 07:26.
Here John Dunn suggests sending an AI to negotiate with any aliens we discover via SETI.
This raises an interesting question. If SETI worked, and we got a signal from an alien intelligence, and the signal was understood to be a description of a computer architecture followed by a big, long, undecipherably complex computer program -- possibly an AI -- could we dare run it?
Oh, it would be so tempting to run it. Contact with an alien species, possible untold wealth of knowledge, solutions to all our problems and more. But if it can contain those things, it's probably smarter than us. And as an alien, it has its own goals, which are alien to ours.
AI pundit Eliezer Yudkowsky spends much of his time warning about the dangers of even a human-designed AI, and has developed a convincing argument that it's next to impossible to keep something much smarter than you locked up in a box no matter how much you resolve to do so. It's probable we couldn't keep the alien AI in a box either as it does a superhumanly good job of convincing us just what wonderful things it could do for humanity (or just the people with keys to the box) if released.
Indeed, a good strategy for a growth-oriented AI creature would be to broadcast itself out at lightspeed, in the hope that other creatures would run it, and it could then use their resources to build more computers on which to run itself and transmitters with which to transmit itself. It might even do that at the same time as providing wonderful benefits for the host culture, or of course it could toss them by the wayside as it saw fit.
Remind you of Pandora? In Contact by Carl Sagan, the aliens send plans for an FTL transporter, which presumably is a physical device with no AI, so they are able to build it. They debate building even that, worrying if it's a weapon, but the debate would be much more on an AI, and probably end up in the negative.
Submitted by brad on Sun, 2005-03-20 10:35.
I have looked at a lot of image management programs, though not all of them, and been surprised that none match what I think should be a very common workflow. Sure, they all let you browse your photos and thumbnails of them, move them around, and rename them. And some let you do the functions I describe but usually doing them to a lot of photos is cumbersome because they only have a slow mouse interface or a poor keyboard interface.
Here's what I want to do, and right now use a combination of programs to make happen.
- First, pick the "potential winners" from a set of photos. That means letting me, with a single keystroke, copy the selected photo (or mark it for later copying) to a directory of the best shots I will actually put on the web. Two keystrokes here is too many. This must be done from full-screen view, not from thumbnails or reduced views; you can only truly judge a winner in full screen. Thus, in this view, we should have basic movement on keys (space for next photo, backspace for previous is common) and a keystroke to tag/copy and go to the next, or at least to tag/copy before I hit space for the next. A way to go back and undo would be nice. xzgv almost does this.
- Then scan the winners again and remove the duplicates. Often you will have 2 or 3 good shots of a subject that all were potential winners. So now it's time to quickly delete (no confirmations here, these are just copies) the other candidates and leave the winner. Quick switch between full screen view and a multi-photo view is a plus here.
Because serious photographers take several shots of everything interesting, scanning for the winner often involves comparison with the other shots in the photo sequence. A perfect UI for this is hard, though a clever program could spot images bunched together in time or even (with advanced algorithms) similar in composition. A strip of thumbnails to get a sense of all the shots of an item while picking the one winner would be good. A quick switch to a tiled view of all the potential winners at maximum size, with a way to pick the winner (here mouse click makes sense) also could be good. This ability is of use not just in duplicate scanning but also initial winner picking. I tend to find that I will see an image, tag it as a winner, then move on to next image to notice the next one is even better. It would be nice to know in advance that might be so (thus the thumbnail strip.)
- Once I have the winners, put them into categories. Create a series of named directories, and quickly move the photos into them. Here's where a traditional thumbnail browser which lets you select multiple photos and move them works well. Most programs do this step OK.
- Once I have the winners in categories, caption them. Again, it should be really fast. View photo (at least 1/4 screen size, not a thumbnail) and type in the caption. Then a single keystroke to go to next photo to caption it. Caption should go into jpeg caption, or a simple file that can be worked with later. ACDsee comes close to doing this but they use ugly keystrokes.
- Next, order them for presentation on a web page. Not necessarily by date or sequence number or caption.
- Finally, generate a web gallery or slide show based on the order and captions and sorting. Or, in my case leave available the data for my own scripts to do this.
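The "bunched together in time" detection mentioned in the culling step is simple to sketch: shots within a small gap of the previous shot are treated as one burst of the same subject, so the picker can present them together when choosing the single winner. The 30-second default is an assumed value:

```python
# Sketch: group shots into bursts by timestamp gaps, so the thumbnail
# strip can show all shots of one subject while picking the winner.

def group_bursts(timestamps, gap_seconds=30):
    """timestamps: sorted shot times in seconds. Returns lists of
    indices, one list per burst of consecutively-taken shots."""
    groups = []
    for i, t in enumerate(timestamps):
        if groups and t - timestamps[i - 1] <= gap_seconds:
            groups[-1].append(i)  # continues the current burst
        else:
            groups.append([i])    # starts a new burst
    return groups
```

Timestamps would come from the EXIF data; the more advanced composition-similarity grouping would refine these bursts rather than replace them.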
Some programs, as I note, come close. However, often they use cumbersome keys (alt-keys and ctrl-keys when regular letters would do) or they require confirmations on frequently performed acts (useless, as you quickly learn to confirm automatically, just wasting your time and providing no protection.)
But does any system do all this, for linux or windows? Let me know.