Best Of Blog

Giving up the unprovable ballot

Yesterday, I wrote about election goals. Today I want to talk about one of the sub-goals, the unprovable ballot, because I am running into more people who argue it should be abandoned in favour of other goals. Indeed, they argue, it has already been abandoned.

As I noted, our primary goal is that voters cast their true desire, independent of outside pressure. If voters can’t demonstrate convincingly how they voted (or indeed if it’s easy to lie), then they can say one thing to those pressuring them and vote another way without fear of consequences. This is sometimes called the “secret ballot,” but in fact it bundles two different types of secrecy.

The call to give this up is compelling. We can publish, to everybody, copies of all the ballots cast — for example, on the net. Thus anybody can add up the ballots and feel convinced the counts are correct, and anybody can look and find their own ballot in the pool and be sure their vote was counted. If only a modest number of random people take the time to find their ballot in the published pool, we can be highly confident that no significant number of ballots were lost, altered or miscounted. It becomes impossible to steal a ballot box or program a machine not to count a vote. It’s still possible to add extra ballots — such as the classic Chicago dead voters — though with enough checking even this can be noticed by the public if it’s done in one place.

The result is a very well verified election, and one the public feels good about. No voter need have any doubt their vote was counted, or that any votes were altered, miscounted, lost or stolen. This concept of “transparency” has much to recommend it.

Further, it is argued, many jurisdictions long ago gave up on unprovable ballots when they allowed vote by mail. The state of Oregon votes entirely by mail, making it trivial to sell your ballot or be pushed into showing it to your spouse. While some jurisdictions only allow limited vote by mail for people who really can’t get to the polls, some allow it upon request. In California, up to 40% of voters are taking advantage of this.

Having given up the unprovable ballot, why should we not claim all the advantages the published ballot can give us? Note that the published ballots need not have names on them. One can give voters a receipt that will let them find their true ballot but not let anybody who hasn’t seen the receipt look up any individual’s vote. So disclosure can still be optional.
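To make the receipt idea concrete, here is a minimal sketch in Python of one way it could work, assuming the receipt is simply a random secret handed to the voter and the public pool is keyed by a hash of it (the function names and the dictionary pool are mine, not any real election system’s):

```python
import hashlib
import secrets

def issue_receipt(ballot_text, published_pool):
    """Record a cast ballot in the public pool under an unlinkable
    identifier, and hand the voter a secret receipt for finding it."""
    receipt = secrets.token_hex(16)            # secret kept by the voter
    locator = hashlib.sha256(receipt.encode()).hexdigest()
    published_pool[locator] = ballot_text      # the pool is public; no names
    return receipt

def find_my_ballot(receipt, published_pool):
    """Anybody holding the receipt can locate the ballot; nobody else can."""
    locator = hashlib.sha256(receipt.encode()).hexdigest()
    return published_pool.get(locator)

pool = {}
r = issue_receipt("Mayor: Jane Roe", pool)
assert find_my_ballot(r, pool) == "Mayor: Jane Roe"
```

Note the trade-off the sketch makes explicit: anybody you show the receipt to can also look up the ballot, which is exactly the provability the original goal was designed to prevent.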

The paradox of identity management

Since the dawn of the web, there has been a call for a “single sign-on” facility. The web consists of millions of independently operated web sites, many of which ask users to create “accounts” and sign-on to use the site. This is frustrating to users.

Today the general single sign-on concept has morphed into what is now called “digital identity management” and is considerably more complex. The most recent project of excitement is OpenID, a standard that lets users log on with an identifier which can be the URL of an identity service, possibly even one they run themselves.

Many people view OpenID as positive for privacy because of what came before it. The first major single sign-on project was Microsoft Passport, which came under criticism both because all your data was managed by a single company and because that single company was a fairly notorious monopoly. To counter that, the Liberty Alliance project was brewed by Sun, AOL and many other companies, offering a system not run by any single company. OpenID is simpler and even more distributed.

However, I feel many of the actors in this space are not considering an inherent paradox that surrounds the entire field of identity management. On the surface, privacy-conscious identity management puts control over who gets identity information in the hands of the user. You decide who to give identity info to, and when. Ideally, you can even revoke access, and push for minimal disclosure. Kim Cameron summarized a set of laws of identity outlining many of these principles.

In spite of these laws, one of the goals of most identity management systems has been ease of use. And who, on the surface, can argue with ease of use? Managing individual accounts at a thousand web sites is hard. Creating new accounts for every new web site is hard. We want something easier.

The paradox

However, here is the contradiction. If you make something easy to do, it will be done more often. It’s hard to see how this can’t be true. The easier it is to give somebody ID information, the more often it will be done. And the easier it is to give ID information, the more palatable it is to ask for, or demand it.

Stig's Inferno Final Issue

In the 1980s, my brother Ty Templeton published his first independent comic book series, Stig’s Inferno. He went on to considerable fame writing and drawing comics for Marvel, D.C. and many others, including favourite characters like Superman, Batman and Spider-Man, as well as a lot of comics associated with TV shows like The Simpsons and Ren and Stimpy. But he’s still at his best doing original stuff.

You may not know it, but years ago I got most of Stig’s Inferno up on the web. Just this week, however, a fan scanned in the final issue and I have converted it into web pages. The fan also scanned the covers and supplemental stories from the issues; those will be put up later.

So if you already enjoyed the other episodes, journey now to Stig’s Inferno #7.

If you’ve never looked, go to the main Stig’s Inferno page. You can also check out small versions of all the issue covers.

I’ll announce when the supplemental stories are added.

The comic tells a variation of Dante’s Inferno, where our hero Stig is killed by the creatures that live in his piano and makes a strange journey through the netherworld. It’s funny stuff, and I’m not just saying it because he’s my brother. Give it a read.

Updating the Turing Test

Alan Turing proposed a simple test for machine intelligence. Based on a parlour game where players try to tell if a hidden person is a man or a woman just by passing notes, he suggested we define a computer as intelligent if people can’t tell it from a human being through conversations with both over a teletype.

While this seemed like a great test (for those who accept that external equivalence is sufficient), in fact, to the surprise of many people, computers passed this test long ago with ordinary, untrained examiners. Today there is an implicit extension of the test: the computer must be able to fool a trained examiner, typically an AI researcher or an expert in brain sciences, or both.

I am going to propose updating it further, in two steps. Turing proposed his test perhaps because at the time, computer speech synthesis did not exist, and video was in the distant future. He probably didn’t imagine that we would solve the problems of speech well before we got a handle on actual thought. Today a computer can, with a bit of care in programming inflections and such into the speech, sound very much like a human, and we’re much closer to making that perfect than we are to getting a Turing-level intelligence. Speech recognition is a bit behind, but also getting closer.

So my first updated proposal is to cast aside the teletype, and make it be a phone conversation. It must be impossible to tell the computer from another human over the phone or an even higher fidelity audio channel.

The second update is to add video. We’re not as far along here, but again we see more progress, both in the generation of digital images of people, and in video processing for object recognition, face-reading and the like. The next stage requires the computer to be impossible to tell from a human in a high-fidelity video call. Perhaps with 3-D goggles it might even be a 3-D virtual reality experience.

A third potential update is further away, requiring a fully realistic android body. In this case, however, we don’t wish to constrain the designers too much, so the tester would probably not get to touch the body, or weigh it, or test if it can eat, or stay away from a charging station for days etc. What we’re testing here is the being’s “presence” — fluidity of motion, body language and so on. I’m not sure we need this test as we can do these things in the high fidelity video call too.

Why these updates, which may appear to divert from the “purity” of the text conversation? For one, things like body language, nuance of voice and facial patterns are a large part of human communication and intelligence, so to truly accept that we have a being of human level intelligence we would want to include them.

Secondly, however, passing this test is far more convincing to the general public. While the public is not very sophisticated and thus can even be fooled by an instant messaging chatbot, the feeling of equivalence will be much stronger when more senses are involved. I believe, for example, that it takes a much more sophisticated AI to trick even an unskilled human if presented through video, and not simply because of the problems of rendering realistic video. It’s because these communications channels are important, and in some cases felt more than they are examined. The public will understand this form of Turing test better, and more will accept the consequences of declaring a being as having passed it — which might include giving it rights, for example.

Though yes, the final test should still require a skilled tester.

Replacing the FCC with "don't be spectrum selfish."

Radio technology has advanced greatly in the last several years, and will advance more. When the FCC opened up the small “useless” band where microwave ovens operate to unlicensed use, it generated the greatest period of innovation in the history of radio. As my friend David Reed often points out, radio waves don’t interfere with one another out in the ether. Interference only happens at a receiver, usually due to bad design. I’m going to steal several of David’s ideas here and agree with him that a powerful agency founded on the idea that we absolutely must prevent interference is a bad idea.

My overly simple summary of a replacement regime is just this, “Don’t be selfish.” More broadly, this means, “don’t use more spectrum than you need,” both at the transmitting and receiving end. I think we could replace the FCC with a court that adjudicates problems of alleged interference. This special court would decide which party was being more selfish, and tell them to mend their ways. Unlike past regimes, the part 15 lesson suggests that sometimes it is the receiver who is being more spectrum selfish.

Here are some examples of using more spectrum than you need:

  • Using radio when you could have readily used wires, particularly the internet. This includes mixed mode operations where you need radio at the endpoints, but could have used it just to reach wired nodes that did the long haul over wires.
  • Using any more power than you need to reliably reach your receiver. Endpoints should talk back if they can, over wires or radio, so you know how much power you need to reach them.
  • Using an omni antenna when you could have used a directional one.
  • Using the wrong band — for example using a band that bounces and goes long distance when you had only short-distance, line of sight needs.
  • Using old technology — for example not frequency hopping to share spectrum when you could have.
  • Not being dynamic — if two transmitters that can’t otherwise avoid interfering exist, they should figure out how one of them will fairly switch to a different frequency (if hopping isn’t enough).

As noted, some of these rules apply to the receiver, not just the transmitter. If a receiver uses an omni antenna when they could be directional, they will lose a claim of interference unless the transmitter is also being very selfish. If a receiver isn’t smart enough to frequency hop, or tell its transmitter what band or power to use, it could lose.

Since some noise is expected not just from smart transmitters, but from the real world and its ancient devices (microwave ovens included), receivers should be expected to tolerate a little interference. If they’re hypersensitive to interference and don’t have a good reason for it, it’s their fault, not necessarily the source’s.

How to stop people from putting widescreen TVs in stretch mode

(Note I have a simpler article for those just looking for advice on how to get their Widescreen TV to display properly.)

Very commonly today I see widescreen TVs being installed, both HDTV and normal. Flat panel TVs are a big win in public places since they don’t have the bulk and weight of the older ones, so this is no surprise, even in SDTV. And they are usually made widescreen, which is great.

Yet almost all the time, I see them configured so they take standard def TV programs, which are made for a 4:3 aspect ratio, and stretch them to fill the 16:9 screen. As a result everybody looks a bit fat. The last few hotel rooms I have stayed in have had widescreen TVs configured like this. Hotel TVs lock you out of the setup mode, offering a remote control which includes the special hotel menus and pay-per-view movie rentals. So you can’t change it. I’ve called down to the desk to get somebody to fix the TV, and they often don’t know what I’m talking about; if somebody does come, it takes quite a while to find someone who understands it.

This is probably because I routinely meet people who claim they want to set their TV this way. They just “don’t like” having the blank bars on either side of the 4:3 picture that you get on a widescreen TV. They say they would rather see a distorted picture than see those bars. Perhaps they feel cheated that they aren’t getting to use all of their screen. (Do they feel cheated with a letterbox movie on a 4:3 TV?)

It is presumably for those people that the TVs are set this way. For broadcast signals, a TV should be able to figure out the aspect ratio. NTSC broadcasts are all in 4:3, though some are letterboxed inside the 4:3 which may call for doing a “zoom” to expand the inner box to fill the screen, but never a “stretch” which makes everybody fat. HDTV broadcasts are all natively in widescreen, and just about all TVs will detect that and handle it. (All U.S. stations that are HD always broadcast in the same resolution, and “upconvert” their standard 4:3 programs to the HD resolution, placing black “pillarbox” bars on the left and right. Sometimes you will see a program made for SDTV letterbox on such a channel, and in that case a zoom is called for.)

The only purpose the “stretch” function has is for special video sources like DVD players. Today, almost all widescreen DVDs use the superior “anamorphic” widescreen method, where the full DVD frame is used, as it is for 4:3 or “full frame” DVDs. Because TVs have no way to tell DVD players what shape they are, and DVD players have no way to tell TVs whether the movie is widescreen or 4:3, you need to tell one or both of them about the arrangement. That’s a bit messy. If you tell a modern DVD player what shape TV you have, it will do OK because it knows what type of DVD it is. DVD players, presented with a widescreen movie and a 4:3 TV, will letterbox the movie. However, if you have a DVD player that doesn’t know what type of TV it is connected to, and you play a DVD, you have to tell the TV to stretch or pillarbox. This is why the option to stretch is there in the first place.
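To see what the stretch and pillarbox modes are actually doing, here is a little arithmetic sketch (plain Python, with a common panel size as the example):

```python
def fit_4_3_on_16_9(screen_w, screen_h):
    """Pillarbox: scale the 4:3 picture to full screen height and
    center it; the leftover width becomes the two side bars."""
    pic_w = screen_h * 4 // 3          # correct width at full height
    bar = (screen_w - pic_w) // 2      # width of each pillar bar
    return pic_w, bar

# On a 1920x1080 panel the 4:3 picture is 1440 wide, with 240-pixel bars.
print(fit_4_3_on_16_9(1920, 1080))    # (1440, 240)

# "Stretch" instead scales that 1440 out to 1920, so everything in the
# picture ends up a third wider than it should be.
print(1920 / 1440)                    # 1.333...
```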

However, now that it’s there, people are using it in really crazy ways. I would personally disable stretch mode when playing from any source that is not a video player on a direct input, but as I said, people are actually asking for the image to be incorrectly stretched to avoid seeing the bars.

So what can we do to stop this, and to get the hotels and public TVs to be set right, aside from complaining? Would it make sense to create “cute” pillarbars perhaps with the image of an old CRT TV’s sides in them? Since HDTVs have tons of resolution, they could even draw the top and bottom at a slight cost of screen size, but not of resolution. Some TVs offer the option of gray, black and white pillars, but perhaps they can make pillars that somehow match the TV’s frame in a convincing way, and the frame could even be designed to blend with the pillars.

Would putting up fake drapes do the job? In the old days of the cinema, movies came in different widths sometimes, and the drapes would be drawn in to cover the left and right of the screen if the image was going to be 4:3 or something not as wide. They were presumably trying to deal with the psychological problem people have with pillarbars.

Or do we have to go so far as to offer physical drapes or slats which are pulled in by motors, or even manually? The whole point of flatscreen TVs is we don’t have a lot of room to do something like this, which is why it’s better if virtual. And of course it’s crazy to spend the money such things would cost, especially if motorized, to make people feel better about pillarbars.

I should also note that most TVs have a “zoom” mode, designed to take shows that end up both letterboxed and pillarbarred and zoom them to properly fit the screen. That’s a useful feature to have — but I also see it being used on 4:3 content to get rid of the pillarbars. In this case at least the image isn’t stretched, but it does crop off the top and bottom of the image. Some programs can tolerate this fine (most TV broadcasts expect significant overscan, meaning that the edges will be behind the frame of the TV) but of course on others it’s just as crazy as stretching. I welcome other ideas.

Update: Is it getting worse, rather than better? I recently flew on Virgin America airlines, which has widescreen displays on the back of each seat. They offer you movies (for $8) and live satellite TV. The TV is stretched! There’s no setting to change it, though if you go to their “TV chat room” you will see it in proper aspect, at 1/3 the size. I presume the movies are widescreen at least.

Cell carriers, let us have more than one phone on the same number

Everybody’s got old cell phones, which sit in closets. Why don’t the wireless carriers let customers cheaply have two or more phones on the same line? That would mean that when a call came in, both phones would ring (and your landlines if you desire) and you could answer in either place. You could make calls from either phone, though not both at the same time.

Right now they offer family plans, which let you have a 2nd “line” on the same account. That doesn’t save much money for the 2nd, though the 3rd and 4th are typically only $10 extra. That’s a whole extra number and account, however.

Letting customers do this should be good for the cell companies. You would be more likely to make a cell call. People would leave cells in their cars, or at other haunts (office, home or even in a coat pocket) for the “just in case I forget my cell” moments. That means more minutes billed.

The only downside is that you might see people trying to share: the very poor, some couples, or perhaps families wanting to give a single number to a group of kids. While for most people the party line arrangement would be inconvenient, if it becomes a real problem, a number of steps could be taken to avoid it (the first is sketched in code after the list):

  • They know where the phones are. Thus don’t allow phone A to make/answer a call a short interval after phone B if they are far apart. If they are 1 hour apart, require an hour’s time.
  • To really stop things, require the non-default phones to register by calling a special number. When one phone registers, the others can’t make calls. Put limits on switching the active phone, possibly based on phone location as above.
  • Shortly after registering, or making/receiving a call, only the active phone receives calls.
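As a rough illustration of that first rule, here is how the carrier-side check might look. This is only a sketch; the speed assumption and the data the carrier would feed it are mine:

```python
from dataclasses import dataclass

@dataclass
class LastUse:
    hours_ago: float      # time since the other phone was last used
    distance_km: float    # distance between the two phones, from tower data

def may_switch(last: LastUse, assumed_speed_kmh: float = 100.0) -> bool:
    """Allow phone A to go active only if one person could plausibly
    have traveled from phone B's location in the elapsed time."""
    travel_hours = last.distance_km / assumed_speed_kmh
    return last.hours_ago >= travel_hours

print(may_switch(LastUse(hours_ago=0.1, distance_km=200)))  # False: sharing
print(may_switch(LastUse(hours_ago=3.0, distance_km=200)))  # True: one person
```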

I don’t think these steps are necessary, but if implemented they would make sharing very impractical and thus this service could be at no extra charge, or at worst a nominal charge of a buck or two. It could also be charged for only in months or days it’s actually used.

This is a great service for the customer and should make money for the cell co. So why don’t they?

Let the world search for the lost

There is a story that Ikonos is going to redirect a satellite to do a high-res shot of the area where CNet editor James Kim is missing in Oregon. That’s good, though sadly, too late, but they also report not knowing what to do with the data.

I frankly think that while satellite is good, for something like this, traditional aerial photography is far better, because it’s higher resolution, higher contrast, can be done under clouds, can be done at other than a directly overhead angle, is generally cheaper and on top of all this can possibly be done from existing searchplanes.

But what to do with such hi-res data? Load it into a geo-browsing system like Google Earth or Google Maps or Microsoft Live. Let volunteers anywhere in the world comb through the images and look for clues about the missing person or people. Ideally, allow the map to be annotated so that people don’t keep reporting the same clues or get tricked by the same mistakes. (In addition to annotation, you would want to track which areas had been searched the most, and offer people suggested search patterns that cover unsearched territory or special territory of interest.)
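The coverage-tracking part of that is simple to sketch, assuming the imagery is cut into tiles and volunteers report which tiles they have examined (the class below is hypothetical, not a feature of any of these tools):

```python
from collections import Counter

class CoverageMap:
    """Count how many volunteers have examined each image tile, so the
    system can steer new searchers toward unsearched territory."""
    def __init__(self):
        self.views = Counter()

    def mark_searched(self, tile):            # tile = (row, col) in the grid
        self.views[tile] += 1

    def least_searched(self, all_tiles, n=5):
        return sorted(all_tiles, key=lambda t: self.views[t])[:n]

cov = CoverageMap()
tiles = [(r, c) for r in range(3) for c in range(3)]
cov.mark_searched((0, 0)); cov.mark_searched((0, 0)); cov.mark_searched((1, 1))
print(cov.least_searched(tiles))   # tiles nobody has looked at come first
```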

These techniques are too late for Kim, but the tools could be ready for the next missing person, so that a plane could be overflying an area on short notice, and the data processed and up within just minutes of upload and stitching.

Right now Google’s tools don’t have any facility for looking at shots from an angle, while Microsoft’s do but without the lovely interface of Keyhole/Google Earth. Angle shots can do things like see under some trees, which could be important. This would be a great public service for some company to do, and might actually make searches far faster and cheaper. Indeed, in time, people who are lost might learn that, if they can’t flash a mirror at a searchplane, they should find a spot with a view of the sky and build some sort of artificial glyph on the ground. If there were a standard glyph, algorithms could even be written to search for it in pictures. With high-res aerial photography the glyph need not be super large.
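For the standard glyph, the core search step could be ordinary template matching, for example with OpenCV. A real system would have to try multiple scales and rotations and tolerate partial occlusion; this sketch (with hypothetical file names) shows only the basic matching step:

```python
import cv2

def find_glyph(aerial_path, glyph_path, threshold=0.7):
    """Scan one aerial photo for the standard ground glyph using
    normalized cross-correlation template matching."""
    aerial = cv2.imread(aerial_path, cv2.IMREAD_GRAYSCALE)
    glyph = cv2.imread(glyph_path, cv2.IMREAD_GRAYSCALE)
    scores = cv2.matchTemplate(aerial, glyph, cv2.TM_CCOEFF_NORMED)
    _, best, _, loc = cv2.minMaxLoc(scores)
    return (loc, best) if best >= threshold else None

hit = find_glyph("frame_0413.png", "glyph.png")
if hit:
    print("possible glyph at", hit[0], "score", round(hit[1], 2))
```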

Update: It’s also noted the Kims had a cell phone, and were found because their phone briefly synced with a remote tower. They could have been found immediately if rescue crews had a small mini-cell base station (for all cell technologies) that could be mounted in a regular airplane and flown over the area. People might even know to turn on their cell phone if they are conserving power if they heard a plane. (In a car with a car charger, you can leave the phone on.) As soon as the plane gets within a few miles (range is very good for sky-based antenna) you could just call and ask “where are you?” or, in the sad case where they can’t answer, find it with signal strength or direction finding. There are plans to build cell stations to be flown over disaster areas, but this would be just a simple unit able to handle just one call. It could be a good application for software radio, which is able to receive on all bands at once with simple equipment, at a high cost in power. No problem on a plane.

Speaking of rescue, I should describe one of my father’s inventions from the 70s. He designed a very simple “sight” to be placed on a mirror. First you got a mirror (or piece of foil) and punched a hole in it you could look through. In his fancy version, he had a tube connected to the mirror with wires, but it could be handheld. The tube itself had a smaller exit hole (like a washer glued to the end of a toilet paper cardboard tube.)

Anyway, you could look through the hole in your mirror, sight the searchplane through the washer in the cardboard tube and adjust the mirror so the back of the washer is illuminated by the sunlight from the mirror. Thus you could be sure you were flashing sunlight at the plane on a regular basis. He tried to sell the military on putting a folded mirror and sighting tube in soldiers’ rescue kits. You could probably do something with your finger in a pinch though: just put your finger next to the plane and move the mirror so your finger lights up. Kim didn’t think of it, but taking one of the mirrors off his car would have been a good idea as he left on his trek.

Can the big web sites save the political system

I’ve written before about how one of the greatest flaws in the modern political system is the immense need of candidates to raise money (largely for TV ads), which makes them beholden to contributors, combined with the enhanced ability incumbents have at raising that money. Talk to any member of congress and they will tell you they start raising money the day after the election.

Last year I proposed one radical idea, a special legitimizing of political spam done through the elections office. That will take some time as it requires a governmental change. So other factors are coming forward.

In some states and nations, efforts are already underway to have the government finance elections. The Presidential campaign fund that you contribute to whether you check the box on the tax return or not is one effort in this direction.

I propose that the operators of the big, advertising-supported web sites, in particular sites like Yahoo, Google, Microsoft, Myspace and the like, join together to create a program to give free web advertising to registered candidates on a fair basis. This could be done by simply providing unsold inventory, which is close to free, or it could be real valuable inventory, including credits for targeted ads.

Of course, not everybody reads the web all day, so this only reaches one segment of the population, but it reaches a lot. The main goal is to reduce the need, in the minds of candidates, to raise a lot of money for TV ads. They won’t stop entirely, but it might get scaled back.

Such a system would allow users the option of setting a cookie to provide preferences for the political ads they see. While each candidate would get one free shot, voters could opt out of ads for specific candidates or races. (In some cases the geography-matcher would get it wrong, and they would change the district the system thinks they are in.) They could also tone down the amount of advertising, or opt in or out of certain styles (flash, animated, text, video.)

It would be up to candidates to tune their message, and not overdo things or annoy voters, pushing them to opt out.

There can’t be too much opting out though, because the goal here is to deliver the same thing that candidates rely on TV for — pushing their message at voters who have not gone seeking it. If we don’t provide that, we’ll never cut the dependency on TV and other intrusive ads. Allowing these ads to be intrusive seems wrong, but the real thing to do is consider the competition, and what its thirst for money does to society. Thanks to the internet, we’ve reduced the price of advertising by an order of magnitude. If the price of advertising is what corrupts the political system, it seems we should have a shot at fixing the problem.

Ads would be served by the special consortium managing the opt-out system, not the candidate, in order to protect privacy. So if you click on an ad for a candidate, the first landing page is not hosted by the candidate, but may have links to their site.

A system would have to be devised to allocate “importance” to elections. I.e., how many ads do the candidates for President get vs. those for state comptroller?

One risk is that the IRS or other forces might try to declare this program a political contribution by the web sites. If applied fairly to all candidates, we’ll need a ruling that states it is not a contribution. This is needed, because otherwise sites will balk at the idea of running free ads for candidates they despise.

If the system got powerful enough, it could even make a bolder claim. It could offer the free advertising only to candidates who agree to spending limits in other media. On one hand this is just what most campaign finance reform programs do to avoid the 1st amendment. On the other hand, it may seem like an antitrust violation — deliberately giving stuff away not just to kill the “competition” but actually forbidding the candidates from spending too much with the competition.

This need not be limited to the web of course. Other media could join in, though the ones that already make a ton of money from political advertising (TV, radio) are not so likely to join.

This won’t solve the whole problem, but it could make a dent, and even a dent is pretty important in a problem as major as this.

Cameras (Canon) -- handle reversion from specialty settings better

My Canon cameras have a variety of ways you can change their settings to certain specialty ones. You can set a manual white balance. You can set an exposure compensation for regular exposures or flash (to make it dimmer or brighter than the camera calculates it should be.) You can change various shooting parameters (saturation etc.) and how the images will be stored (raw or not, large/medium/small etc.) You can of course switch (this time with a physical dial) from manual exposure to various automatic and semi-automatic exposure modes. On the P&S cameras you can disable or enable flash with such settings. You can change shooting modes (single-shot, multi-shot.) You can turn on bracketing of various functions.

And let’s face it, I bet all of you who have such cameras have found yourselves shooting by accident in a very wrong mode, not discovering it for quite some time. If you’re in a fast shooting mode, not looking at the screen, it can be easy to miss things like a manual white balance or even a small exposure compensation.

The camera already features an option to auto-revert on exposure bracketing, since they decided few would want to leave such a feature on full time. But auto bracketing isn’t dangerous; it just wastes a couple of shots that you can delete later. And it’s also very obvious when it’s on. Of all the things to consider auto-revert for, this was the least necessary.

To my mind, the thing I would like auto-revert on most of all is manual white balancing. I recently was shooting fast and furious in a plane, and learned after lots of shots that I still had the camera on an artificial-light balance setting from the night before. The camera can do a good job here because it can usually tell what the temperature of the ambient light is, and can notice that the balance is probably wrong. In addition, it can tell that lots of time has passed since the white balance was set manually. It really should have a good idea if it’s out in daylight or indoors, if it’s night or day.
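The heuristic is nothing fancy. A sketch, with made-up numbers for the thresholds a camera maker would have to tune:

```python
def stale_white_balance(manual_wb_kelvin, scene_estimate_kelvin,
                        hours_since_set, tolerance_kelvin=1500):
    """Warn when a manually set white balance disagrees badly with the
    camera's own estimate of the scene and was set a long time ago."""
    mismatch = abs(manual_wb_kelvin - scene_estimate_kelvin) > tolerance_kelvin
    old_setting = hours_since_set > 4          # rough "next session" proxy
    return mismatch and old_setting

# Tungsten balance (3200K) left on from last night, now in daylight (5500K):
print(stale_white_balance(3200, 5500, hours_since_set=12))  # True -> warn
```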

And I’m not even asking for an auto-revert here. Rather, an error beep which also pops a message on the screen that the white balance may be wrong. And yes, those who don’t want this feature can disable it. What would be cool, however, is if the screen that pops up to warn about a possibly bad retained setting let you say, then and there, “Thanks, revert” or “Don’t warn me about this again” or “Don’t warn me about this until the next ‘session.’” The camera knows about ‘sessions’ because it sees pauses in shooting with the camera off, and as noted, changes from night to day, indoors to out.

Of course it would still keep shooting. For extra credit, if it suspected something wrong, it could hold the image in RAW mode in its buffer memory, and if you ask to go to another setting that only changes the jpegs, it could actually redo the jpeg right.

Now of course, photographers often shoot in manual modes for a very good reason: they don’t want the camera’s automatic settings. But that doesn’t mean they can’t be reminded if, after a longish bout with the camera off, they are shooting in a way that’s very different from what the camera wants. That can include exposure. I’ve often left the camera in manual and then forgotten about it until I saw the review screen. (Of course P&S users almost always look at the review screen, so they don’t get this trouble.) Again, I want the camera to shoot when I tell it to, but to consider warning me, if I turn warnings on, that the image is totally overexposed or underexposed. At night it would take a more serious warning since in night shots there is often no “right” exposure to compare with.

A smart camera could even notice when you aren’t looking at the review screen, because you are shooting so fast. But like I said, those who want the old way could always turn such warnings off.

Another option would be an explicit button to say, “I’m going to make a bunch of specialty settings now. Please warn me if I don’t revert them at the next session.” This could extend even to warning you that you turned off autofocus. Review screens don’t show minor focus errors, so it would be nice to be reminded of this.

(I actually think an even better warning would be one where the camera beeps if nothing in the shot is in focus, as can easily happen with autofocus off. The camera can easily tell if there are no high contrast edges in the shot. Yes, there are a few scenes that have nothing sharp in them; I don’t mind the odd beep on those.)
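That edge test is a standard trick in image processing: the variance of the Laplacian of a frame is low when nothing in it is sharp. A sketch using OpenCV (the threshold would need tuning per camera, and the file name is made up):

```python
import cv2

def nothing_in_focus(image_path, threshold=50.0):
    """True when the frame has no high-contrast edges anywhere,
    i.e. no part of the shot appears to be in focus."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness < threshold

if nothing_in_focus("IMG_4411.jpg"):
    print("beep: no part of this shot appears sharp")
```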

Location aware phone to call a local expert

People are always looking for location aware services for their mobile devices, including local info. But frankly, the UIs on small mobile devices are often poor. When you are on a cell phone, voice to a smart person is often the interface you want.

So here’s a possible location aware service. Let people register as a “local expert” for various coordinates. That’s probably folks who live in a neighbourhood or know it very well. They would then, using a presence system on their own phone or computer, declare when they are available to take calls about that location.

Somebody sitting with a cell phone in a location could call a special 900-like number. Their phone could just transmit their location, or they would quickly say it to a human for entry. Then, their call would be routed to a local expert who is marked as available for calls. (In some cases it may simultaneously ring several experts of possible but unsure availability and give the call to whoever answers first.)
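The routing step is easy to picture in code. A sketch, with an invented expert record and a crude flat-earth distance that is fine at neighbourhood scale:

```python
import math

def pick_expert(caller_lat, caller_lon, experts):
    """Route the call to the nearest available expert, breaking
    distance ties in favour of the higher-rated one."""
    def dist(e):
        return math.hypot(e["lat"] - caller_lat,
                          (e["lon"] - caller_lon) * math.cos(math.radians(caller_lat)))
    ready = [e for e in experts if e["available"]]
    return min(ready, key=lambda e: (dist(e), -e["rating"])) if ready else None

experts = [
    {"name": "ann", "lat": 37.76, "lon": -122.42, "available": True,  "rating": 4.8},
    {"name": "bob", "lat": 37.80, "lon": -122.27, "available": False, "rating": 4.9},
]
print(pick_expert(37.77, -122.41, experts)["name"])   # ann gets the call
```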

Then they could, for a fee (perhaps $1/minute?) ask the expert questions.

  • “Where’s the best Thai food?”
  • “How do I get transit to such and such location?”
  • “What’s a good Taxi company to call? Can you call me one?”
  • “Is there a shop around here that sells widgets?”
  • “Is this museum worth it?”
  • “What parts of the area are dangerous?”
  • “How much is real estate here?”

The expert would be expected to know how to answer questions about most of the restaurants, bars and shops. And they could also — so long as they disclosed any kickbacks very clearly — provide coupon codes to people that would rebate the cost of the call.

At the end of any call, the caller would stay on the line and be asked to rate the quality of the expert. They could also rate later. Experts would gain reputations for their skill, and the ones with the highest ratings would be given more calls, or be able to charge more.

Charging could be per minute, fixed-rate, or as noted, rebated with validation from a recommended merchant (though I would want to design a system so that advice is never biased by this.)

This could also be done by texting, which would be easier for experts to do, and probably be cheaper, but of course is slower for the mobile user. Many mobile users are getting pretty good at their texting. The experts would presumably be at computers with IM clients, but they could be at mobile phones as well.

To make this cheaper, one could arrange for trading minutes. Which is to say, if you put minutes into the system advising others, you can in turn use minutes getting advice when you need it. Some people might prefer to do this in a friendly way rather than charge or pay.

Experts could very well be just around the corner, physically, if they are being an expert on their local neighbourhood. It’s not out of the question they could then agree to help in person. In this case you would need to have some way to certify they’re not up to something nefarious. The fact that the call is logged and you know the home address of the expert in the database should be enough. The client might be up to something nefarious, but this seems a pretty low risk.

On your web page, give a different customer service number that knows I've been to the web site

When you call most companies today, you get a complex “IVR” (menu with speech or touch-tone commands.) In many cases the IVR offers you a variety of customer service functions which can be done far more easily on the web site. And indeed, the prompts usually tell you to visit the web site to do such things.

However, have we all not shouted, “I am already at your damned web site, I would not be calling you to do those things!”

And they should know this. So if you’re on the web site, and you’ve done more than just click on the “Contact Us” tab, then when you finally do click on the tab asking for a phone number, you should not get the same phone number that is given to newcomers or printed in non-web locations.

You should get a special phone number that says, “This customer is already on the web site. Don’t bother offering things that can be done far more easily on the web site.”

Now I understand why they offer these things. Agents cost money and they want to divert customers to automated systems if at all possible. But if I’m already at the automated system, I am usually calling for just a few reasons. Perhaps I want web site support, but I probably need an agent to do something that’s hard or impossible to do on the web site. Why frustrate me?

Of course, even better is if you have an eCRM system that integrates the call center and the web experience. Many companies now have a click-to-call link on their page. Some even connect you with an agent who has your information already from the history on the web site, but this is annoyingly rare. All this stuff is expensive and involves buying new tools and fancy reprogramming. What I propose is pretty trivial — a much simpler menu gated by the phone number the person came in on. Any IVR can do that with a small amount of work.

Now I see one hole. The “Gets to an agent fast” number might of course be spread around, and people would want to use it for all their calls, defeating (to the company) the purpose of all those menus. But today, numbers are cheap. You can get a block of 100 numbers and change the magic one every day. Or, with a little bit of programming, really not that much, you can have the web site tell the true web-sourced callers “Dial extension xxxx when you get connected.” That’s a little fancier, requires the IVR be programmed to know about a changing extension, but again it’s not nearly so hard as buying a whole eCRM system.
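The changing extension doesn’t even need a database; the web site and the IVR can derive it independently from the date and one shared secret. A sketch (the secret and the 4-digit format are my assumptions):

```python
import hashlib
import hmac
from datetime import date

SECRET = b"shared-between-web-site-and-IVR"   # hypothetical shared key

def todays_extension(day=None):
    """Derive today's 'I came from the web site' extension; both sides
    compute the same value with no coordination beyond the secret."""
    msg = (day or date.today()).isoformat().encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{int(digest, 16) % 10000:04d}"

print("Dial extension", todays_extension(), "when you get connected.")
```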

I know that companies don’t want to frustrate their customers; they think the IVRs are saving them enough money to offset the frustration. But in this case, they are costing money, as the person wastes time listening to a pointless IVR. Let’s stop it!

RSS aggregator to pull threads from multiple intertwined blogs

It’s common in the blogosphere for bloggers to comment on the posts of other bloggers. Sometimes blogs show trackbacks to let you see those comments with a posting. (I turned this off due to trackback spam.) In some cases we effectively get a thread, as might appear in a message board/email/USENET, but the individual components of the thread are all on the individual blogs.

So now we need an RSS aggregator to rebuild these posts into a thread one can see and navigate. It’s a little more complex than threading in USENET, because messages can have more than one parent (i.e. link to more than one post) and may not link directly at all. In addition, timestamps only give partial clues as to position in a thread, since many people read from aggregators and may not have read a message that was posted an hour ago in their “thread.”
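A sketch of the core threading step, assuming the aggregator has already fetched each post and extracted its outgoing links (the data layout here is invented):

```python
from collections import defaultdict

def build_thread(posts):
    """posts: {url: {"links": [urls]}}. Every link to another post in
    the set becomes a parent edge; a post may have several parents,
    so the result is a DAG rather than a USENET-style tree."""
    children = defaultdict(list)
    roots = []
    for url, post in posts.items():
        parents = [p for p in post["links"] if p in posts]
        for p in parents:
            children[p].append(url)
        if not parents:
            roots.append(url)
    return roots, children

posts = {
    "a.com/1": {"links": []},
    "b.com/9": {"links": ["a.com/1"]},
    "c.com/3": {"links": ["a.com/1", "b.com/9"]},   # two parents
}
roots, children = build_thread(posts)
print(roots)                 # ['a.com/1']
print(children["a.com/1"])   # both replies hang off the original post
```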

At a minimum, existing aggregators (like Bloglines) could spot sub-threads existing entirely among your subscribed feeds, and present those postings to you. You could also define feeds which are unsubscribed but which you wish to see or be informed of postings from in the event of a thread. (Or you might have a block-list of feeds you don’t want to see contributions from.) They could just have a little link saying, “There’s a thread including posts from other blogs on this message” which you could expand, and that would mark those items as read when you came to the other blog.

Blog search tools, like Technorati, could also spot these threads, and present a typical thread interface for perusing them. Both readers and bloggers would be interested in knowing how deep the threads go.

Please don't videoblog (vlog)

At the blogger panel at Fall VON (repurposed to be both video on the net as well as voice) Vlogger and blip.tv advocate Dina Kaplan asked bloggers to start vlogging. It’s started a minor debate.

My take? Please don’t.

I’ve written before on what I call the reader-friendly vs. writer-friendly dichotomy. My thesis is that media make choices about where to be on that spectrum, though ideal technology reduces the compromises. If you want to encourage participation, as in Wikis, you go for writer friendly. If you have one writer and a million readers, like the New York Times, you pay the writer to work hard to make it as reader friendly as possible.

When video is professionally produced and tightly edited, it can be reader (viewer) friendly. In particular if the video is indeed visual. Footage of tanks rolling into a town can convey powerful thoughts quickly.

But talking head audio and video has an immediate disadvantage. I can read material ten times faster than I can listen to it. At least with podcasts you can listen to them while jogging or moving where you can’t do anything else, but video has to be watched. If you’re just going to say your message, you’re putting quite a burden on me to force me to take 10 times as long to consume it — and usually not be able to search it, or quickly move around within it or scan it as I can with text.

So you must overcome that burden. And most videologs don’t. It’s not impossible to do, but it’s hard. Yes, video allows better expression of emotion. Yes, it lets me learn more about the person as well as the message. (Though that is often mostly for the ego of the presenter, not for me.)

Recording audio is easier than writing well. It’s writer friendly. Video has the same attribute if done at a basic level, though good video requires some serious work. Good audio requires real work too — there’s quite a difference between “This American Life” and a typical podcast.

Indeed, there is already so much pro quality audio out there like This American Life that I don’t have time to listen to the worthwhile stuff, which makes it harder to get my attention with ordinary podcasts. Ditto for video.

There is one potential technological answer to some of these questions. Anybody doing an audio or video cast should provide a transcript. That’s writer-unfriendly but very reader friendly. Let me decide how I want to consume it. Let me mix and match by clicking on the transcript and going right to the video snippet.

With the right tools, this could be easy for the vlogger to do. Vlogger/podcaster tools should all come with trained speech recognition software which can reliably transcribe the host, and with a little bit of work, even the guest. Then a little writer-work to clean up the transcript and add notes about things shown but not spoken. Now we have something truly friendly for the reader. In fact, speaker-independent speech recognition is starting to get almost good enough for this, but it’s still clearly best to have the producer make the transcript. Even if the transcript is full of recognition errors, at least I can search it and quickly click to the good parts, or hear the mis-transcribed words.
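The linking part is the easy bit, assuming the recognizer reports a start time for each word, as many can. A sketch that emits a transcript where every word jumps to its spot in the recording via a standard “#t=” media fragment (the file name is invented):

```python
def transcript_html(words, media_url):
    """words: (word, start_seconds) pairs from the recognizer. Emit a
    transcript where each word links to its moment in the recording."""
    parts = [f'<a href="{media_url}#t={t:.1f}">{w}</a>' for w, t in words]
    return "<p>" + " ".join(parts) + "</p>"

words = [("welcome", 0.0), ("to", 0.4), ("the", 0.5), ("show", 0.7)]
print(transcript_html(words, "episode42.ogv"))
```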

If you’re making podcaster/vlogger tools, this is the direction to go. In addition, it’s absolutely the right thing for the hearing or vision impaired.

Better handling of reading news/blogs after being away

I’m back from Burning Man (and Worldcon), and though we had a decently successful internet connection there this time, you don’t want to spend time at Burning Man reading the web. This presents an instance of one of the oldest problems in the “serial” part of the online world: how do you deal with the huge backup of stuff to read from tools that expect you to read regularly?

You get backlogs of your E-mail of course, and your mailing lists. You get them for mainstream news, and for blogs. For your newsgroups and other things. I’ve faced this problem for almost 25 years as the net gave me more and more things I read on a very regular basis.

When I was running ClariNet, my long-term goal list always included a system that would attempt to judge the importance of a story as well as its topic areas. I had two goals in mind for this. First, you could tune how much news you wanted about a particular topic in ordinary reading. By setting how important each topic was to you, a dot-product of your own priorities and the importance ratings of the stories would bring to the top the news most important to you. Secondly, the system would know how long it had been since you last read news, and could dial down the volume to show you only the most important items from the time you were away. News could also simply be presented in importance order, and you could read until you got bored.
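That dot-product is one line of code. A sketch with invented topic weights:

```python
def personal_score(story_topics, my_priorities):
    """Each story carries editor-assigned importance per topic; each
    reader keeps a priority weight per topic. Rank by the dot product."""
    return sum(importance * my_priorities.get(topic, 0.0)
               for topic, importance in story_topics.items())

my_priorities = {"tech": 1.0, "politics": 0.3, "sports": 0.0}
stories = {
    "Chip recall":      {"tech": 0.9, "politics": 0.1},
    "Election upset":   {"politics": 0.9},
    "Cup final result": {"sports": 1.0},
}
for title in sorted(stories, key=lambda t: -personal_score(stories[t], my_priorities)):
    print(title, round(personal_score(stories[title], my_priorities), 2))
```

Dialing down the volume after time away is then just a matter of raising the score cutoff the longer you have been gone.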

There are options to do this for non-news, where professional editors would rank stories. One advantage you get when items (be they blog posts or news) get old is you have the chance to gather data on reading habits. You can tell which stories are most clicked on (though not as easily with full RSS feeds) and also which items get the most comments. Asking users to rate items is usually not very productive. Some of these techniques (like using web bugs to track readership) could be privacy invading, but they could be done through random sampling.

I propose, however, that one way or another, popular high-volume sites will need to find some way to prioritize their items for people who have been away a long time, and regularly publish these figures in their RSS feed or other database. Then readers who notice there are hundreds or even thousands of stories waiting can sort using such data or, in its absence, just switch to headlines.

It’s also possible for an independent service to help here. Already several toolbars like Alexa and Google’s track net ratings, and get measurements of net traffic to help identify the most popular sites and pages on the web. They could adapt this information to give you a way to get a handle on the most important items you missed while away for a long period.

For E-mail, there is less hope. There have been efforts to prioritize non-list e-mail, mostly around spam, but people are afraid any real mail actually sent to them has to be read, even if there are 1,000 of them as there can be after two weeks away.

Anti-Phishing -- warn if I send a password somewhere I've never sent it

There are many proposals out there for tools to stop Phishing. Web sites that display a custom photo you provide. “Pet names” given to web sites so you can confirm you’re where you were before.

I think we have a good chunk of one anti-phishing technique already in place with the browser password vaults. Now I don’t store my most important passwords (bank, etc.) in my password vault, but I do store most medium importance ones there (accounts at various billing entities etc.) I just use a simple common password for web boards, blogs and other places where the damage from compromise is nil to minimal.

So when I go to such a site, I expect the password vault to fill in the password. If it doesn’t, that’s a big warning flag for me. And so I can’t easily be phished for those sites. Even skilled people can be fooled by clever phishes. For example, a test phish to bankofthevvest.com (two “v”s instead of a “w”, which looks identical in many fonts) fooled even skilled users who check the SSL lock icon, etc.

The browser should store passwords in the vault, and even the “don’t store this” passwords should have a hash stored in the vault unless I really want to turn that off. Then, the browser should detect if I ever type a string into any box which matches the hash of one of my passwords. If my password for bankofthewest is “secretword” and I use it on bankofthewest.com, no problem. “secretword” isn’t stored in my password vault, but the hash of it is. If I ever type in “secretword” to any other site at all, I should get an alert. If it really is another site of the bank, I will examine that and confirm to send the password. Hopefully I’ll do a good job of examining — it’s still possible I’ll be fooled by bankofthevvest.com, but other tricks won’t fool me.
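Here’s roughly what that vault logic could look like, as a sketch. A real browser would generate the salt per user and handle many passwords; the class below is hypothetical:

```python
import hashlib

class PasswordWatch:
    """Keep only salted hashes of passwords, plus the site each was
    first used on; flag any later use of that password elsewhere."""
    def __init__(self):
        self.salt = b"per-user-random-salt"    # generated once per user
        self.first_site = {}                   # hash -> originating site

    def _h(self, password):
        return hashlib.sha256(self.salt + password.encode()).hexdigest()

    def check(self, site, password):
        h = self._h(password)
        origin = self.first_site.setdefault(h, site)
        if origin == site:
            return None                        # normal use, no warning
        return f"'{site}' just asked for your {origin} password!"

w = PasswordWatch()
w.check("bankofthewest.com", "secretword")          # first use: fine
print(w.check("bankofthevvest.com", "secretword"))  # phish warning fires
```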

The key needs in any system like this are that it warns you of a phish, and that it rarely gives you a false warning. The latter is hard to do, but this comes decently close. However, since I suspect most people are like me and have a common password they use again and again at “who-cares” sites, we don’t want to be warned all the time. The second time we use that password, we’ll get a warning, and we need a box to say, “Don’t warn me about re-use of this password.”

Read on for subtleties…

Switching to popular vote from electoral college

A proposal by a Stanford CS Prof for a means to switch the U.S. Presidential race from electoral college to popular vote is gaining some momentum. In short, the proposal calls for some group of states representing a majority of the electoral college to agree to an inter-state compact that they will vote their electoral votes according to the result of the popular vote.

State compacts are like treaties but are enforceable by both state courts and federal law, so this has some merit. In addition, you actually don’t even need to get 270 electoral votes in the compact. All you really need is a much smaller number of “balanced” states. For example perhaps 60 typically Republican electoral votes and 60 typically Democratic electoral votes. Maybe even less. For example I think a compact with MA, IL, MN (42 Dem) and IN, AL, OK, UT, ID, KS (42 Rep) might well be enough, certainly to start. Not that it hurts if CA, NY or TX join.

That’s because normally the electoral college already follows the popular vote. If it’s not going to, the race is very close, and a fairly small number of states in the compact would be assured to swing the electoral college to the popular vote in that case. There are a few exceptions I’ll talk about below, but largely this would work.

This is unlike proposals for states to, on their own, do things like allocate their electors based on popular vote within the state, as Maine does. Such proposals don’t gain traction because there is generally going to be somebody powerful in the state who loses under such a new rule. In a state solidly behind one party, they would be fools to effectively give electoral votes to the minority party. In a balanced state, they would be giving up their coveted “swing state” status, which causes presidential candidates to give them all the attention and election-year gifts.

Even if, somehow, many states decided to switch to a proportional college, it is an unstable situation. Suddenly, any one state that is biased towards one party (both in state government and electoral college history) is highly motivated to put their candidate over the top by switching back to winner-takes-all.

There’s merit in the popular-vote-compact because it can be joined by “safe” states, so long as a similar number of safe votes from the other side join up. The safe states resent the electoral college system, it gets them ignored. Since close races are typically decided by a single mid-sized state, even a very small compact could be surprisingly effective — just 3 or 4 states!

The current “swing state” set is AZ, AR, CO, FL, IA, ME, MI, MN, MO, NV, NH, NM, NC, OH, OR, PA, VA, WA, WV, and WI, though of course this set changes over time. However, once states commit to a compact, they will be stuck with it, even if it goes against their interests down the road.

The one thing that interferes with the small-compact is that even the giant states like New York, Texas and California can become swing states if the “other” party runs a native candidate. California in particular. (In 1984 Mondale won only Minnesota, and even there got just under 50% of the vote. Anything can happen.) That’s why you don’t just get an “instant effective compact” from just 3 states like California matching Texas and Indiana. But there are small sets that probably would work.

Also, a tiny compact such as I propose would not undo the “campaign only in swing states” system so easily. A candidate who worked only on swing states (and won them) could outdo the extra margin now needed because of the compact. In theory. If the compact grew (with non-swing states, annoyed at this, joining it) this would eventually fade.

Of course the next question may surprise you. Is it a good idea to switch from the electoral college system? Four times the winner of the popular vote has lost the White House. Strangely, three of those were the three times the eventual winner was the son (GWB, Adams) or grandson (Harrison) of a President. The framers of the constitution, while they did not envision the two party system we see today, intended for the winner of the popular vote to be able to lose the electoral college.

When they designed the system, they wanted to protect against the idea of a “regional” president. A regional winner would be a candidate with extreme popularity in some small geographic region. Imagine a candidate able to take 90% of the vote in their home region, that region being 1/3 of the population. Imagine them being less popular in the other 2/3 of the country, only getting 31% of the vote there. This candidate wins the popular vote (90% × 1/3 + 31% × 2/3 ≈ 50.7%), but would lose the electoral college (quite solidly). Real examples would not be so simple. The framers did not want a candidate who really represented only a small portion of the country in power. They wanted to require that a candidate have some level of national support.

The Civil War provides an example of the setting for such extreme conditions. In that sort of schism, it’s easy to imagine one region rallying around a candidate very strongly, while the rest of the nation remains unsure.

Do we reach their goal today? Perhaps not. However, we must take care before we abandon their goal to make sure it’s what we want to do.

Update: See the comments for discussion of ties. Also, I failed to discuss another important issue to me, that of 3rd parties. The electoral debacle of 2000 hurt 3rd parties a lot, with a major “Ralph don’t run” campaign that told 3rd parties, “don’t you dare run if you could actually make a difference.” A national popular vote would continue, and possibly strengthen the bias against 3rd parties. Some 3rd parties have been proposing what they call a “safe state” strategy, where they tell voters to only vote for their presidential candidate in the safe states. This allows them to demonstrate how much support they are getting (and with luck the press reports their safe-state percentage rather than national percentage) without spoiling or being accused of spoiling.

Of course, I think the answer for that would be a preferential ballot, which would have to be done on a state by state basis, and might not mesh well with the compact under discussion.

Remaining neutral on network neutrality -- it's the monopoly, stupid

People ask me about the EFF endorsing some of the network neutrality laws proposed in congress. I, and the EFF are big supporters of an open, neutral end-to-end network design. It’s the right way to build the internet, and has given us much of what we have. So why haven’t I endorsed coding it into law?

If you’ve followed closely, you’ve seen very different opinions from EFF board members. Dave Farber has been one of the biggest (non-business) opponents of the laws. Larry Lessig has been a major supporter. Both smart men with a good understanding of the issues.

I haven’t supported the laws personally because I’m very wary of encoding rules of internet operation into law. Just about every other time we’ve seen this attempted, it’s ended badly. And that’s even without considering the telephone companies’ tremendous experience and success in lobbying and manipulating the law. They’re much, much better at it than any of the other players involved, and their track record is to win: not every time, but most of the time. Remember the past neutrality rules that forced them to resell their copper to CLECs so there could be competition in the DSL space? That ended well, didn’t it?

Read on…  read more »

End ringtones -- bluetooth "personal vibrator" watch.

No, not the sexual kind of personal vibrator. Today we regularly hear reminders to put phones on vibrate, and they are often ignored. The world is rapidly becoming swamped with loud, deliberately distracting cell phone ringtones. (The ringtones themselves are a business.)

I remember visiting Hong Kong 10 years ago, when a business lunch in a crowded restaurant was a serious cacophony of pagers. They were going off every few seconds, and this was acceptable there. I don't know how much worse it has gotten. I was on the train today, and since that's a place people actually expect to take calls, ringing was quite regular.

Perhaps it's time to declare that cell phones should no longer ring at all except in certain special circumstances, and that the very idea of a ringer should be viewed as rude, pointless and in fact an invasion of your own privacy. Why should the world know you are getting a call?

To make this happen, I propose bluetooth based personal devices to be worn on the body. The most obvious one would be your watch. However, bluetooth based vibrating devices could be placed in glasses, belts, shoes, shirt collars or wallets: anything the always-available crowd wears on their body. Shoes and belts have the most potential for long battery life. Yes, you would have to charge your device about once a week.

The vibrators would have a temperature transducer to know if they are indeed on the body. If the device goes cold, a slowly rising ring could be issued from the device or the phone. The phone could also ring if the vibrating device is off or not connected to the phone, or if the phone detects it is in a private car and plugged into car power, though frankly by this time we should all have cars with bluetooth handsfree anyway.
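A minimal sketch of that decision logic (the function and flag names are mine, purely to illustrate):

    # Decide how to announce an incoming call, given the wearable's state.
    def choose_alert(device_connected, device_on_body, on_car_power):
        if device_connected and device_on_body:  # temperature transducer reads warm
            return "vibrate the wearable"
        if on_car_power:                         # private car, nobody else to disturb
            return "normal ring"
        return "slowly rising ring"              # device absent, off, or gone cold

    print(choose_alert(True, False, False))      # -> slowly rising ring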

The phone itself, using temperature and other metrics, can also figure out if it is in a pocket, though this works mostly for men. Women tend to keep phones in purses.

Next step -- your cell phone should warn you when you are yelling. It knows whether it is getting a good audio signal from you compared to the ambient noise. As you probably know, people tend to talk loudly on cell phones when they are having trouble hearing the other party. Your phone should notice this and give you some subtle "be quieter" tones. If you are using a headset, the phone display could run a VU meter as a constant reminder.
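A sketch of the check, assuming the phone can estimate your voice level and the ambient level between your utterances (the 25 dB margin is a guess, not a measured figure):

    # Flag likely yelling: voice level far above the ambient noise floor.
    def is_yelling(voice_db, ambient_db, margin_db=25):
        return voice_db - ambient_db > margin_db

    if is_yelling(voice_db=78, ambient_db=45):
        print("play subtle 'be quieter' tones")  # or drive the on-screen VU meter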

(Unfortunately most phones today shut down the backlight and even the processor in the phone during a call to save power, making this harder.)

Here's to a more peaceful public world.

A multi power supply for your desk from a PC power supply

I’ve blogged several times before about my desire for universal DC power — ideally with smart power, but even standardized power supplies would be a start.

However, here’s a way to get partway there, cheap. PC power supplies are really cheap, fairly good, and very, very powerful. They put out lots of voltages. Most of the power is at +5V, +12V and now +3.3V. Some power is also available at -5V and -12V in many of them. The positive voltages can supply as much as 30 to 40 amps! The -5 and -12 are typically lower power, 300 to 500mA, though sometimes more.

So what I want somebody to build is a cheap adapter kit (or a series of them) that plugs into the standard Molex connectors of PC power supplies, and then splits out into banks at various voltages, using the simple dual-pin plugs found in Radio Shack’s universal power supplies with changeable tips. USB jacks at +5V, with power but no data, would also be available, because USB is becoming the closest thing we have to a universal power plug.

There would be two forms of this kit. One form would be meant to be plugged into a running PC, with a thick wire running out a hole or slot to a power console. This would allow powering devices that you don’t mind (or even desire) turning off when the PC is off: network hubs, USB hubs, perhaps even phone and battery chargers. It would not have access to the +3.3V directly, as the hard drive Molex connector normally gives just the +5 and +12, with plenty of power.
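For reference, here is that 4-pin peripheral connector as I remember it (worth verifying against your own supply before wiring anything):

    # Standard 4-pin peripheral ("Molex") pinout, from memory:
    MOLEX_PINOUT = {
        1: ("yellow", "+12V"),
        2: ("black", "ground"),
        3: ("black", "ground"),
        4: ("red", "+5V"),
    }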

A second form of the kit would be intended to get its own power supply, and might come in its own box. These supplies are cheap, and anybody with an old PC has one lying around free, too. Ideally use one with a variable speed fan: since you’re not going to use more than a fraction of this supply’s capacity, it won’t get very hot, and you might even be able to kill the fan to keep it quiet under low use. This kit would need a switch to turn the supply on, of course, as modern ATX supplies only go on under simple motherboard control. (If I recall the pinout correctly, that control is just the green PS_ON wire; the switch shorts it to any black ground wire.)

Now, with the full set of voltages, it should be noted you can also get +7V (between +5 and +12), 8.7V (call it 9, between +3.3 and +12) and 1.7V (between +3.3 and +5, probably not that useful), and at lower currents, 10V (-5 to +5), 17V (-5 to +12; too bad that’s low current, as a lot of laptops like this), 24V (-12 to +12), 8.3V (-5 to +3.3) and 15.3V (-12 to +3.3).
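If you want to double-check that list, every derived voltage is just the difference between a pair of rails; a quick sketch:

    # Enumerate the voltages available between pairs of ATX rails.
    from itertools import combinations

    rails = [3.3, 5, 12, -5, -12]
    for a, b in combinations(rails, 2):
        print(f"{abs(a - b):4.1f}V  (between {a:+}V and {b:+}V)")
    # Prints 1.7, 8.7, 8.3, 15.3, 7.0, 10.0, 17.0, 17.0, 24.0 and 7.0:
    # note 7V and 17V each appear twice (17V is both -5/+12 and -12/+5).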

On top of that, you can use voltage regulators to produce the other popular voltages, in particular 6V from 7, 9V from 12, and so on. Special tips would be sold to do this. This is a little bit wasteful, but super-cheap and quite common.
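The waste is easy to quantify, since a linear regulator drops the extra voltage as heat. A quick sketch, with currents picked only for illustration:

    # A linear regulator dissipates P = (Vin - Vout) * I as heat.
    def regulator_waste_watts(v_in, v_out, amps):
        return (v_in - v_out) * amps

    print(regulator_waste_watts(12, 9, 0.5))  # 1.5 W lost making 9V at 500mA from 12V
    print(regulator_waste_watts(7, 6, 0.3))   # 0.3 W making 6V from the 7V combination

One caveat: classic 78xx three-terminal regulators need roughly 2V of headroom, so making 6V from the 7V combination would likely take a low-dropout part.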

Anyway, the point is, you would get a single box into which you could plug almost all your DC devices, and it would be cheap-cheap-cheap, because of the low price of PC supplies. About the only popular things you can’t plug in are the 16V and 22V laptops, which require 4 amps or so; 12V laptops of course would do fine. At the main popular voltages you would have more current than you could ever use; in fact, fuses might be in order. Ideally you could have splitters, so if you have a small array of boxes close together you can get simple wiring.

Finally, somebody should just sell nice boxes with all of this built in, since the parts for PC power supplies are dirt cheap, the boxes would be easy to make, and one box would replace almost all your power supplies. Tips for common cell phone chargers (voltage regulators can do the job here, as the currents are so small) as well as battery chargers could come with the kit. (These are already commonly available, in many cases running from the USB jack, which should be provided.) And throw in special plugs for external USB hard drives (which want 12V and 5V, just like the internal drives).

There is a downside: if the power supply fails, everything is off, so you may want to keep the old supplies in storage. Some day I envision that devices just won’t come with power supplies; you will be expected to have a box like this unless the power need is very odd. If you start drawing serious amperage the fan will need to go on and you might hear it, but it should be pretty quiet in the better power supplies.
