Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.

This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see menu on right) and some are sub blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.

Digital cameras should have built-in tagging

So many people today are using tags to organize photos and to upload them to sites like flickr for people to search. Most tagging is easiest to do on a computer, but certain kinds of tags would make sense to add to photos right in the camera, as the photos are taken.

For example, if you take a camera to an event, you will probably tag all the photos at the event with a tag for the event. A menu item to turn on such a tag would be handy. If you are always taking pictures of your family or close friends, you could have tags for them preprogrammed to make it easy to add right on the camera, or afterwards during picture review. (Of course the use of facial recognition and GPS and other information is even better.)

Tags from a limited vocabulary can also be set with limited-vocabulary speech recognition, which cameras have the CPU and memory to do. Thus, taking a picture of a group of friends, you could say their names right as you took the picture and have it tagged.

Of course, entering text on a camera is painful. You don’t want to compose a tag by using arrow buttons to pick letters from an on-screen keyboard or alphabet. Some tags would be defined when the camera is connected to the computer (or written to the flash card in a magic file from the computer). You would get menus of those tags. For a new tag, one would just select something like “New tag 5” from the menu, and later have an interface to rename the tag to something meaningful.
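As a sketch of the computer-side half of this, here is what writing and later renaming such a tag menu might look like. The file name, format and mount path are all invented for illustration; a real camera maker would define its own.

```python
import json
import os

# Hypothetical "magic file" a desktop tool writes to the flash card so the
# camera can build its tag menu; the name and format are invented here.
MENU_FILE = "TAGMENU.JSON"

def write_tag_menu(card_path, tags):
    with open(os.path.join(card_path, MENU_FILE), "w") as f:
        json.dump({"tags": tags}, f, indent=2)

def rename_placeholder(card_path, placeholder, real_name):
    """Back on the computer, rename a tag like 'New tag 5' to something meaningful."""
    path = os.path.join(card_path, MENU_FILE)
    with open(path) as f:
        menu = json.load(f)
    menu["tags"] = [real_name if t == placeholder else t for t in menu["tags"]]
    with open(path, "w") as f:
        json.dump(menu, f, indent=2)

card = "/media/flashcard"   # wherever the card happens to be mounted
write_tag_menu(card, ["family", "vacation", "New tag 5"])
rename_placeholder(card, "New tag 5", "birthday party")
```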

As a cute interface, tag names could also be assigned with pictures. Write or print the tag name clearly on paper and take a picture of it in “new tag” mode. While one could imagine OCR here, you don’t actually need it, since it doesn’t matter at first whether the OCR is perfect. Just display the cropped image of the text in the menus of tags. Convert them to text (via OCR or human typing) when you get to a computer. You can also record sound associations for such tags, or for generic tags.

Cameras have had the ability to record audio with pictures for a while, but listening to all that audio to transcribe it takes effort. Trained speech recognition would be great here, but in fact all we really have to identify is when the same word or phrase appears as a tag on several photos, and then have the person type what they said just once to automatically tag all the photos where the word was said. If the speech interface is done right, menu use would be minimal and might not even be needed.
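Here is a rough sketch of that matching step. The audio_similarity(), ask_user() and tag_photo() functions are placeholders for whatever matching and tagging machinery the camera or desktop software actually provides; the point is just the grouping.

```python
# Hypothetical sketch: cluster the short voice notes attached to photos, so the
# user only has to type the text for each cluster once.

def cluster_voice_tags(photos, audio_similarity, threshold=0.8):
    """photos: list of (photo_id, audio_clip). Returns clusters of photo ids."""
    clusters = []   # each cluster: {"sample": clip, "photos": [ids]}
    for photo_id, clip in photos:
        for cluster in clusters:
            if audio_similarity(clip, cluster["sample"]) >= threshold:
                cluster["photos"].append(photo_id)
                break
        else:
            clusters.append({"sample": clip, "photos": [photo_id]})
    return clusters

def apply_typed_tags(clusters, ask_user, tag_photo):
    """Later, on the desktop: the user types each cluster's word once."""
    for cluster in clusters:
        tag = ask_user(cluster["sample"])      # e.g. play the clip, user types "Alice"
        for photo_id in cluster["photos"]:
            tag_photo(photo_id, tag)

# Tiny demo with a trivial "similarity": identical strings stand in for clips.
clusters = cluster_voice_tags(
    [(1, "alice"), (2, "bob"), (3, "alice")],
    audio_similarity=lambda a, b: 1.0 if a == b else 0.0)
print([c["photos"] for c in clusters])   # [[1, 3], [2]]
```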

Updating the Turing Test

Alan Turing proposed a simple test for machine intelligence. Based on a parlour game where players try to tell if a hidden person is a man or a woman just by passing notes, he suggested we define a computer as intelligent if people can’t tell it from a human being through conversations with both over a teletype.

While this seemed like a great test (for those who accept that external equivalence is sufficient), in fact, to the surprise of many people, computers passed it long ago with ordinary, untrained examiners. Today there is an implicit extension of the test: the computer must be able to fool a trained examiner, typically an AI researcher or an expert in brain sciences, or both.

I am going to propose updating it further, in two steps. Turing proposed his test perhaps because at the time, computer speech synthesis did not exist, and video was in the distant future. He probably didn’t imagine that we would solve the problems of speech well before we got a handle on actual thought. Today a computer can, with a bit of care in programming inflections and such into the speech, sound very much like a human, and we’re much closer to making that perfect than we are to getting a Turing-level intelligence. Speech recognition is a bit behind, but also getting closer.

So my first updated proposal is to cast aside the teletype and make it a phone conversation. It must be impossible to tell the computer from another human over the phone or an even higher-fidelity audio channel.

The second update is to add video. We’re not as far along here, but again we see more progress, both in the generation of digital images of people, and in video processing for object recognition, face-reading and the like. The next stage requires the computer to be impossible to tell from a human in a high-fidelity video call. Perhaps with 3-D goggles it might even be a 3-D virtual reality experience.

A third potential update is further away, requiring a fully realistic android body. In this case, however, we don’t wish to constrain the designers too much, so the tester would probably not get to touch the body, or weigh it, or test if it can eat, or stay away from a charging station for days etc. What we’re testing here is the being’s “presence” — fluidity of motion, body language and so on. I’m not sure we need this test as we can do these things in the high fidelity video call too.

Why these updates, which may appear to depart from the “purity” of the text conversation? For one, things like body language, nuance of voice and facial patterns are a large part of human communication and intelligence, so to truly accept that we have a being of human-level intelligence we would want to include them.

Secondly, however, passing this test is far more convincing to the general public. While the public is not very sophisticated and thus can even be fooled by an instant messaging chatbot, the feeling of equivalence will be much stronger when more senses are involved. I believe, for example, that it takes a much more sophisticated AI to trick even an unskilled human if presented through video, and not simply because of the problems of rendering realistic video. It’s because these communications channels are important, and in some cases felt more than they are examined. The public will understand this form of Turing test better, and more will accept the consequences of declaring that a being has passed it — which might include giving it rights, for example.

Though yes, the final test should still require a skilled tester.

The giant security hole in auto-updating software

It’s more and more common today to see software that is capable of easily or automatically updating itself to a new version. Sometimes the user must confirm the update; in other cases it is fully automatic, or manual but non-optional (i.e. the old version won’t work any more). This seems like a valuable feature for fixing security problems as well as bugs.

But rarely do we talk about what a giant hole this is in general computer security. On most computers, programs you run have access to a great deal of the machine, and in the case of Windows, often all of it. Many of these applications are used by millions and in some cases even hundreds of millions of users.

When you install software on almost any machine, you’re trusting the software and the company that made it, and the channel by which you got it — at the time you install. When you have auto-updating software, you’re trusting them on an ongoing basis. It’s really like leaving a copy of the keys to your office at the software vendor, hoping they won’t do anything bad with them, and hoping that nobody untrusted will get at those keys and do something bad with them.

Internet oriented supper club

At various times I have been part of dinner groups that meet once a month or once a week at either the same restaurant or a different restaurant every time. There’s usually no special arrangement, but it’s usually good for the restaurant since they get a big crowd on a slow night.

I think there could be ways to make it better for the restaurant as well as the diners — and the rest of the web to boot. I’m imagining an application that coordinates these dinners with diners and the restaurants. The restaurants (especially newer ones) would offer some incentives to the diners, plus some kickback to the web site for organizing it. As part of the deal, the diners would agree to fairly review the restaurant — at first on public restaurant review sites and/or their own blogs, but with time at a special site just for this purpose. Diners would need to review at least 80% of the time to stay in.

Here’s what could be offered to the diners:

  • Private rooms or private waiter, with special attention
  • Special menus with special items at reduced prices
  • Special billing, either separate bills or even pay online — no worrying about settling.
  • Advance online ordering and planning for shared meals, possibly just before heading out.

For the restaurant there’s a lot:

  • A bunch of predictable diners on a slow night
  • If they order from a special menu, it can be easier and cheaper to prepare multiple orders of the same dish.
  • Billing assistance from the web site with online payment
  • A way to get trustable online reviews to bring in business — if the reviews are good.

Now normally a serious restaurant critic would not feel it appropriate to have the restaurant know they are being reviewed. If the restaurant knows, they will not get typical service and so cannot properly review it. However, this can be mitigated a lot if all the restaurants are aware of what’s going on, and if the reviews are comparative. In this case the restaurants are being compared by how they do at their best, rather than for a random diner. The latter is better, but the former is also meaningful. And of course it would be clear to readers that this is what went on.

In particular, I believe the reviewers should not simply give stars or numerical ratings to restaurants. They can do that, but mainly they should just place the restaurants in a ranking with the other restaurants they have scored, once they have done a certain minimum number. This fixes “star inflation.” With most online review sites, you don’t know if a 5-star rating is from somebody who gives everything 4 or 5 stars, or if it’s the only 5-star rating the reviewer ever gave. All these are averaged together.
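A minimal sketch of how such ranking-based aggregation might work, with made-up restaurant names: each reviewer's ordered list is converted to percentile-style scores, so a tough grader and an easy grader count equally.

```python
from collections import defaultdict

def aggregate_rankings(rankings):
    """rankings: list of lists of restaurant names, each ordered best first."""
    scores = defaultdict(list)
    for ranked in rankings:
        n = len(ranked)
        if n < 2:
            continue   # require a minimal number of rankings before counting
        for position, restaurant in enumerate(ranked):
            # 1.0 for the reviewer's favourite, 0.0 for their least favourite
            scores[restaurant].append(1.0 - position / (n - 1))
    return sorted(((sum(v) / len(v), r) for r, v in scores.items()), reverse=True)

print(aggregate_rankings([
    ["Chez Nous", "Burger Barn", "Soup Shack"],
    ["Soup Shack", "Chez Nous", "Burger Barn"],
]))   # Chez Nous first, then Soup Shack, then Burger Barn
```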

In addition, the existing online review sites have self-selected reviewers, which is to say people who rate a restaurant only because they feel motivated to do so. Such results can be wildly inaccurate.

Finally, it is widely suspected that some fraction of the reviews on online sites are biased, placed there by the restaurant or friends of the restaurant. There are certainly few mechanisms to stop this at the sites I have seen. Certainly if you see a restaurant with just a few high ratings you don’t know what to think.

This dining system, with the requirement that everybody review, eliminates a good chunk of the self-selection. Members would need to review whether they felt in the mood or not. (You could not stop them from not going to a restaurant that does not appeal to them, of course, so there is still some self-selection.) It is possible a restaurant might send its friends to dine at “enemy” restaurants via the club to rate them down, but I think the risk of this is much less than the holes in the other systems.

Restaurants with any confidence in their quality should be motivated to invite such an online dining club, especially new restaurants. Indeed, it’s not uncommon for new restaurants to offer the general public things like 2nd entree free or other discounts to get the public in, with no review bonus. If the site becomes popular, in fact, it might become the case that a new restaurant that doesn’t invite the amateur critics could be suspect, unwilling to risk a bad place in their rankings.

Understand the importance of a key in crypto design

I’ve written before about ZUI (Zero user interface) in crypto, and the need for opportunistic encryption based upon it. Today I want to reinforce the concept by pointing to mistakes we’ve seen in the past.

Many people don’t know it, but our good friends at Microsoft put opportunistic encryption into Outlook Express and other mailers many years ago. And their mailers were and still are the most widely used. Just two checkboxes in MSOE allowed you to ask that it sign all your outgoing mail, and further to encrypt all mail you sent to people whose keys you knew. If they signed their outgoing mail, you automatically learned their keys, and from then on your replies to them were encrypted.
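The logic is simple enough to sketch. This is not Microsoft's code, just an illustration of the opportunistic behaviour, with toy placeholder functions standing in for real S/MIME or OpenPGP operations.

```python
# Toy placeholders; a real mailer would use S/MIME or OpenPGP here.
def sign(body, key):          return "sig-with-" + key
def verify(body, sig, key):   return sig == "sig-with-" + key
def encrypt(body, key):       return "encrypted-for-" + key + ":" + body

known_keys = {}   # sender address -> key learned from signed incoming mail

def receive_mail(msg):
    if "signature" in msg and verify(msg["body"], msg["signature"], msg["sender_key"]):
        known_keys[msg["sender"]] = msg["sender_key"]   # learn the key, no prompting

def send_mail(my_key, to, body):
    msg = {"to": to, "body": body, "signature": sign(body, my_key)}  # sign everything
    if to in known_keys:                     # encrypt whenever we already know a key
        msg["body"] = encrypt(body, known_keys[to])
    return msg

receive_mail({"sender": "alice@example.com", "sender_key": "alice-key",
              "body": "hi", "signature": "sig-with-alice-key"})
print(send_mail("my-key", "alice@example.com", "a reply"))   # now goes out encrypted
```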

However, it wasn’t just two checkboxes — you also had to get an E-mail certificate. Those are available free from Thawte, but the process is cumbersome and was a further barrier to adoption.

But the real barrier? Microsoft’s code imagined you had one primary private key and certificate. As such, any access to that private key was treated as a highly important security act. Use of that private key must be highly protected; after all, you might be signing important documents, even cheques, with it.

As a result, every time you sent a mail with the “automatic sign” checkbox on, it put up a prompt telling you a program wanted to use your private key, and asked if you would approve that. Every time you received a mail that was encrypted because somebody else knew your key, it likewise prompted you to confirm access should be given to the private key. That’s the right approach on the private key that can spend the money in my bank account (in fact it’s not strong enough even for that) but it’s a disaster if it happens every time you try to read an E-mail!

We see the same with SSL/TLS certificates for web sites. Web sites can pay good money to the blessed CAs for a site certificate, which verifies that a site is the site you entered the domain name of. While these are overpriced, that’s a good purpose. Many people however want a TLS certificate simply to make sure the traffic is encrypted and can’t be spied upon or modified. So many sites use a free self-signed certificate. If you use one, however, the browser pops up a window, warning you about the use of this self-signed certificate, and you must approve its use, and say for how long you will tolerate it.

That’s OK for my own certification of my E-mail server, since only a few people use it, and we can confirm that once without trouble. However, if every time you visit a new web site you have to confirm use of its self-signed key, you’re going to get annoyed. And thus, while the whole web could be encrypted, it’s not, in part due to this.

What was needed was what security experts call an understanding of the “threat model” — what are you scared of, and why, and how much hassle do you want to accept in order to try to be secure?

It would be nice for a TLS certificate to say, “I’m not certifying anything about who this is” and just arrange for encryption. All that would tell you is that the site is the same site you visited before. The Lock icon in the browser would show encryption, but not any authentication. (A good way to show authentication would be to perhaps highlight the authenticated part of the URL in the title bar, which shows you just what was authenticated.)
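As a sketch of what that UI logic might look like (my own illustration, not how any actual browser works), the encryption indicator and the identity indicator would be computed separately rather than collapsed into one self-signed-certificate warning.

```python
# Illustrative only: separate "encrypted" and "authenticated" signals.
def connection_indicators(encrypted, ca_signed, hostname_matches):
    indicators = []
    if encrypted:
        indicators.append("lock icon: traffic cannot be read or altered in transit")
    if ca_signed and hostname_matches:
        indicators.append("highlight the authenticated part of the URL")
    elif encrypted:
        indicators.append("no identity claim: same key as last visit at best, but no pop-up")
    return indicators

# A self-signed certificate: encrypted, but nothing authenticated.
print(connection_indicators(encrypted=True, ca_signed=False, hostname_matches=False))
```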

In E-mail, it is clear what was needed was a different private key, used only to do signing and opportunistic encryption of E-mail, and not used for authorizing cheques. This lesser key could be accessed readily by the mail program, without needing confirmation from the user every time. (You might, if concerned, have it get confirmation or even a pass code on a once a day basis, to stop e-mail worms from sending mail signed as you at surprising times.)

Paranoid users could ask for warnings here too, but most would not need them.

TLS supports client-side certificates too. They are almost never used. Clients don’t want to get certificates for most uses, but they might like to be able to tell a site they are the same person who visited before — which is mostly what the login accounts at web sites verify. A few also verify the account is tied to a particular e-mail address, but that’s about it.

Perhaps if we move to get the client part working, we can understand our threat model better.

Hybrid stickers in carpool lanes should be sold at Dutch auction

In the SF Bay Area, there are carpool lanes. Drivers of fuel-efficient vehicles, which mostly means the Prius and the Honda Civic/Insight hybrids, can apply for a special permit allowing them to drive solo in the carpool lanes. This requires both a slightly ugly yellow sticker on the bumper, and a special transponder for bridges, because the cars are allowed to use the carpool lane on the bridge but don’t get the toll exemption that real carpools get.

I think this is good, as long as there is capacity in the carpool lane, because the two goals of the carpool lane are to reduce congestion and also to reduce pollution. The hybrids do the latter. (Though it is argued that hybrids do their real gas saving on city streets, and only save marginally on the highway, comparable to some highly efficient gasoline vehicles.)

However, oddly, the government decided to allocate a fixed number of stickers (which makes sense) and to release them on a first-come, first-served basis, which makes no sense. After the allocation is issued, new buyers of these cars, or future efficient cars can’t get the stickers. (Or so they say — in fact the allocation has been increased once.)

The knowledge that time was running out to get a Prius with carpool privileges was much talked about. And it’s clear that a lot of people who buy a hybrid rush to get one of the scarce carpool permits simply because they can, even if they will almost never drive on the highways at rush hour with them.

Society seems to love first-come, first-served as a good definition of “fair” but it seems wrong here. At the very least there should be a yearly fee, so that people who truly don’t need the stickers will not get them “just in case.” I would go further and suggest the annual fee be decided by Dutch auction. For those not familiar, in a Dutch auction, all those who wish to bid submit a single, sealed bid. If there are “N” items then the Nth highest bid becomes the price that the top N bidders all pay. There may be a minimum below which the items are not sold.
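A minimal sketch of that pricing rule, with made-up bids: the Nth-highest bid sets the annual fee that all winners pay, subject to a reserve.

```python
def run_auction(bids, n_items, reserve=0.0):
    """bids: dict of bidder -> sealed bid. Returns (winners, price) or ([], None)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)[:n_items]
    ranked = [(bidder, amount) for bidder, amount in ranked if amount >= reserve]
    if not ranked:
        return [], None
    price = ranked[-1][1]            # lowest winning bid sets everyone's price
    return [bidder for bidder, _ in ranked], price

print(run_auction({"A": 900, "B": 450, "C": 300, "D": 250}, n_items=3, reserve=200))
# -> (['A', 'B', 'C'], 300): all three winners pay $300 a year
```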

This is slightly complex in that you can do it one of two ways. The first is that everybody pays their bid up front, and losers and overbidders get a refund. This assures all bidders are serious. The other is to set the price, and then bill the winners. The problem here is people might bid high but then balk when they see the final price. You need a way of enforcing the payment. Credit cards can help here. As can, of course, being the government, which can refuse to license your car until you pay the agreed fees.

Carpool lanes are a hot topic here, of course. The mere mention of the subject of kidpooling (counting children to determine if a car is a carpool) makes the blood boil in the local newspapers. People feel a remarkable sense of entitlement, and lose focus on the real goals — to reduce congestion and pollution. Emotions would run high here, too.

Tempfailing for spam -- where does it lead

One growing technique in anti-spam involves finding ways to “fail” on initial contacts for sending mail. Real, standards-conformant mail programs try again in various ways, but spammers, in writing their mail blasters, tend to just have them skip that address and go to the next one in their list.

One common approach is simply to return a “temporarily unavailable” status on any initial mail attempt that might be spam. Another is to have dead MX records at both the “try first” and “try last” ends of the MX chain.
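The first approach is essentially what is now called greylisting. A minimal sketch, with an assumed five-minute retry window:

```python
import time

seen = {}             # (sender, client IP, recipient) -> time of first attempt
RETRY_WINDOW = 300    # legitimate MTAs will retry; require at least 5 minutes

def smtp_decision(sender, ip, rcpt, now=None):
    """Tempfail the first attempt from an unknown triple, accept a later retry."""
    now = now or time.time()
    first = seen.setdefault((sender, ip, rcpt), now)
    if now - first < RETRY_WINDOW:
        return "451 4.7.1 Temporarily unavailable, try again later"
    return "250 OK"
```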

Why does this work? Spammers just want to deliver as much mail as possible given time and bandwidth. If one address fails for any reason, it’s really no different whether you spend your resources trying the address again or in a different way, or just move on to the next address. In fact, since many of the failures are real failures, it’s actually more productive to just move on.

And, I admit, some of the spam filtering tools I use employ these techniques, and they do help. But what exactly are they doing? For spammers, the limiting factor is bandwidth. Dealing with failures, especially timeouts on dead servers, takes very little of their resources.

It doesn’t reduce the amount of spam they send, at least by much, it just redistributes it to those who don’t use the techniques. For a positive spin, you can liken it to putting up a higher fence than your neighbour, so the criminals attack them and not you. For a negative spin, you can imagine it as being like an air filter that filters out the pollution on air coming into your house, and spews it out the back at your neighbours.

So it’s a tough question. Is this approach a good idea? Especially at the start, it was very effective. Over time, if it becomes very common, spammers will see a reduction in the spam they deliver and make fairly simple moves to compensate. Is this fair game or antisocial?

There is an old joke about two hikers who meet a bear. The first sits down and starts putting on his running shoes. The other says, “What are you doing, you can’t outrun a bear!” and the first says, “I don’t have to outrun the bear, I just have to outrun you.”

Are we passing the bear onto our neighbours?

(This is part of a larger question of some of the other negative consequences of anti-spam. For example, as text filters got better, spammers moved to sending their spam as embedded images which filters could not easily decode. The result is more and more bandwidth used, both by spammers and victims. Was it a victory or a loss?)

Replacing the FCC with "don't be spectrum selfish."

Radio technology has advanced greatly in the last several years, and will advance more. When the FCC opened up the small “useless” band where microwave ovens operate to unlicensed use, it generated the greatest period of innovation in the history of radio. As my friend David Reed often points out, radio waves don’t interfere with one another out in the ether. Interference only happens at a receiver, usually due to bad design. I’m going to steal several of David’s ideas here and agree with him that a powerful agency founded on the idea that we absolutely must prevent interference is a bad idea.

My overly simple summary of a replacement regime is just this: “Don’t be selfish.” More broadly, this means “don’t use more spectrum than you need,” at both the transmitting and receiving ends. I think we could replace the FCC with a court that adjudicates problems of alleged interference. This special court would decide which party was being more selfish, and tell them to mend their ways. Unlike past regimes, the Part 15 lesson suggests that sometimes it is the receiver who is being more spectrum-selfish.

Here are some examples of using more spectrum than you need:

  • Using radio when you could have readily used wires, particularly the internet. This includes mixed mode operations where you need radio at the endpoints, but could have used it just to reach wired nodes that did the long haul over wires.
  • Using any more power than you need to reliably reach your receiver. Endpoints should talk back if they can, over wires or radio, so you know how much power you need to reach them.
  • Using an omni antenna when you could have used a directional one.
  • Using the wrong band — for example using a band that bounces and goes long distance when you had only short-distance, line of sight needs.
  • Using old technology — for example not frequency hopping to share spectrum when you could have.
  • Not being dynamic — if two transmitters who can’t otherwise avoid interfering exist, they should figure out how one of them will fairly switch to a different frequency (if hopping isn’t enough.)

As noted, some of these rules apply to the receiver, not just the transmitter. If a receiver uses an omni antenna when they could be directional, they will lose a claim of interference unless the transmitter is also being very selfish. If a receiver isn’t smart enough to frequency hop, or tell its transmitter what band or power to use, it could lose.

Since some noise is expected not just from smart transmitters, but from the real world and its ancient devices (microwave ovens included) receivers should be expected to tolerate a little interference. If they’re hypersensitive to interference and don’t have a good reason for it, it’s their fault, not necessarily the source’s.

Now you have to have the right reverse-DNS

Update: Several of the spam bounces of this sort that I got were traced to the same anti-spam system, and the operator says it was not intentional, and has been corrected. So it may not be quite as bad as it seemed quite yet.

I have a social list of people I invite to parties. Every time I mail to it, I feel the impact of spam and anti-spam. Always several people have given up on a mailbox. And I run into new spam filters blocking the mail.

Perhaps I’m an old timer, but I run my own mail server. It’s in my house. I read my mail on that actual machine, and because of that, mail is wicked-fast for me, as fast as instant messaging for many people. (In fact, I never adopted IM the way some people did because E-mail is as fast.)

They’re working to make this harder to do. Many ISPs won’t even let you send mail directly, or demand you make a special request to have the mail port open to you. I’m bothered by the first case, less so by the second, because indeed, zombie PCs send much of the spam we’re now getting.

Because I send mail from the system, I also web surf from it. And while it’s not a serious privacy protection, I decided I would not have a reverse-DNS record for my system. That way people would not see “templetons.com” in their web logs whenever I surfed. It’s not that you can’t use other techniques to find out that the address is mine, but that requires deliberate thought. Reverse DNS is automatic for many web logs.

Soon, more and more sites would not take mail from a system without reverse DNS. Because I get my IP block from a small ISP, he does my reverse DNS, and I asked him to make a record. He made one like many ISPs do, built from the IP numbers themselves, as in ip-nn-nn-nn.ispname.com.

But soon I saw bounces that said, “This reverse DNS looks like a dialup user, I won’t take your mail.” So I had him change it to a different string that doesn’t trumpet my name but doesn’t look like a standard anonymous reverse DNS.

But now I’m getting bounces just because the reverse DNS doesn’t match the name my mail server uses. There is no security in this; any spammer can program their mail server to use the reverse DNS name of the system they have taken over. But I guess some don’t, so another wall is thrown up, and those people won’t get invites to my parties.
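The check these mailers appear to be doing looks something like this sketch (whether any given mailer does exactly this is my guess): look up the PTR record for the connecting IP, then insist that name resolve back to the same IP and match the name the mail server announces.

```python
import socket

def forward_confirmed_rdns(ip, helo_name):
    try:
        ptr_name, _, _ = socket.gethostbyaddr(ip)             # reverse DNS (PTR)
        _, _, addresses = socket.gethostbyname_ex(ptr_name)   # forward lookup of that name
    except socket.herror:
        return False   # no reverse DNS at all: many sites now refuse such mail
    except socket.gaierror:
        return False
    return ip in addresses and ptr_name.lower() == helo_name.lower()
```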

This one is really stupid because it’s quite common for a single machine to have many names and serve many domains. To correct an earlier note, it is possible for an IP to have more than one PTR reverse DNS record, though I don’t know how many applications deal with that. And multi-named machines trip up these mailers. There is no need to look at reverse DNS at all.

Sigh.

Censored and uncensored soundtrack on the airplane

A recent story that United had removed all instances of the word “God” (not simply Goddamn) from a historical movie reminded me just how much they censor the movies on planes.

Here they have an easy and simple way out. Everybody is on headsets, and they already offer different soundtracks in different languages by turning a dial. So offer the censored and the real soundtrack on two different audio channels. Parents can easily make sure the kids are on whatever soundtrack they have chosen for them, as the channel number glows on the armrest.

Now most people, given the choice, are going to take the real soundtrack. Which is fine, since now they certainly can’t complain if it does offend them. A few will take the censored soundtrack. But most people should be happy. This is not much work, since the real work is creating the censored track. Assuming there is room for more tracks on the DVD, keeping the original one is no big deal.

How to stop people from putting widescreen TVs in stretch mode

(Note I have a simpler article for those just looking for advice on how to get their Widescreen TV to display properly.)

Very commonly today I see widescreen TVs being installed, both HDTV and normal. Flat panel TVs are a big win in public places since they don’t have the bulk and weight of the older ones, so this is no surprise, even in SDTV. And they are usually made widescreen, which is great.

Yet almost all the time, I see them configured so they take standard-def TV programs, which are made for a 4:3 aspect ratio, and stretch them to fill the 16:9 screen. As a result everybody looks a bit fat. The last few hotel rooms I have stayed in have had widescreen TVs configured like this. Hotel TVs keep you out of the setup mode, offering a remote control which includes the special hotel menus and pay-per-view movie rentals, so you can’t change it. I’ve called down to the desk to get somebody to fix the TV; often they don’t know what I’m talking about, or if somebody comes it takes quite a while to find someone who understands it.

This is probably because I routinely meet people who claim they want to set their TV this way. They just “don’t like” having the blank bars on either side of the 4:3 picture that you get on a widescreen TV. They say they would rather see a distorted picture than see those bars. Perhaps they feel cheated that they aren’t getting to use all of their screen. (Do they feel cheated with a letterbox movie on a 4:3 TV?)

It is presumably for those people that the TVs are set this way. For broadcast signals, a TV should be able to figure out the aspect ratio. NTSC broadcasts are all in 4:3, though some are letterboxed inside the 4:3 which may call for doing a “zoom” to expand the inner box to fill the screen, but never a “stretch” which makes everybody fat. HDTV broadcasts are all natively in widescreen, and just about all TVs will detect that and handle it. (All U.S. stations that are HD always broadcast in the same resolution, and “upconvert” their standard 4:3 programs to the HD resolution, placing black “pillarbox” bars on the left and right. Sometimes you will see a program made for SDTV letterbox on such a channel, and in that case a zoom is called for.)
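The arithmetic behind pillarboxing and letterboxing is simple; here is a sketch with illustrative numbers, treating the 4:3 source as square-pixel 720x540 on a 1920x1080 panel.

```python
def fit(source_w, source_h, screen_w, screen_h):
    """Return (display_w, display_h) that fills the screen without distortion."""
    scale = min(screen_w / source_w, screen_h / source_h)
    return round(source_w * scale), round(source_h * scale)

# 4:3 SD program on a 1920x1080 widescreen panel: pillarboxed, never stretched.
print(fit(720, 540, 1920, 1080))   # -> (1440, 1080), with 240-pixel bars on each side
```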

The only purpose the “stretch” function has is for special video sources like DVD players. Today, almost all widescreen DVDs use the superior “anamorphic” widescreen method, where the full DVD frame is used, as it is for 4:3 or “full frame” DVDs. Because TVs have no way to tell DVD players what shape they are, and DVD players have no way to tell TVs whether the movie is widescreen or 4:3, you need to tell one or both of them about the arrangement. That’s a bit messy. If you tell a modern DVD player what shape TV you have, it will do OK because it knows what type of DVD it is. DVD players, presented with a widescreen movie and a 4:3 TV will letterbox the movie. However, if you have a DVD player that doesn’t know what type of TV it is connected to, and you play a DVD, you have to tell the TV to stretch or pillarbox. This is why the option to stretch is there in the first place.

However, now that it’s there, people are using it in really crazy ways. I would personally disable stretch mode when playing from any source that isn’t a direct video input (such as a DVD player), but as I said, people are actually asking for the image to be incorrectly stretched to avoid seeing the bars.

So what can we do to stop this, and to get the hotels and public TVs to be set right, aside from complaining? Would it make sense to create “cute” pillarbars perhaps with the image of an old CRT TV’s sides in them? Since HDTVs have tons of resolution, they could even draw the top and bottom at a slight cost of screen size, but not of resolution. Some TVs offer the option of gray, black and white pillars, but perhaps they can make pillars that somehow match the TV’s frame in a convincing way, and the frame could even be designed to blend with the pillars.

Would putting up fake drapes do the job? In the old days of the cinema, movies came in different widths sometimes, and the drapes would be drawn in to cover the left and right of the screen if the image was going to be 4:3 or something not as wide. They were presumably trying to deal with the psychological problem people have with pillarbars.

Or do we have to go so far as to offer physical drapes or slats which are pulled in by motors, or even manually? The whole point of flatscreen TVs is we don’t have a lot of room to do something like this, which is why it’s better if virtual. And of course it’s crazy to spend the money such things would cost, especially if motorized, to make people feel better about pillarbars.

I should also note that most TVs have a “zoom” mode, designed to take shows that end up both letterboxed and pillarbarred and zoom them to properly fit the screen. That’s a useful feature to have — but I also see it being used on 4:3 content to get rid of the pillarbars. In this case at least the image isn’t stretched, but it does crop off the top and bottom of the image. Some programs can tolerate this fine (most TV broadcasts expect significant overscan, meaning that the edges will be behind the frame of the TV) but of course on others it’s just as crazy as stretching. I welcome other ideas.

Update: Is it getting worse, rather than better? I recently flew on Virgin America airlines, which has widescreen displays on the back of each seat. They offer you movies (for $8) and live satellite TV. The TV is stretched! No setting box to change it, though if you go to their “TV chat room” you will see it in proper aspect, at 1/3 the size. I presume the movies are widescreen at least.

Cell carriers, let us have more than one phone on the same number

Everybody’s got old cell phones, which sit in closets. Why don’t the wireless carriers let customers cheaply have two or more phones on the same line? That would mean that when a call came in, both phones would ring (and your landlines if you desire) and you could answer in either place. You could make calls from either phone, though not both at the same time.

Right now they offer family plans, which let you have a 2nd “line” on the same account. That doesn’t save much money for the 2nd, though the 3rd and 4th are typically only $10 extra. That’s a whole extra number and account, however.

Letting customers do this should be good for the cell companies. You would be more likely to make a cell call. People would leave cells in their cars, or at other haunts (office, home or even in a coat pocket) for the “just in case I forget my cell” moments. That means more minutes billed.

The only downside is that you might see people trying to share: the very poor, some couples, or perhaps families wanting to give a single number to a group of kids. While for most people the party-line arrangement would be inconvenient, if it becomes a real problem, a number of steps could be taken to avoid it:

  • They know where the phones are. Thus don’t allow phone A to make or answer a call within a short interval after phone B was used, if the phones are far apart. If they are an hour’s travel apart, require an hour’s gap (see the sketch after this list).
  • To really stop things, require the non-default phones to register by calling a special number. When one phone registers, the others can’t make calls. Put limits on switching the active phone, possibly based on phone location as above.
  • Shortly after registering, or making/receiving a call, only the active phone receives calls.
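
A sketch of the first rule, assuming the carrier has rough locations for both phones and assumes some plausible travel speed (the 100 km/h figure is mine, purely illustrative):

```python
TRAVEL_SPEED_KM_PER_HOUR = 100.0   # assumed plausible travel speed

def may_use_phone(hours_since_other_phone_used, distance_km):
    """Allow the call only if the other phone's last use is old enough
    that one person could actually have travelled between the two locations."""
    hours_needed = distance_km / TRAVEL_SPEED_KM_PER_HOUR
    return hours_since_other_phone_used >= hours_needed

# Phones 100 km apart, other phone used 20 minutes ago: not allowed yet.
print(may_use_phone(0.33, 100))   # False
```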

I don’t think these steps are necessary, but if implemented they would make sharing very impractical and thus this service could be at no extra charge, or at worst a nominal charge of a buck or two. It could also be charged for only in months or days it’s actually used.

This is a great service for the customer and should make money for the cell co. So why don’t they?

Math getting better? -- CitizenRe

(Note: I have posted a followup article on CitizenRe as a result of this thread. Also a solar economics spreadsheet.)

I’ve been writing about the economics of green energy and solar PV, and have been pointed to a very interesting company named CitizenRe. Their offering suggests a major cost reduction to make solar workable.

They’re selling PV solar in a new way. Once they go into operation, they install and own the PV panels on your roof, and you commit to buy their output at a rate below your current utility rate. There are few apparent catches, though there are some risks if you need to move (they try to make that easy and will move the system once for those who sign a long-term contract). You are also responsible for damage, so you either take the risk of panel damage or insure against it. Typically they provide an underpowered system and insist you live where you can sell back excess to the utility, which makes sense.

But my main question is, how can they afford to do it? They claim to be making their own panels and electrical equipment. Perhaps they can do this at a much better price, making the offer affordable. Of course they take the rebates and tax credits, which makes a big difference. Even so, they seem to offer panels in lower-insolation places like New England, and to beat the prices of cheaper utilities which only charge around 8 cents/kWh.

My math suggests that with typical numbers of 2 kWh/peak watt/year, to deliver 8 cents/kWh for 25 years requires an installed cost of under $2/peak watt — even less in the less sunny places. Nobody is even remotely close to this in cost, so this must require considerable reduction from rebates and tax credits.
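One way to reproduce that figure is to discount the future power sales; the 7% discount rate below is my assumption, not anything from CitizenRe.

```python
ANNUAL_KWH_PER_PEAK_WATT = 2.0    # typical insolation figure used above
PRICE_PER_KWH = 0.08              # competing grid price
YEARS = 25
DISCOUNT_RATE = 0.07              # assumed cost of capital

revenue_per_year = ANNUAL_KWH_PER_PEAK_WATT * PRICE_PER_KWH   # $0.16 per peak watt
npv = sum(revenue_per_year / (1 + DISCOUNT_RATE) ** y for y in range(1, YEARS + 1))
print(round(revenue_per_year * YEARS, 2))   # 4.0  -> $4 undiscounted over 25 years
print(round(npv, 2))                        # 1.86 -> hence "under $2/peak watt" installed
```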

A few other gotchas — if you need to re-roof, you must pay about $500 to temporarily remove up to 5kw of panels. And there is the risk that energy will get cheaper, leaving you locked in at a higher rate since you commit to buy all the power from the panels. While many people fear the reverse — grid power going up in price, where this is a win — in fact I think that energy getting cheaper is actually a significant risk as more and more money goes into cleantech and innovation in solar and other forms of generation.

It’s interesting that they are offering a price to compete with your own local utility. That makes sense in a “charge what the market will bear” style, but it would make more sense to market only to customers buying expensive grid power in states with high insolation (ie. the southwest.)

Even with the risks this seems like a deal with real potential — if it’s real — and I’ll be giving it more thought. Of course, for many, the big deal is that not only do they pay a competitive price, they are much greener, and even provide back-up power during the daytime. I would be interested if any readers know more about this company and their economics.

Update: There is a really detailed comment thread on this post. However, I must warn CitizenRe affiliates that while they must disclose their financial connection, they must also not provide affiliate URLs. Posts with affiliate URLs will be deleted. Some salient details: there is internal dissent, I and many others wonder why an offer that sounds this good would want to stain itself by being an MLM pyramid, much is still undisclosed, and there is some doubt about when installs will take place.

Photostatuary

3-D printing is getting cheaper. This week I saw a story about producing a hacked together 3-D printer that could print in unusual cheap materials like play-doh and chocolate frosting for $2,000. Soon, another 3-D technology will get cheap — the 3-D body scan.

I predict soon we’ll see 3-D scanning and reproduction become a consumer medium. It might be common to be able to pop into a shop and get a quick scan and lifelike statue of yourself, a pet or any object. Professional photographers will get them — it will become common, perhaps, to have a 3-D scan done of the happy couple at the wedding, with resultant statue. Indeed, soon we’ll see this before the wedding, where the couple on the wedding cake are detailed statues of the bride and groom.

And let’s not forget baby “portraits” (though many of today’s scanning processes require the subject to be still). At least small children can be immortalized. Strictly, only the scanners have to get cheap first, because the statue can be sent back later in the mail from a central 3-D printer, if it’s not made of food.

The scanners may never become easily portable, since they need to scan from all sides or rotate the subject, but they will also eventually become used by serious amateur photographers, and posing for a portrait may commonly also include a statue, or at least a 3-d model in a computer (with textures and colours added) that you can spin around.

This will create a market for software that can take 3-D scans and easily make you look better. Thinner, of course, but perhaps even more muscular or with better posture. Many of us would be a bit shocked to see ourselves in 3-D, since few of us are models. As we’ll quickly have more statues than we know what to do with, we may get more interested in the computer models, or in ephemeral materials (like frosting) for these photostatuary.

This was all possible long ago if you could hire an artist, and many a noble had a bust of himself in the drawing room. But what will happen when it gets democratized?

Virtual right-of-way alternatives for BRT

In one of my first blog posts, I wrote about virtual right-of-way, a plan to create dedicated right of way for surface rail and bus transit, but to allow cars to use the RoW as long as they stay behind, and never in front of the transit vehicle.

I proposed one simple solution, that if the driver has to step on the brakes because of a car in the way, a camera photographs the car and plate, and the driver gets a fat ticket in the mail. People would learn you dare not get into the right-of-way if you can see a bus/train in your rearview mirror.

However, one downside stuck with me, which is that people might be so afraid of the ticket that they make unsafe lane changes in a hurry to get out of the way of the bus, and cause accidents. Even a few accidents might dampen enthusiasm for the plan, which is a shame because why leave the RoW vacant so much of the time?

San Francisco is planning BRT (Bus Rapid Transit) which gives buses dedicated lanes and nice “stations” for Geary St., its busiest bus corridor. However, that’s going to cut tremendously into Geary’s car capacity, which will drive traffic onto other streets. Could my plan for V-Row (Virtual Right of Way) help?

My new thought is to make travel in the V-Row a privilege, rather than something any car can do as long as it stays out of the way of the bus. To do that, car owners would need to sign up for V-Row access, and purchase a small receiver to mount in their car. The receiver would signal when a bus is approaching with a nice wide margin, to tell the driver to leave the lane. It would get louder and more annoying the closer the bus got. The ticket would still be processed by a camera on the front of the bus triggered by the brakes.

Non-registered drivers could still enter the V-Row, whether it was technically legal or not. If they got a photo-ticket, it might be greater than the one for registered drivers who have the alerting device.

I’ve thought of a few ways to do the alert. If there are small, short range radio transmitters dotted along the route, the bus could tell them to issue the warning as they approach. They could also flash LEDs (avoiding the need for the special receiver.) Indeed, they could even broadcast on a very low power open FM channel, again obviating the need for a special device if you don’t mind not running your stereo for something else. (The broadcast would be specific, “Bus approaching on Westbound Geary at 4th ave” so you are not confused if you hear a signal from another line or another direction.) Infrared or microwave short-range transmission would also be good. The transmitters would include actual lat/long coordinates of the zones to clear, so cars with their own GPS could get an even more accurate warning.
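A sketch of the car-side logic, with an invented message format and made-up coordinates: alert only if the car's own GPS fix is inside the broadcast zone and it is travelling in roughly the same direction as the bus.

```python
def heading_diff(a, b):
    d = abs(a - b) % 360
    return min(d, 360 - d)

def inside(zone, lat, lon):
    (lat1, lon1), (lat2, lon2) = zone     # two opposite corners of the zone to clear
    return (min(lat1, lat2) <= lat <= max(lat1, lat2) and
            min(lon1, lon2) <= lon <= max(lon1, lon2))

def handle_broadcast(msg, my_lat, my_lon, my_heading):
    if heading_diff(my_heading, msg["heading"]) > 45:   # ignore perpendicular/opposing lines
        return None
    if inside(msg["zone"], my_lat, my_lon):
        return "Bus approaching on %s %s: leave the lane" % (msg["direction"], msg["route"])
    return None

msg = {"route": "Geary", "direction": "Westbound", "heading": 270,
       "zone": ((37.7810, -122.4650), (37.7815, -122.4600))}   # made-up coordinates
print(handle_broadcast(msg, 37.7812, -122.4630, my_heading=265))
```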

It might even be possible to have an infrared or ultra-high-frequency radio transmitter on the front of the bus, which would naturally only transmit to people in front of the bus. IR could be blocked by fog, radio could leak to other zones, so there might be bugs to work out there. The receiver could at least know what direction it is going (compass, if not GPS) and know to ignore signals from perpendicular vehicles.

Each bus, of course, will know via GPS where it is and where it’s going, giving it the ability to transmit via radio or even IR the warning to clear the V-Row ahead of it.

V-Row is vastly, vastly cheaper than other forms of rapid transit. In fact, my proposal here might even make it funded by the drivers who are eager to make use of the lane. Many would, because going behind the bus will be a wonderful experience compared to rush hour traffic, since the traffic lights are going to be synchronized to the bus in most BRT plans.

One could even imagine, at higher pavement cost, a lane to pass the bus when it stops. Then the cars go even faster, but the driver signals a few seconds before she’s going to pull out, and all cars in the lane would stop to wait for the bus, then follow it. Careful algorithms could plan things properly based on bus spacing, wait times, and signals from drivers or sensors, to identify where the gaps in the V-Row that will allow cars are, and to signal cars that they can enter. (The sensor would also, when you’re not in the V-Row, tell you when it’s acceptable to enter it.)

During periods of heavy congestion, however, there may not be anywhere for cars leaving the V-Row to go in the regular lanes without congesting them more. However, it’s not going to be worse than it is with no cars allowed in the bus right-of-way; at most it gets that bad. It may be that bus drivers could command all cars out of the V-Row (even behind the bus) because congestion is too high, or because transit vehicles are getting too closely spaced due to the usual variations of transit. (In most cases detecting transit vehicles that are very close would be automatic and cars would be commanded not to enter those zones.)

There are many other applications for a receiver in a car to receive information on where to drive, including automatic direction around congestion, accidents and construction. I can think of many reasons to get one.

Some BRT plans call for the dedicated right-of-way to have very few physical connections with the ordinary streets. This might appear to make V-Row harder, but in fact it might make planning easier. Cars could be allowed in at controlled points, like metering lights, and commanded to leave at controlled points. In that case there would be no tickets, except for cars that pass an exit point they were commanded to leave at. If the system told you to stay in a lane and was wrong, and a bus came up behind you, it would not be your fault, but nor would it be so frequent as to slow the bus system much.

16 years of EFF next Thursday

Join me next Thursday (one-eleven) at the one-eleven Minna gallery in San Francisco to celebrate EFF’s 16th year. From 7 to 10pm. Suggested donation $20. Stop by if you’re at Macworld.

Details at http://www.eff.org/deeplinks/archives/005055.php

More eBay feedback

A recent Forbes item pointed to my earlier posts on eBay Feedback, so I thought it was time to update them. Note also the eBay tag for all posts on eBay, including comments on the new non-feedback rules.

I originally mused about blinding feedback or detecting revenge feedback. It occurs to me there is a far, far simpler solution. If the first party leaves negative feedback, the other party can’t leave feedback at all. Instead, the negative feedback is displayed both in the target’s feedback profile and also in the commenter’s profile as a “negative feedback left.” (I don’t just mean how you can see it in the ‘feedback left for others’ display. I mean it would show up in your own feedback that you left negative feedback on a transaction as a buyer or seller. It would not count in your feedback percentage, but it would display in the list a count of negatives you left, and the text response to the negative made by the other party if any.)

Why? Well, once the first feedbacker leaves a negative, how much information is there, really, in the response feedback? It’s a pretty rare person who, having been given a negative feedback, is going to respond with a positive! Far more likely they will not leave any feedback at all if they admit the problem was their fault, or they will leave revenge. So if there’s no information, it’s best to leave it out of the equation.

This means you can leave negatives without fear of revenge, but it will be clearly shown to people who look at your profile whether you leave a lot of negatives or not, and they can judge from comments if you are spiteful or really had some problems. This will discourage some negative feedback, since people will not want a more visible reputation of giving lots of negatives. A typical seller will expect to have given a bunch of negatives to deadbeat buyers who didn’t pay, and the comments will show that clearly. If, however, they have an above average number of disputes over little things, that might scare customers off — and perhaps deservedly.

I don’t know if eBay will do this, so I’ve been musing that it might be time for somebody to make an independent reputation database for eBay, and tie it in with a plugin like ShortShip. This database could spot revenge feedbacks, note the order of feedbacks, and allow more detailed commentary. Of course if eBay tries to stop it, it has to be a piece of software that does all the eBay fetching from users’ machines rather than a central server.

Rebate experiences

I wrote earlier about the controversial topic of discriminatory pricing, where vendors try to charge different customers different prices, usually based on what they can afford or will tolerate. One particularly vexing type of such pricing is the mail-in rebate. Mail-in rebates do two things. In their pure form, they give a lower price to people willing to spend some time on the bureaucracy. As such, they would work at charging richer customers more, because richer customers tend to value time more than money compared to poorer customers.

However, they are rarely that simple. Some products offer rebates so ridiculously low it’s not worth anybody’s time to process them — they are not much better than a trick. With higher rebates, often the full price is inflated to make the discount appear larger than it is. This can also be a trick. A person who has decided she will not do rebates should normally never buy such a product; however, in many cases people do buy them, and never get around to processing the rebate.

While the vendors never release figures, clearly many people never get their rebate. Companies that manage rebates can in fact make fairly reliable promises about how many of the rebates will actually be redeemed. While I suspect the largest reason for non-redemption is “not getting around to it,” in many cases rebate programs work to make it hard to redeem. They will make the redemption process as complex as possible, and not redeem on any little error. Some companies have even been found to have fraudulently failed to redeem correctly prepared rebate forms, waiting for customers to complain and paying only if they do. Of course, few customers complain, as it’s even more work, and of those who do, few retain the documentation necessary for a complaint. In many cases, customers do not even keep note of what rebate requests they sent out. Rebate companies tend to deliberately take as long as possible — usually several months — to process rebates. This is partly to keep the float on the money, but also I suspect to make people forget about what they are waiting for.

As such, I avoid most rebates, but I do do some of them. In particular, if I can do rebates in bulk, it can be worthwhile. In this case (usually around the holidays) I will gather together many rebates and fill them out all at once. I took a sheet of laser printer address labels and printed out stickers with all the common items desired on rebate forms, including name/address stickers which I already have, and stickers with a special E-mail address and free voicemail only phone number (ipkall.com) to speed up the process.

This year, several rebates “offered” online processing. This turns out to save time for the company, not for you. You fill in the information (saving them data-entry work) and it prints out your rebate form, which you must still mail in along with the original UPC and some form of original receipt. (Fry’s has automated their end of the rebate process, printing rebate receipts and rebate forms on thermal printers at the cash register.)

One of the companies, onrebate.com, seemed like an even nastier trick. On my first visit the site was incredibly slow, taking 30 seconds per page in a multi-step process, though a later visit was OK. They of course do nothing to make things easier, like re-use of data on a second rebate (including some of the famous “double rebate” products.) One thing they offer which is very positive is payment of your rebate via paypal, which has two giant benefits — no need for a trip to the bank, and easy tracking of when you are repaid. In addition, it eliminates the common trick of printing rebate cheques with “not valid after…” legends set for the very near future, another way they block redemption.

Onrebate also offers quick payment, if you let them keep about 10% of your rebate. Of course this is a bad deal to just get money 2 months sooner, but we know people fall for it. As an experiment, I filed two rebates with them, one with the instant payment and one without. I got the notice of processing on the instant payment one first, saying I would be paid within a couple of weeks. On the other hand I got the money on the other one first! E-mail notification is positive for tracking, of course. Some companies go the other way. I received a $10 rebate check recently with no indication, other than the name of the general rebate processing company, of what it was for. This helps confuse people about what rebates they have received and not received.

Even with the streamlined bulk process, however, it took too much time this year. One needs to check that one has followed all the rules, which often vary. Some demand signatures, some demand emails, some demand phone numbers. Some demand copies of receipts, some demand originals. Some demand web processing. Almost all demand original UPCs which can be hard work to cut out of products. Some demand copies. A quick and easy idea for “copies” is to use a digital camera to take pictures of the various items. This also is a quick record you can go back and check should you have the inclination. It doesn’t say the copies have to be very good. Most households don’t have photocopiers any more, but almost all have digital cameras and printers, which is even easier than a scanner.

(I also have a small sheetfed scanner I use for my paperless home efforts, but it has problems with thermal paper receipts.)

We’ll never see this become easy, because of course the rebate management companies want the redemption rate to stay low. I presume some of them even market the low rates to the vendors; otherwise we would not see the “free after rebate” concept that has become more common.

I filed claims for $290 in rebates this December. So far one $60 (paypal) and one $10 have come in, and the expedited paypal rebates have not. I don’t expect to see much before late February, however.

Outside of bulk processing or very good rebate deals, the non-redemption rate seems to make it better to always check if there is a non-rebated product at a good price. Figure out your own “discount rate” for how often you personally complete rebates and how often you actually receive money. I doubt many get 100%. Then factor in a value on your time — what do you get paid per hour, figuring 2000 to 2500 hours for salaried people? Expect to spend 10 to 30 minutes on a rebate form, including post office trips, bank trips etc. (This is much lower if you regularly go to these places, or have at-home mail pickup as I do.)
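As a worked example of that calculation, with made-up personal numbers:

```python
rebate = 50.00            # face value of the mail-in rebate
completion_rate = 0.7     # how often you actually get around to filing
payout_rate = 0.85        # how often a filed rebate is actually paid
hourly_value = 40.00      # your pay per hour (salary / ~2000-2500 hours)
minutes_spent = 20        # filing, copying, post office, tracking

expected_payout = rebate * completion_rate * payout_rate      # $29.75
time_cost = hourly_value * minutes_spent / 60.0               # $13.33
print(round(expected_payout - time_cost, 2))   # 16.42: real value of a "$50" rebate
```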

Of course, you may not even agree with the companies’ original goal: to find a way to charge more to people who value time over money, and thus less to those who value money over time. It is interesting, however, to speculate on what other systems might be devised to reach this goal that are not so random and bureaucratic as the rebate system. For example, use of the web only became practical once you could presume the “money > time” crowd had web access; a system that allows discounts only for the rich is not going to be very effective. I am interested in alternative ideas.

One might be to offer the rebates to those who agree to take a web journey that exposes them to advertising. This both assures that they value money over time and actually sells that attention. A web process that pays you by paypal at the end could be highly reliable, without the “lottery” factor. Vendors could even start including tokens in products with one-time-use numbers on them, which people could type in rather than having to mail physical UPC codes. (However, mailing the UPC, aside from adding work and cost to the process, is also important for disallowing returns of products after rebates are filed. Stores would need to check that the token number was present, and not yet used, before accepting a return.) Stores could also print a similar magic number on sales receipts.
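As a purely hypothetical sketch of that token check, the shared record of redeemed tokens below stands in for whatever database the rebate processor and the stores would actually query; the function names are made up for illustration.

    # Hypothetical sketch of one-time-use rebate tokens.
    # "redeemed_tokens" stands in for a shared database queried by both
    # the rebate processor and the store at return time.

    redeemed_tokens = set()          # tokens already used to claim a rebate

    def claim_rebate(token: str) -> bool:
        """Customer types the token from inside the package instead of mailing a UPC."""
        if token in redeemed_tokens:
            return False             # already claimed once
        redeemed_tokens.add(token)
        return True

    def allow_return(token: str) -> bool:
        """Store-side check: refuse a return if the rebate was already claimed."""
        return token not in redeemed_tokens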

The work associated with the logistics of rebates can’t be eliminated by the web, though. The goal, after all, is to make the process time-consuming, so you can only shift the work from one place to another. But it can be made less random, which would actually encourage more people to buy rebated products if they are genuinely willing to offer up their time and attention.

A linux distro for making digital picture frames

I’ve thought digital picture frames were a nice idea for a while, but have not yet bought one. The early generation were vastly overpriced, and the current cheaper generation still typically only offer 640x480 resolution. I spend a lot to produce quality, high-res photography, and while even a megapixel frame would be showing only a small part of my available resolution, 1/4 megapixel is just ridiculous.

I’ve written before that I think a great product would be either a flat panel that comes with (or can accept) a module providing 802.11 and a simple protocol for remote computers to display things on it, or a simple and cheap internet appliance featuring 802.11 and a VGA output to do the job. 1280x1024 flat panels now sell for under $150, and it would not take much in the way of added electronics to turn them into an 802.11 or even USB-stick/flash-card based digital photo frame with 4 times the resolution of the similarly priced dedicated frames.

One answer many people have tried is to convert an old laptop to a digital photo frame. 800x600 laptops are dirt cheap, and in fact I have some that are too slow to use for much else. 1024x768 laptops can also be had for very low prices on ebay, especially if you will take a “broken” one that’s not broken when it comes to being a frame — for example if it’s missing the hard disk, or the screen hinges (but not the screen) are broken. A web search will find you several tutorials on converting a laptop.

To make it really easy, what would be great is a small, ready-to-go linux distribution aimed at this purpose. Insert a CD or flash card with the distribution on it and the laptop boots straight into being a picture frame.

Ideally, this distro would be set to run without a hard disk. You don’t want to spin the hard disk since that makes noise and generates heat. Some laptops won’t boot from USB or flash, so you might need a working hard drive to get booted, but ideally you would unmount it and spin it down after booting.

Having a flash drive is possible with just about all laptops, because PCMCIA compact flash adapters can be had for under $10. Laptops with USB can use cheaply available thumb-drives. PCMCIA USB adapters are also about $10, but beware that really old laptops won’t take the newer-generation “cardbus” models.

While some people like to put pictures into the frame using a flash card or stick, and this can be useful, I think the ideal way to do it is over 802.11, especially for the grandmother market. One of the interesting early digital picture frames had a phone plug on it: the frame would dial out by modem to download new pictures that you had uploaded to the vendor’s site. The result was that grandma could see new pictures on a regular basis without doing anything. The downside was an annoying monthly fee to cover the modem costs.

But today 802.11 is getting very common. Indeed, even if grandma is completely internet-phobic, there’s probably a neighbour’s 802.11 visible in her house, and what neighbour would not be willing to give permission for a function such as this? The box can then be programmed to download and display photos from any typical photo web site, and family members can quickly upload or e-mail photos to that site.
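A minimal sketch of that download loop in Python might look like the following. The index URL, its format (a plain list of image URLs, one per line) and the cache directory are assumptions made up for illustration; a real frame would point at whatever photo site the family actually uses.

    # Sketch of the frame polling a web site for new photos.
    import os
    import urllib.request

    INDEX_URL = "http://example.com/frame/photos.txt"   # hypothetical list of image URLs
    CACHE_DIR = "/tmp/frame-photos"

    def fetch_new_photos():
        os.makedirs(CACHE_DIR, exist_ok=True)
        with urllib.request.urlopen(INDEX_URL) as f:
            urls = f.read().decode().split()
        for url in urls:
            dest = os.path.join(CACHE_DIR, os.path.basename(url))
            if not os.path.exists(dest):          # only fetch what's new
                urllib.request.urlretrieve(url, dest)

Run on a timer (say, hourly), this gives the zero-effort "new pictures just appear" behaviour of the old dial-out frames, with no monthly fee.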

Of course, if there is no 802.11 then flash is the way to do it. USB sticks are ideal, as they are cheap and easy to insert and remove, even for the computer-phobic. I doubt you really want to just insert a card straight out of a camera; people want to prepare their slideshows. (In particular, you want to pre-scale the images down to screen size for quick display and to fit many more in memory.) 800x600 pictures are in fact so small (50 KB can be enough) that you could even build the frame with no flash at all: an all-RAM linux that loads from flash, CD or a spun-down hard drive, keeps a hundred photos in spare RAM, and sucks down new ones over the network as needed. This mode eliminates the need to worry about drivers for flash or USB. The linux would run in frame-buffer mode; no X server would be needed.
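For the pre-scaling step, something along these lines would do. It is a sketch only, meant to run on the family member’s computer before uploading (not on the frame), and it assumes the third-party Pillow imaging library; the 800x600 target and JPEG quality are just example values.

    # Pre-scale photos to frame resolution so each is only tens of kilobytes.
    from pathlib import Path
    from PIL import Image

    def prescale(src_dir, dest_dir, size=(800, 600), quality=70):
        Path(dest_dir).mkdir(parents=True, exist_ok=True)
        for src in Path(src_dir).glob("*.jpg"):
            img = Image.open(src)
            img.thumbnail(size)                   # shrink in place, keeping aspect ratio
            img.save(Path(dest_dir) / src.name, "JPEG", quality=quality)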

The key factor is that the gift giver prepares the box and mounts it on the wall, plugged in. After that the recipient need do nothing but look at it, while new photos arrive from time to time. While remote controls are nice (and can be done on the many laptops that feature infrared sensors) the zero-user-interface (ZUI) approach does wonders with certain markets.

Update: I’ve noticed that laptop mini-IDE to compact flash adapters are under $10. So you can take any laptop that’s missing a drive and insert a flash card as the drive, with no worries about whether you can boot from a removable device. You might still want an external flash card slot if it’s not going to use wifi, but you can get a silent computer easily and cheaply this way. (Flash is slower than a hard disk to read, but has no seek time.)

Even for the builder the task could be very simple.

  • Unscrew or break the hinges to fold the screen against the bottom of the laptop (with possible spacer for heat)
  • Install, if needed, an 802.11 card, a USB card or flash slot and flash card, or a flash-IDE adapter.
  • Install linux distro onto hard disk, CD or flash
  • Configure by listing the web URL where new photo information will be found, plus a URL for parameters such as slideshow speed, fade modes etc. (a sample configuration is sketched after this list)
  • Configure 802.11 parameters
  • Put it in a deep picture frame
  • Set the BIOS to turn on automatically after a power failure, if possible
  • Mount on wall or table and plug in.
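As a sketch of what that configuration step might look like, the distro could read a small file once at boot, along these lines. Every field name, URL and the file path here is hypothetical, made up purely to illustrate the idea.

    # Hypothetical boot-time configuration for the picture-frame distro.
    import json

    DEFAULTS = {
        "photo_index_url": "http://example.com/frame/photos.txt",  # where new photo info is found
        "params_url": "http://example.com/frame/params.json",      # slideshow speed, fade modes, etc.
        "seconds_per_photo": 30,
        "fade_mode": "crossfade",
        "wifi_ssid": "neighbours-network",
        "wifi_passphrase": "changeme",
    }

    def load_config(path="/boot/frame.conf"):
        """Merge a user-edited JSON file over the defaults; fall back to defaults if absent."""
        try:
            with open(path) as f:
                return {**DEFAULTS, **json.load(f)}
        except FileNotFoundError:
            return dict(DEFAULTS)

Keeping the slideshow parameters behind a URL, as described above, means the gift giver can change the behaviour later without ever touching the frame again.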

Another war tragedy -- the solar opportunity in Iraq

While I’ve written before about the trouble in making solar competitive with grid power, this is not true when the grid is being blown up by guerrilla fighters on a regular basis. Over the past couple of years, Bechtel has been paid over 2 billion dollars, mostly to try to rebuild the Iraqi electrical infrastructure. Perhaps it’s not their fault that power is on in Baghdad for only 2 hours a day after these billions have been spent, but there might have been a better way.

Imagine if that money had been directed at building a solar power system, with a lower-power grid for night power. A billion dollars would have provided major stimulus to the solar industry, of course, and helped the companies that are working at making PV cost-effective. But it also would have created a power infrastructure that was much harder to destroy in a civil war. Yes, fighters might take down sections of the grid, but those sections would only have been there for night and brownout power; even without them, people would still have had power, and not just during the day. Mini “neighbourhood grid” systems could allow small areas to have backup diesel generators, not quite as efficient as the big generators but much more difficult to take down. The high-value targets would still see their local panels and generators under attack, but that’s the way of it.

It seems odd to think of this in a country with so much oil. But doing this would have also had a major effect on greenhouse gas emissions. Putting solar into Iraq would have made the US responsible for major emission cuts. Cutting emissions there so we don’t have to cut them here.

Something to think about next time your country goes and destroys a foreign country’s power grid and then works to rebuild it. (Of course, ideally that’s never.)
