
So tell me again why you need a stay on the order stopping the wiretapping?

You probably heard yesterday’s good news that the ACLU prevailed in their petition for an injunction against the NSA warrantless wiretapping. (Our case against AT&T to hold them accountable for allegedly participating in this now-ruled-unlawful program continues in the courts.)

However, the ruling was appealed (no surprise) and the government also asked for, and was granted a stay of the injunction. So the wiretaps won’t stop unless the appeal is won.

But this raises the question, “Why do you need a stay?”

The line from the White House has been that the government engaged in this warrantless wiretapping because the President had the authority to do so, both inherently and under the famous AUMF. And they wanted to use that authority because, they complained, the official system mandated by law, requiring process before the FISA court, was just too cumbersome. Even though the FISA law allows immediate emergency wiretaps without a warrant as long as a retroactive application is made soon after.

We’ve all wondered just why that’s too cumbersome. But they seemed to be saying that since the President had the authority to bypass the FISA court, why should they impede the program with all that pesky judicial oversight?

But now we have a ruling that the President does not have that authority. Perhaps that will change on appeal, but for now it is the ruling. So surely this should mean that they just go back to doing it the way the FISA regulations require it? What’s the urgent need for a stay? Could they not have been ready with the papers to get the warrants they need if they lost?

Well, I think I know the answer. Many people suspect that the reason they don’t go to FISA is not because it’s too much paperwork. It’s because they are trying to do things FISA would not let them do. So of course they don’t want to ask. (The FISA court, btw, has only told them no once, and even that was overturned. That’s about all the public knows about all its rulings.) I believe there is a more invasive program in place, and we’ve seen hints of that in press reports, with data mining of call records and more.

By needing this stay, the message has come through loud and clear. They are not willing to get the court’s oversight of this program, no way, no how. And who knows how long it will be until we learn what’s really going on?

Infrared patterns and paint to screw with tourist video/photos

Last week at ZeroOne in San Jose, one of the art pieces reminded me of a sneaky idea I had a while ago. As you may know, many camcorders, camera phones and cheaper digital cameras respond to infrared light. You can check this out pretty easily by pointing a remote control at your camera, holding down a button, and watching the preview screen. If you see a bright light, your camera shoots in infrared.

Anyway, the idea is to find techniques, be they arrays of bright infrared LEDs, or paints that shine well in infrared but are not obvious in visible light, and create invisible graffiti that only shows up in tourist photos and videos. Imagine the tourists get home from their trip to Fisherman’s Wharf, and the side of the building says something funny or rude that they are sure wasn’t there when they filmed it.

The art piece at ZeroOne used this concept: to the naked eye it was a black monolith, but if you pulled out your camera phone or digital camera, you could see words scrolling down the front. Amusing to watch people watch it. Another piece by our friends at .etoy also had people pulling out cameraphones to watch it. They displayed graphics made of giant pixels on a wall just a few feet from you. Up close, it looked like random noise. If you found a way to widen your field of view (which the screen on a camera can do), you could see the big picture: the images of talking faces. (My SLR camera’s 10mm lens through the optical viewfinder worked even better.)

That piece only really worked at night, though with superbright LEDs I think it could be done in the day. I don’t know if there are any paints or coatings to make this work well. It would be amusing to tag the world with tags that can only be seen when you pull out your camera.

I remember IBM

Everybody’s pulling out IBM PC stories on the 25th anniversary so I thought I would relate mine. I had been an active developer as a teen in the 6502 world (the Commodore PET, Apple ][, Atari 800 and the like) and sold my first game to Personal Software Inc. back in 1979. PSI was just starting out, but the founders hired me on as their first employee to do more programming. The company became famous shortly thereafter by publishing VisiCalc, which was the first serious PC application, and the program that helped establish Apple as a computer company outside the hobby market.

In 1981, I came back for a summer job from school. Mitch Kapor, who had worked for Personal Software in 1980 (and had been my manager at the time) had written a companion for VisiCalc, called VisiPlot. VisiPlot did graphs and charts, and a module in it (VisiTrend) did statistical analysis. Mitch had since left, and was on his way to founding Lotus. He had written VisiPlot in Apple ][ BASIC, and he won’t mind if I say it wasn’t a masterwork of code readability; indeed I never gave it more than a glance. Personal Software, soon to be renamed VisiCorp, asked me to write VisiPlot from scratch, in C, for an unnamed, soon-to-be-released computer.

I didn’t mention this, but I had never coded in C before. I picked up a copy of the Kernighan and Ritchie C manual, and read it as my girlfriend drove us over the plains on my trip from Toronto to California.

I wasn’t told much about the computer I would be coding for. Instead, I defined an API for doing I/O and graphics, and wrote to a generalized machine. Bizarrely (for 1981), I did all this by dialing up by modem to a unix timesharing service called CCA on the east coast. I wrote and compiled in C on unix, and defined a serial protocol to send graphics back to, IIRC, an Apple computer acting as a terminal. And, in 3 months, I made it happen.

(Very important side note: CCA-Unix was on the arpanet. While I had been given some access to an Arpanet computer in 1979 by Bob Frankston, the author of VisiCalc, this was my first day-to-day access. That access turned out to be the real life-changing event in this story.)

There was a locked room at the back of the office. It contained the computer my code would eventually run on. I was not allowed in the room. Only a very small number of outside companies were allowed to have an IBM PC — Microsoft, UCSD, Digital Research, VisiCorp/Software Arts and a couple of other applications companies.

On this day, 25 years ago, IBM announced their PC. In those days, “PC” meant any kind of personal computer. People look at me strangely when I call an Apple computer a PC. But not long after that, most people took “PC” to mean IBM. Finally I could see what I was coding for. Not that the C compilers were all that good for the 8088 at the time. However, 2 weeks later I would leave to return to school. Somebody else would write the library for my API so that the program would run on the IBM PC, and they released the product. The contract with Mitch required they pay royalties to him for any version of VisiPlot, including mine, so they bought out that contract for a total value close to a million — that helped Mitch create Lotus, which would, with assistance from the inside, outcompete and destroy VisiCorp.

(Important side note #2: Mitch would use the money from Lotus to found the E.F.F. — of which I am now chairman.)

The IBM PC was itself less exciting than people had hoped. The 8088 tried to be a 16 bit processor but it was really 8 bit when it came to performance. PC-DOS (later MS-DOS) was pretty minimal. But it had an IBM name on it, so everybody paid attention. Apple bought full page ads in the major papers saying, “Welcome, IBM. Seriously.” Later they would buy ads with lines like Steve Jobs saying, “When I invented the personal computer…” and most of us laughed but some of the press bought it. And of course there is a lot more to this story.

And I was paid about $7,000 for the just under 4 months of work, building almost all of an entire software package. I wish I could program like that today, though I’m glad I’m not paid that way today.

So while most people today will have known the IBM PC for 25 years, I was programming for it before it was released. I just didn’t know it!

Why do people put angle brackets around <urls>?

Quite frequently in non-HTML documents, such as E-mails, people will enclose their URLs in angle brackets, such as <http://foo.com>. What is the origin of this? For me, it just makes cutting and pasting the URLs much harder (it’s easier if they have whitespace around them, and easiest if they are on a line by themselves). It’s not any kind of valid XML or HTML; in fact it would cause a problem in any document of that sort.

There’s a lot of software out there that parses URLs out of text documents, of course, but it all seems to do fine with whitespace and other punctuation. It handles the angle bracket notation, but doesn’t need it. Is there any software out there that needs it? If not, why do so many people use this form?
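To illustrate the point, here’s a toy extractor in Python (my own sketch, not any particular mail reader’s code); it accepts the bracketed form but doesn’t need it:

    import re

    # Toy URL extractor, illustrative only. Real extractors are more careful
    # about trailing punctuation, but whitespace alone is enough to find URLs.
    URL_RE = re.compile(r'<(https?://[^>\s]+)>'      # angle-bracketed form
                        r'|(https?://[^\s<>"]+)')    # bare form, ends at whitespace

    def extract_urls(text):
        return [a or b for a, b in URL_RE.findall(text)]

    print(extract_urls('See <http://foo.com> or http://foo.com/page for details.'))
    # ['http://foo.com', 'http://foo.com/page']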

Broadcast lectures on cell phones

Many universities are now setting up to broadcast lectures over their LANs, often in video. Many students simply watch from their rooms, or even watch later. There are many downsides to this (fewer show up in class) but the movement is growing.

Here’s a simple addition that would be a bonanza for the cell companies. Arrange to offer broadcast of lectures to student cell phones. In this case, I mean live, and primarily for those who are running late to class. They could call into the number, put on their bluetooth headset and hear the start of the lecture on the way in. All the lecture hall has to do is put the audio into a phone that calls a conference bridge (standard stuff all the companies have already) and then students can call into the bridge to hear the lecture. In fact, the cell company should probably pay the school for all the minutes they would bill.

This need not apply only to lectures at universities. All sorts of talks and large meetings could do the same, including sessions at conferences.

Perhaps it would encourage tardiness, but you could also make the latecomers wait outside (listening) for an appropriate pause at which to enter.

The peril of anonymized data

The blogosphere is justifiably abuzz with the release by AOL of “anonymized” search query histories for over 500,000 AOL users, an attempt to be nice to the research community. After the fury, they pulled it and issued a decently strong apology, but the damage is done.

Many people have pointed out obvious risks, such as the fact that searches often contain text that reveal who you are. Who hasn’t searched on their own name? (Alas, I’m now the #7 “brad” on Google, a shadow of my long stint at #1.)

But some others browsing the data have discovered something far darker. There are searches in there for things like “how to kill your wife” and child porn. Once that’s discovered, isn’t that now going to be sufficient grounds for a court order to reveal who that person was? It seems there is probable cause to believe user 17556639 is thinking about killing his wife. And knowing this very specific bit of information, who would impede efforts to investigate and protect her?

But we can’t have this happening in general. How long before sites are forced to look for evidence of crimes in “anonymized” data, with warrants then used to nymize it? (Did I just invent a word?)

After all, I recall a year ago, I wanted to see if Google would sell adwords on various nasty searches, and what adwords they would be. So I searched for “kiddie porn” and other nasty things. (To save you the stigma, Google clearly has a system designed to spot such searches and not show ads, since people who bought the word “kiddie” may not want to advertise on those results.)

So had my Google results been in such a leak, I might have faced one of those very scary kiddie porn raids, which in the end would find nothing after tearing apart my life and confiscating my computers. (I might hope they would have a sanity check on doing this to somebody from the EFF, but who knows. And you don’t have that protection even if somebody would accord it to me.)

I expect we’ll be seeing the repercussions from this data spill for some time to come. In the end, if we want privacy from being data mined, deletion of such records is the only way to go.

Patient's room phone with basic presence

Those who know about my phone startup Voxable will know I have far more ambitious goals regarding presence and telephony, but during my recent hospital stay, I thought of a simple subset idea that could make hospital phone systems much better for the patient: namely, a way to easily specify whether it’s a good time to call the patient or not. Something as simple as a toggle switch on the phone, or, with standard phones, a couple of magic extensions they can dial to set whether it’s good or not.

When you’re in the hospital, your sleep schedule is highly unusual. You sleep during the day frequently, you typically sleep much more than usual, and you’re also being woken up regularly by medical staff at any time of the day for visits, medications, blood pressure etc.

At Stanford Hospital, outsiders could not dial patient phones after 10pm, even if you might be up. On the other hand, even when calls can come through, people worry about whether it’s a good time. So a simple switch on the phone would cause the call to be redirected to voice mail, or just to a recording saying it’s not a good time. Throw it to take a nap or do something else where you want peace and quiet. If you throw it at night, it stays in sleep mode for 8 or 9 hours, then beeps and reverts to available mode. If you throw it in the day, it will revert in a shorter amount of time (because you might forget), though a fancier interface would let you specify the time on an IVR menu. Nurses would make you available when they wake you in the morning, or you could put up a note saying you don’t want this. (Since it seems to be the law that you can’t get the same nurse two days in a row.)

In particular, when doctors and nurses come in to do something with you, they would throw the switch, and un-throw it when they leave, so you don’t get a call while in the middle of an examination. The nurse’s RFID badge, which they are all getting, could also trigger this.
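To make the behaviour concrete, here is a little sketch of the toggle-and-revert logic in Python. The specific timeouts, and all the names here, are my assumptions for illustration, not any real hospital phone system:

    from datetime import datetime, timedelta

    # Sketch of the proposed do-not-disturb logic. The timeouts (8 hours
    # at night, 1 hour for a daytime nap) are assumptions for illustration.
    NIGHT_SLEEP = timedelta(hours=8)
    DAY_NAP = timedelta(hours=1)

    class PatientPhone:
        def __init__(self):
            self.available = True
            self.revert_at = None

        def throw_switch(self, now):
            """Patient, nurse or RFID badge toggles do-not-disturb."""
            if self.available:
                self.available = False
                # Night throws last all night; day throws revert sooner,
                # since the patient may forget to flip the switch back.
                night = now.hour >= 22 or now.hour < 6
                self.revert_at = now + (NIGHT_SLEEP if night else DAY_NAP)
            else:
                self.available = True
                self.revert_at = None

        def route_call(self, now):
            if not self.available and now >= self.revert_at:
                self.available = True   # beep and revert to available mode
            return "ring the room" if self.available else "voice mail"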

Now people who call would know they got you at a good time, when you’re ready to chat. Next step: design a good way for the phone to be readily reachable by people in pain, such as hanging from the ceiling on a retractable cord, or retracting into the rail on the side of the bed. It’s very annoying, when in pain, to begin the slow process of getting to the phone, just to have the caller give up when you get to it.

Anti-Phishing -- warn if I send a password somewhere I've never sent it

There are many proposals out there for tools to stop phishing: web sites that display a custom photo you provide, or “pet names” given to web sites so you can confirm you’re where you were before.

I think we have a good chunk of one anti-phishing technique already in place with the browser password vaults. Now I don’t store my most important passwords (bank, etc.) in my password vault, but I do store most medium importance ones there (accounts at various billing entities etc.) I just use a simple common password for web boards, blogs and other places where the damage from compromise is nil to minimal.

So when I go to such a site, I expect the password vault to fill in the password. If it doesn’t, that’s a big warning flag for me. And so I can’t easily be phished for those sites. Even skilled people can be fooled by clever phishes. For example, a test phish to bankofthevvest.com (two “v”s instead of a w, which looks identical in many fonts) fooled even skilled users who check the SSL lock icon, etc.

The browser should store passwords in the vault, and even the “don’t store this” passwords should have a hash stored in the vault unless I really want to turn that off. Then, the browser should detect if I ever type a string into any box which matches the hash of one of my passwords. If my password for bankofthewest is “secretword” and I use it on bankofthewest.com, no problem. “secretword” isn’t stored in my password vault, but the hash of it is. If I ever type in “secretword” to any other site at all, I should get an alert. If it really is another site of the bank, I will examine that and confirm to send the password. Hopefully I’ll do a good job of examining — it’s still possible I’ll be fooled by bankofthevvest.com, but other tricks won’t fool me.
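Here’s a sketch of that check in Python, assuming a per-profile salt; the names are mine and real browser internals would differ:

    import hashlib, hmac

    # Illustrative sketch: a real browser would keep these hashes in its
    # protected vault and salt them per profile, not use a module constant.
    SALT = b"per-profile-random-salt"

    def pw_hash(password):
        return hmac.new(SALT, password.encode(), hashlib.sha256).hexdigest()

    # hash of each password -> sites where it has legitimately been used
    vault = {pw_hash("secretword"): {"bankofthewest.com"}}

    def ok_to_send(site, typed):
        """True if safe to submit; False means warn the user first."""
        h = pw_hash(typed)
        if h not in vault:
            return True     # not a known password, nothing to protect
        if site in vault[h]:
            return True     # the site this password belongs to
        return False        # known password, unfamiliar site: warn

    print(ok_to_send("bankofthewest.com", "secretword"))    # True
    print(ok_to_send("bankofthevvest.com", "secretword"))   # False, likely phish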

The key needs for any system like this are that it warns you of a phish, and that it rarely gives you a false warning. The latter is hard to do, but this comes decently close. However, since I suspect most people are like me and have a common password they use again and again at “who-cares” sites, we don’t want to be warned all the time. The second time we use that password, we’ll get a warning, and we need a box to say, “Don’t warn me about re-use of this password.”

Read on for subtleties…

For virtual servers, virtualize mySQL too

Right now this blog is hosted by powerVPS, which provides virtual private servers. This is to say they have a large powerful box, and they run virtualization software (Virtuozzo) which allows several users to have the illusion of a private machine, on which they are the root user. In theory users get an equal share of the machine, but since most of the users do not run at full capacity, any user can "burst" to temporarily use more resources.

Unfortunately I have found that this approach does fine with CPU, but not with RAM. The virtual server I first used had 256MB of RAM (burst to 1GB) available to it. But it was not able to perform at the level that a dedicated server with 256MB of RAM, swapping the rest to disk, would. It also doesn't perform anywhere near the level of a non-virtualized shared server, which is what you will commonly see in very cheap web hosting. An ordinary shared server looks like normal multi-user timesharing, though hosts tend to virtualize apache so it looks like everybody gets their own apache.

I eventually had to double my virtual machine's capacity -- and double the monthly fee. You probably saw an increase in the speed of this blog a couple of weeks ago.

Now the virtual machines out there are pretty good, and impose only a modest performance hit when you run one. But when you run many, you lose the OS's ability to run many copies of the same program while keeping only one copy in memory.

I propose a more efficient design that mixes shared machine and virtual machine concepts. One step toward that would be to not have every user run their own mySQL database. MySQL takes about 50MB of RAM, which is not much today but a lot if multiplied out 16 times. Instead, have one special virtual server (or just a different dedicated machine) with a copy of MySQL. This would be a special version, which virtualizes the connection, so that as far as each IP address connecting to it is concerned, it has a private version of mySQL. This means that everybody can create a database called "drupal" (as far as they can tell) if they want to. The virtualizer would add a prefix to the names based on which customer is connecting. This would also apply to permissions, so each root user would be different, and really only have global permissions on the right databases.
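In Python-flavoured pseudocode, the name mapping is about this simple (a real version would rewrite names inside the MySQL wire protocol; the IP-to-customer table here is an assumed stand-in):

    # Illustrative sketch of the name-mapping layer, not a real MySQL proxy.
    CUSTOMER_BY_IP = {"10.0.0.17": "cust17", "10.0.0.42": "cust42"}  # assumed

    def virtualize_db_name(client_ip, db_name):
        """Map the database name a customer asked for to its private real name."""
        prefix = CUSTOMER_BY_IP[client_ip]
        return f"{prefix}__{db_name}"

    # Two customers can both CREATE DATABASE drupal without colliding:
    print(virtualize_db_name("10.0.0.17", "drupal"))  # cust17__drupal
    print(virtualize_db_name("10.0.0.42", "drupal"))  # cust42__drupal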

You would not be able to modify mySQL's parameters or start and stop it -- unless you went back to running a private copy in your own virtual server. But if you didn't need that, you would get a more efficient database server.

The bad news -- it's up to the hosting companies to do this. MySQL AB doesn't get paid by those hosting companies, so it's not particularly motivated to put in changes for them. But it's an open source system so others could write such changes.

The other big users on web hosts are apache and php. There are many virtualized versions of apache, but this is often where people do want to virtualize, to run custom scripts, java programs and special CGIs. Providing a mixed shared/virtual environment here would be more difficult. One easy approach would be to have it be two web sites, with some pages on the shared site and links going to the virtual site. More cleverly, the virtual apache could have internal rewrite rules, not shown to outsiders, that cause it to fetch and forward from the virtualized web server.

Get a giant display screen

Yesterday I received a Dell 3007WFP panel display. The price hurt ($1600 on eBay, $2200 from Dell, but sometimes there are coupons) and you need a new video card (and to top it off, 90% of the capable video cards are PCI-e, which may mean a new motherboard), but there is quite a jump in moving to this 2560 x 1600 (4.1 megapixel) display if you are a digital photographer. This is a very similar panel to Apple's Cinema Display, but a fair bit cheaper.

It's great for ordinary windowing and text of course, which is most of what I do, but for that it's a great deal cheaper just to get multiple displays. In fact, up to now I've been using CRTs, since I have a desk designed to hold 21" CRTs and they are cheaper and blacker to boot. You can have two 1600x1200 21" CRTs for probably $400 today and get the same screen real estate as this Dell.

But that really doesn't do for photos. If you are serious about photography, you almost surely have a digital camera with more than 4MP, and probably way more. If it's a cheap-ass camera it may not be sharp if viewed at 1:1 zoom, but if it's a good one, with good lenses, it will be.

If you're also like me you probably never see 99% of your digital photos except on screen, which means you never truly see them. I print a few, mostly my panoramics and finally see all their resolution, but not their vibrance. A monitor shows the photos with backlight, which provides a contrast ratio paper can't deliver.

At 4MP, this monitor is only showing half the resolution of my 8MP 20D photos. And when I move to a 12MP camera it will only be a third, but it's still a dramatic step up from a 2MP display. It's a touch more than twice as good, because the widescreen aspect ratio is a little closer to the 3:2 of my photos than the 4:3 of 1600x1200. Of course if you shoot with a 4:3 camera, here you'll be wasting pixels. In either case, you can crop a little so you are using all the pixels. (In fact, a slideshow mode that zooms/crops to fully use the display would be handy. Most slideshows offer 1:1 and zoom-to-fit with no cropping.)
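The difference between the usual zoom-to-fit and the zoom/crop-to-fill mode I'm suggesting comes down to which scale factor you pick. A quick Python sketch (the 3504x2336 frame is the 20D's real resolution; the rest is illustration):

    # Sketch of "fit" vs. "fill" scaling for a slideshow, illustrative only.
    def scale_factors(img_w, img_h, scr_w, scr_h):
        fit = min(scr_w / img_w, scr_h / img_h)   # whole image shown, may letterbox
        fill = max(scr_w / img_w, scr_h / img_h)  # every screen pixel used, may crop
        return fit, fill

    # An 8MP 3:2 frame (3504x2336 on a Canon 20D) on a 2560x1600 panel:
    fit, fill = scale_factors(3504, 2336, 2560, 1600)
    print(round(fit, 3), round(fill, 3))  # ~0.685 to fit vs ~0.731 to fill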

There are many reasons for having lots of pixels aside from printing and cropping. Manipulations are easier and look better. But let's face it, actually seeing those pixels is still the biggest reason for having them. So I came to the conclusion that I just haven't been seeing my photos, and now I am seeing them much better with a screen like this. Truth is, looking at pictures on it is better than any 35mm print, though not quite at the quality of a 35mm slide.

Dell should give me a cut for saying this.

Long ago I told people not to shoot on 1MP and 2MP digital cameras instead of film, because in the future, displays would get so good the photos will look obviously old and flawed. That day is now well here. Even my 3MP D30 pictures don't fill the screen. I wonder when I'll get a display that makes my 8MP pictures small.

Congress passes DTOPA -- blocking phones

Today, Congress passed, 410-15, the Delete Telephony Online Predators Act, or DTOPA. This act requires all schools and libraries to block access by default to the social networking system called the “telephone.” All libraries receiving federal funding, and schools receiving E-rate funding, must immediately bar access to this network. Blocks can be turned off, on request, for adults, and when students are under the supervision of an adult.

“This is not the end-all bill,” Rep. Fred Upton (R-Mich.) said. “But, we know sexual predators should not have the avenue of our schools and libraries to pursue their evil deeds.” The “telephone” social network allows voice conversation between a student and virtually any sexual predator in the world. Once a predator gets a child’s “number” or gives his number to the child, they can speak at any time, no matter where the predator is in the world.

Many children have taken to carrying small pocket “telephones” which can be signalled by predators at any time. Use of these will be prohibited.

Transit agencies -- allow a discount for people who travel together for ordinary trips.

Transit is of course more efficient than private cars, moving many people in one vehicle. But because a round trip for a couple or family involves buying 4 to 8 single tickets, couples and families who have cars will often take their cars unless parking is going to be a problem. For example, for us to go downtown it’s $6 within SF. For people taking BART from Berkeley or Oakland it’s $13.40 for 2 people. That makes it very tempting to take a car, even if it costs a similar amount (at 35 cents/mile, 15 of those for gasoline in a city), for the convenience and, outside of rush hour, speed.

So even if transit is the winning choice for one, it often isn’t for 2. And while 2 in a car is better than 1, an extra 2 on transit during non-peak hours is even better for traffic and the environment.

Many transit agencies offer a one-day family pass, mostly aimed at tourists. There may be some that also offer what I am going to propose: a more ordinary one-way or return ticket for groups of people living at the same address, sufficiently discounted to make them switch from car to transit.

This isn’t trivial: we don’t want drivers to have to check addresses on IDs as people get on the bus. They can check a simple card, though. For example, people could get simple, non-logged cards with their photos and some simple word, symbol or colour combination, so that the driver can tell right away that all the cards were issued together. (For example, they could all have the same randomly chosen word on them in large print, or the same 3 colour stripes.)

The household/family fare would be only available outside of hours where the transit cars get loaded to standing room. Past that point each rider should pay, and driving is usually rough anyway. Passengers could board, show their matching cards, and get reduced, or even free fares for the additional people. The driver could look at the photos but probably only needs to do that from time to time. (Mainly, we would be trying to stop somebody from getting a set of household cards, and selling cheap rides to random people at the stop with them. Not that likely an event anyways, but random photo checks could stop it.)

It’s harder to manage at automatic fare stations as found on subways. There you could get more abuse, but it might not be so much as to present a problem. The main issue would be homeless people “renting” card sets to groups of people who arrive at a turnstile. (At fancy pay-to-pee public toilets in SF, the homeless were given unlimited use tokens. Better that than have them urinate on the streets for lack of a quarter. They promptly got to renting these tokens to tourists wanting to use the toilets.)

If you’re not too worried about abuse, family tickets could simply be purchased in advance from a desk where they can check that everybody is in the same household. The adults would have to show up in person (easiest for couples), but they need not bring the kids, who already get reduced fares as it is, though on the household ticket they would probably be free.

I presume some transit agencies already do this since the one-day passes are common enough. How do they work it out? Is it aimed at locals rather than tourists? Do they assume locals close to the transit line get monthly passes?

No, Senator Stevens was misquoted...

Everybody in the blogosphere has heard something about Alaska’s Ted Stevens calling the internet a series of tubes.

They just heard him wrong. His porn filters got turned off and he discovered the internet was a series of pubes.

(And, BTW, I think we’ve been unfair to Stevens. While it wasn’t high traffic that delayed his E-mail — “an internet” — a few days, his description wasn’t really that bad… for a senator.)

Switching to popular vote from electoral college

A proposal by a Stanford CS Prof for a means to switch the U.S. Presidential race from electoral college to popular vote is gaining some momentum. In short, the proposal calls for some group of states representing a majority of the electoral college to agree to an inter-state compact that they will vote their electoral votes according to the result of the popular vote.

State compacts are like treaties but are enforceable by both state courts and federal law, so this has some merit. In addition, you actually don’t even need to get 270 electoral votes in the compact. All you really need is a much smaller number of “balanced” states. For example, perhaps 60 typically Republican electoral votes and 60 typically Democratic electoral votes. Maybe even less. For example I think a compact with MA, IL, MN (42 Dem) and IN, AL, OK, UT, ID, KS (42 Rep) might well be enough, certainly to start. Not that it hurts if CA, NY or TX join.

That’s because normally the electoral college already follows the popular vote. If it’s not going to, the race is very close, and a fairly small number of states in the compact would be assured to swing the electoral college to the popular vote in that case. There are a few exceptions I’ll talk about below, but largely this would work.

This is unlike proposals for states to, on their own, do things like allocate their electors based on popular vote within the state, as Maine does. Such proposals don’t gain traction because there is generally going to be somebody powerful in the state who loses under such a new rule. In a state solidly behind one party, they would be fools to effectively give electoral votes to the minority party. In a balanced state, they would be giving up their coveted “swing state” status, which causes presidential candidates to give them all the attention and election-year gifts.

Even if, somehow, many states decided to switch to a proportional college, it is an unstable situation. Suddenly, any one state that is biased towards one party (both in state government and electoral college history) is highly motivated to put their candidate over the top by switching back to winner-takes-all.

There’s merit in the popular-vote-compact because it can be joined by “safe” states, so long as a similar number of safe votes from the other side join up. The safe states resent the electoral college system; it gets them ignored. Since close races are typically decided by a single mid-sized state, even a very small compact could be surprisingly effective — just 3 or 4 states!

The current “swing state” set is AZ, AR, CO, FL, IA, ME, MI, MN, MO, NV, NH, NM, NC, OH, OR, PA, VA, WA, WV, and WI, though of course this set changes over time. However, once states commit to a compact, they will be stuck with it, even if it goes against their interests down the road.

The one thing that interferes with the small compact is that even the giant states like New York, Texas and California can become swing states if the “other” party runs a native candidate. California in particular. (In 1984 Mondale won only Minnesota, and even there he got just under 50% of the vote. Anything can happen.) That’s why you don’t just get an “instant effective compact” from just 3 states like California matching Texas and Indiana. But there are small sets that probably would work.

Also, a tiny compact such as I propose would not undo the “campaign only in swing states” system so easily. A candidate who worked only on swing states (and won them) could outdo the extra margin now needed because of the compact. In theory. If the compact grew (with non-swing states, annoyed at this, joining it) this would eventually fade.

Of course the next question may surprise you. Is it a good idea to switch from the electoral college system? Four times the winner of the popular vote has lost the White House (strangely, three of those were the three times the winner was the son (GWB, Adams) or grandson (Harrison) of a President). The framers of the constitution, while they did not envision the two party system we see today, intended for the winner of the popular vote to be able to lose the electoral college.

When they designed the system, they wanted to protect against the idea of a “regional” president: a candidate with extreme popularity in some small geographic region. Imagine a candidate able to take 90% of the vote in their home region, that region being 1/3 of the population. Imagine them being less popular in the other 2/3 of the country, only getting 31% of the vote there. This candidate wins the popular vote (90% × 1/3 + 31% × 2/3 is just under 51%), but would lose the electoral college (quite solidly). Real examples would not be so simple. The framers did not want a candidate who really represented only a small portion of the country in power. They wanted to require that a candidate have some level of national support.

The Civil War provides an example of the setting for such extreme conditions. In that sort of schism, it’s easy to imagine one region rallying around a candidate very strongly, while the rest of the nation remains unsure.

Do we reach their goal today? Perhaps not. However, we must take care before we abandon their goal to make sure it’s what we want to do.

Update: See the comments for discussion of ties. Also, I failed to discuss another important issue to me, that of 3rd parties. The electoral debacle of 2000 hurt 3rd parties a lot, with a major “Ralph don’t run” campaign that told 3rd parties, “don’t you dare run if you could actually make a difference.” A national popular vote would continue, and possibly strengthen the bias against 3rd parties. Some 3rd parties have been proposing what they call a “safe state” strategy, where they tell voters to only vote for their presidential candidate in the safe states. This allows them to demonstrate how much support they are getting (and with luck the press reports their safe-state percentage rather than national percentage) without spoiling or being accused of spoiling.

Of course, I think the answer for that would be a preferential ballot, which would have to be done on a state by state basis, and might not mesh well with the compact under discussion.

Judge allows EFF's AT&T lawsuit to go forward

Big news today. Judge Walker has denied the motions — particularly the one by the federal government — to dismiss our case against AT&T for cooperating with the NSA on warrantless surveillance of phone traffic and records.

The federal government, including the heads of the major spy agencies, had filed a brief demanding the case be dismissed on “state secrets” grounds. This common law doctrine, which is often frighteningly successful, allows cases to be dismissed, even if they are of great merit, if following through would reveal state secrets.

Here is our brief note, which has a link to the decision.

This is a great step. Further application of the state secrets rule would have made legal oversight of surveillance by spy agencies moot. We can write all the laws we want governing how spies may operate, and how surveillance is to be regulated, but if nobody can sue over violations of those laws, what purpose do they really have? Very little.

Now our allegations can be tested in court.

Paradox of abundance, with DVRs and Netflix/Peerflix

An interesting article in the WSJ yesterday on the paradox of abundance describes how many Netflix customers are putting many “highbrow” or “serious” movies on their lists, then letting them sit for months, unwatched, even returning them unwatched.

This sounds great for Netflix, of course, though it would be bad for Peerflix.

It echoes something I have been observing in my own household with the combination of a MythTV PVR with lots of disk space and a Peerflix subscription. When the time pressure of the old system goes away, stuff doesn’t get watched.

This is a counter to one of the early phenomena that people with PVRs like TiVo/MythTV experience, namely watching more TV because it’s so much more convenient and there’s much more to watch than you imagined. In particular, when you record a series on your PVR, you watch every episode of that series unless you deliberately try not to (as I do with my “abridged” series watching system, where I delete episodes of shows if they get bad reviews).

In the past, with live TV, you might be a fan of a series, but you were going to miss a few. They expected you to and included “Previously on…” snippets for you. For a few top series you set up the VCR, but even then it missed things. And only the most serious viewers had a VCR record every episode of every show they might have interest in. But that’s easy with the PVR.

We’ve found some of our series watching to be really delayed. Sometimes it’s deliberate — we won’t watch the cliffhanger final episode of a season until we know we have the conclusion at the start of the next season, though that has major spoiler risks. Sometimes there will be series fatigue, where too much of your viewing time has gone to a set of core series and you are keen for something else — anything else. Then the series languishes.

Now there is some time pressure in the DVR: eventually it runs out of disk space and gets rid of old shows. Which puts the DVDs from Peerflix or Netflix in even more trouble. Some have indeed gone 6 months without being watched.

As the WSJ article suggests, part of it relates to the style of show. One is always up for lighthearted shows, comedies etc. But sitting there for months is The Pianist. For some reason when we sit down in front of the TV and want to pick a show, Nazis never seem very appealing. Even though we know from recommendations that it’s a very good film.

When the cinema was the normal venue for films, the system of choice was different. First of all, if we decide we want to go out to a movie, we’ll consider the movies currently playing. Only a small handful will be movies we think worthwhile to go to. In that context, it’s much more likely we might pick a serious or depressing movie with Nazis in it. It could easily be the clear choice in our small list. In addition, we know that any given movie, especially a serious one, will only be in cinemas for a short time and may be gone in a few weeks. That’s even more true in smaller markets.

I’ve also noticed a push for shorter programming. When you’ve rented a DVD, your plan for the evening is clear: you are going to watch a movie at home. When you just sit down to choose something from your library, the temptation is strong to watch shorter things instead of making a 2 hour commitment to a longer one.

These factors are even more true when there are 2 or more people to please, instead of just one. The reality seems to be that when the choice is 2 hours of war or Nazis or a 22 minute TV comedy, the 22 minute comedy — even several of them in a row — is almost always the winner. Also popular are non-fiction shows, such as science and nature shows, which demand no strict time commitment since you can readily stop them in the middle to resume later with no suspense.

Anyway, as you can see, the WSJ article resonated with me. Since the phenomenon is common, the next question is what this means for the industry. Will the market for more serious movies be diminished? The public was already choosing lighter movies over serious ones, but now even those who do enjoy the serious movies may find themselves tending away from them.

Of course, if people take a DVD from Netflix and leave it on the shelf for months, that actually helps the market for the disk in the rental context, helps it quite a bit. Far more copies are needed to meet the demands of the viewers, even if there are fewer viewers. However, the real shift coming is to pay-per-view and downloading. If people look at the PPV menu and usually pick the light movie over the serious one, then the market for the serious ones is sunk.

Burning Man 2005 Panoramas

Hot on the heels of the regular photos the gallery of 2005 Burning Man Panoramas is now up. This year, I got to borrow a cherry picker at sunset on Friday for some interesting perspectives. The long ones are around 3400 by 52000 at full res (180 megapixels) and even the ones on the web are larger than before. Use F11 to put your browser into full screen mode.

This year I switched most of my panorama generation to Panorama Factory, which in its latest versions allows fine control of the blending zone, so I can finally use it to deal with moving people in scenes.

Here’s a view of the temple, mostly because it has the narrowest thumbnail.

On the refutation of Metcalfe's law

Recently IEEE Spectrum published a paper on a refutation of Metcalfe’s law — an observation (not really a law) by Bob Metcalfe that the “value” of a network increases with the square of the number of people/nodes on it. I was asked to be a referee for this paper, and while they addressed some of my comments, I don’t think they addressed the principal one, so I am posting my comments here now.

My main contention is that in many cases the value of a network actually starts declining after a while, eventually becoming inversely proportional to the number of people on it. That’s because noise (such as flamage and spam) and unmanageable signal (too many serious messages) rise with the size, and eventually reach a level where the cost of dealing with them surpasses the value of the network. I’m thinking of mailing lists in particular here.
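To illustrate with a toy model (made-up coefficients, purely for illustration, and not part of my referee’s comments): give each member a benefit from the other members, Metcalfe-style, but also an attention cost on traffic that grows with the group, and total value rises, peaks, then collapses:

    # Toy model only, with made-up coefficients: each member gains value from
    # the n-1 others (the n^2 term) but pays an attention cost that grows
    # with the square of the group, so total net value eventually falls.
    def net_value(n, gain=1.0, cost=0.01):
        per_member = gain * (n - 1) - cost * (n - 1) ** 2
        return n * per_member

    for n in (10, 50, 100, 150):
        print(n, round(net_value(n)))
    # Total value rises, peaks, then goes negative as noise swamps benefit.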

You can read my referee’s comments on Metcalfe’s law though note that these comments were written on the original article, before some corrections were made.

How only Google can pull off pay-to-perform ads

Bruce Schneier today compliments Google on trying out pay-to-perform ads as a means around click-fraud, but worries that this is risky because you become a partner with the advertiser. If their product doesn’t sell, you don’t make money.

And that’s a reasonable fear for any small site accepting pay-to-perform ads. If the product isn’t very good, you aren’t going to get a cut of much. Many affiliate programs really perform poorly for the site, though a few rare ones do well.

However, Google has a way around this. While the first step on Google’s path to success was to make a search engine that gave better results, how they did advertising was just as important. At a time when everybody was desperate for web advertising, and sites were willing to accept annoying flash animations, pop-ups, pop-unders and even adware, Google introduced ads that were purely text. In addition, they had the audacity, it seemed, to insist that advertisers bidding pay-per-click provide popular ads people would actually click through. If people are not clicking on your ad, Google stops running it. They even do this if there are no other ads to place on the page. They had the guts to say, “We’ll sell pay per click, but if your ad isn’t good, we won’t run it.” Nobody was turning down business then, and few are now.

Sites of course don’t want to be paid per click, or a cut of sales. They want a CPM, and that’s about all they want, as long as the ads are otherwise a good match for the site. Per-click costs and percentages are just a means to figuring out a CPM. Advertisers don’t want to pay CPMs, they want to pay for results, like clicks or sales.

Google found a great way to combine the two. They offered pay per click, but they insisted that the clicks generate enough CPM to keep them happy.
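The common yardstick that makes the combination work is effective CPM, revenue per thousand impressions. A quick sketch with made-up numbers shows how a per-click bid and a per-sale bid can compete for the same slot:

    # How a per-click bid and a per-sale bid are compared on one scale:
    # effective CPM (revenue per thousand impressions). Numbers are made up.
    def ecpm_per_click(ctr, cpc):
        return ctr * cpc * 1000

    def ecpm_per_sale(ctr, conversion_rate, payout):
        return ctr * conversion_rate * payout * 1000

    print(ecpm_per_click(0.02, 0.50))        # 2% CTR at $0.50/click -> $10.00
    print(ecpm_per_sale(0.02, 0.05, 8.00))   # 5% of clicks convert at $8 -> $8.00
    # The per-click ad wins this slot; a poorly converting per-sale ad won't run.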

The same will apply here. They will offer pay for performance, but those ads will be competing with bidders who are bidding pay-per-click. Google will run, as it always has, the type of ad that gets the highest results. If you bid pay-per-performance, and the PPCs are bidding higher, your ad won’t run. And even if there are no higher PPCs, if your ad isn’t working, converting into sales and generating revenue for Google, I suspect they will just not run it. They can afford to do this; they are Google.

And so they will get the best of both worlds again. Advertisers who can come up with products that can sell through ads will pay for actual sales, and love how they can calculate how well it does for them. Google will continue to get good CPMs, which is what they care about, and what Adsense partners (including myself) care about. And they will have eliminated clickfraud at least on these types of ads. Once again they stay on top.

(Disclaimer: I am a consultant to Google, and am in their Adsense program. If you aren’t in it, there is a link in the right-hand bar you can use to join that program. I get a pay-for-performance credit if you do. Unlike Google’s PPC ads, where Adsense members are forbidden by contract from encouraging people to click on the ads, there is no need for such strictures against pay-for-performance ads; in fact there’s every reason to encourage it.)

Remaining neutral on network neutrality -- it's the monopoly, stupid

People ask me about the EFF endorsing some of the network neutrality laws proposed in congress. I, and the EFF are big supporters of an open, neutral end-to-end network design. It’s the right way to build the internet, and has given us much of what we have. So why haven’t I endorsed coding it into law?

If you’ve followed closely, you’ve seen very different opinions from EFF board members. Dave Farber has been one of the biggest (non-business) opponents of the laws. Larry Lessig has been a major supporter. Both smart men with a good understanding of the issues.

I haven’t supported the laws personally because I’m very wary of encoding rules of internet operation into law. Just about every other time we’ve seen this attempted, it’s ended badly. And that’s even without considering the telephone companies’ tremendous experience and success in lobbying and manipulating the law. They’re much, much better at it than any of the other players involved, and their track record is to win. Not every time, but most of the time. Remember the past neutrality rules that forced them to resell their copper to CLECs so there could be competition in the DSL space? That ended well, didn’t it?

Read on…
