Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.

This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see menu on right) and some are sub blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.

Sysadmin services trading

I've ranted before about just how hard it has become to configure and administer computers. And there are services where you can hire sysadmins to help you, primarily aimed at novice users.

But we advanced users often need help today, too. Mostly when we run into problems we go to message boards, or do web searches and find advice on what to do. And once we get good on a package we can generally fix problems with it in no time.

I would love a service where I can trade my skill with some packages for help from others on other packages. There are some packages I know well, and could probably install for you or fix for you in a jiffy. Somebody else can do the same favour for me. In both cases we would explain what we did so the other person learned.

All of this would take place remotely, with VNC or ssh. Of course, this opens up a big question about trust. A reputation system would be a big start, but might not be enough. You would want a complete log of all files changed, and how they were changed -- this service might be better suited to editing scripts and configuration than to compiling new binaries. Best of all, you could arrange to have a virtualized copy of your machine around for the helper to use. After examining the differences, you could apply them to your real machine. In the end, though, you still need reputations so that people wanting to hack machines can't get into the system. Helpers might have to be vetted as much as any outside consultant you would hire for money.

There seems a real efficiency to be had if this could be made to work. How often have you pounded for hours on something that a person skilled with the particular software could fix in minutes? How often could you do the same for others? Indeed, in many cases the person helping you might well be one of the developers of a system, who also would be learning about user problems. (Admittedly those developers would quickly earn enough credit to not have to maintain any other part of their system.)

The real enabling tool would be truly secure operating systems, where you could trust a stranger to work on just one component.

Model airplane dogfights with LEDs

Lots of people love model airplanes, and I bet they would love to simulate dogfights. They can't fire actual projectiles, as that would be dangerous, expensive, unworkable due to the weight, and would actually damage the planes.

It should be possible to set up a system for dogfights using light, however. One way would be to have planes mount lasers that send out a coded pulse with a bit of dispersion, and have the other planes mount receivers with diffusers to pick up light from a lot of directions. It might be better to go in reverse, the way many shooting games do -- the planes broadcast a coded pulse from some bright LED in a specific colour and the "gun" is just a narrow sight that tries to pick up these pulses. When the gun gets one, it sends it down to the coordinator on the ground, and that tells the target plane it's been hit (possibly forcing it to leave the airspace after some number of hits, or impair the flying controls, etc.)
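The coded pulse itself can be very simple. Here is a toy encoding (my own illustration, not a real protocol): each plane broadcasts its ID as a bit pattern with a parity bit, and a "gun" that sights a pulse can validate it before reporting the hit to the ground coordinator:

```python
def encode_pulse(plane_id):
    """Encode a plane ID (0-255) as a bit list: 8 ID bits plus a parity bit."""
    bits = [(plane_id >> i) & 1 for i in range(7, -1, -1)]
    parity = sum(bits) % 2
    return bits + [parity]

def decode_pulse(bits):
    """Return the plane ID, or None if the parity check fails
    (e.g. noise, or only part of the pulse train was sighted)."""
    *id_bits, parity = bits
    if sum(id_bits) % 2 != parity:
        return None
    plane_id = 0
    for b in id_bits:
        plane_id = (plane_id << 1) | b
    return plane_id
```

A real system would want a longer code and error correction, since a fast-moving sight will clip pulses constantly.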

Of course you need authenticated equipment. If people provide their own it's too easy to cheat: one could make a gun sight with no barrel at all instead of a narrow one, or mount one on the ground. So some honour might be required here.

It would of course be hard to do, with no cockpit view. Some larger model planes can carry small video cameras for a more realistic dogfight of that sort, but I suspect people could figure something out. The gun could have sensors for the pulses that are wider than the actual "direct hit" sensor, allowing them to tell you when you're getting close, and even showing a screen on a laptop that is not a camera view from the plane but at least a view of how close you are to the target.

The wireless watch as a PDA/phone extension

I wrote earlier about the bluetooth vibrator watch. I pushed this in part to promote the idea that phones should (almost) never ring. That ringing is rude to others and violates your own privacy, too.

Sony, Citizen and some others are now releasing bluetooth watches that go beyond this. Your watch should become a very small control station for your larger PDA/phone. Of course digital watches have a small screen, and there are also some nice analog watches where the background of the watch is secretly a screen. This should become cheaper with time.

As before, when a call comes in, your watch should gently vibrate or even just tingle your skin with a small charge. On the screen should be the caller-ID, and the buttons should be marked with choices, such as rejecting the call or accepting it. (These features will be in some of the upcoming bluetooth watches) If you accept it, the caller would hear you saying that you are getting out your real headset/handset and will talk to them in a few seconds. If you were in a meeting, they might be told it will be more than a few seconds, as you must excuse yourself from the room.

Your watch of course knows if it is on your wrist in many ways, including temperature, so the phone can know to actually ring if you’ve taken the watch off — for example when going to bed, if you want it to ring when you’re in bed, that is.

As the screens increase in resolution, they could also show things like the subject of emails and pages. No more pulling out the blackberry or cell phone — just a subtle glance at your watch when it tingles. Be nice if you can set your presence on your watch so that all calls go to voice mail, too.

Most flip phones have a 2nd small screen on them so you can see the time and caller-id when the phone is closed. This would not be needed if you use a watch like this, so the cost of the phone can be reduced to make up for the more expensive watch.

Your watch could also bind to your desk phone at the office. And the phone would also know if you are in the office or not.

Imagine a world of peace where you’re never hearing phones going off, and you aren’t seeing people constantly pulling out phones and blackberries to check calls and messages. Imagine a world where people no longer wear cell phones on their belts, either.

The watch could have a small headset in it too, but that would add bulk, and I think it’s better to pull out a dedicated one.

The only real downside to this — you would probably have to charge your watch once a week. This might not easily fit in with the smaller ladies’ watch designs. It should be possible in any larger design. E-ink technology, which takes no power to maintain a display, could also make a great material for the background of your watch dial, or even display a tolerable virtual watch dial for the many who prefer an analog set of hands. It might be necessary to design a protocol even lower power than bluetooth to give the watches even better battery life, and of course a standard charging interface found in hotels and offices would be great.

I think once this happens it will be hard to imagine how we tolerated it any other way. Yes, people get fun and status from their ringtones, but I think we can handle sacrificing that.

The watch could also be a mini-screen for a few other PDA and phone functions. For example, if you use a bluetooth earpiece, you can keep your phone in your pocket or purse, which is really nice, but sometimes you want a bit of display, for example to assist with voice command mode.

(Of course if you know about Voxable, you know I believe phone calls should simply not happen at all at the wrong times, but that’s a different leap.)

Peerflix goes to dollar prices

I have written several times before about Peerflix — now that I’ve started applying some tags as well as categories to my items, you can see all the Peerflix stories using that link — and the issues behind doing a P2P media trading/loaning system. Unlike my own ideas in this area, Peerflix took a selling approach. You sold and bought DVDs, initially for their own internal currency. It was 3 “Peerbux” for new releases, 2 for older ones, and 1 for bargain bin disks.

That system, however, was failing. You would often be stuck for months or more with an unpopular disk. Getting box sets was difficult. So in December they moved to pricing videos in real dollars. I found that interesting because it makes them, in a way, much closer to a specialty eBay. There are still a lot of differences from eBay — only unboxed disks are traded, they provide insurance for broken disks and most importantly, they set the price on disks.

One can trade DVDs on eBay fairly efficiently, but it requires a lot of brain effort because you must put time into figuring out good bid and ask prices for items of inconsequential price. Peerflix agreed that this is probably a poor idea, so they decided to set the prices. I don’t know how they set their initial prices, but it may have been by looking at eBay data or similar information.

Photo editors: Embed your text in the jpegs

Hey photo editing programs — I’m looking at you, Photoshop — a lot of you allow people to place text into graphic images, usually as a text layer. Most graphics with text on the web are made this way. Then we export the image as a jpeg or png/gif, flattening the layers so our artful text is displayed. This is how all the buttons with words are made, as well as the title banner graphics on most web sites.

So photo editors, when you render and flatten the layers, take the visible text (you know what it is) and include it in a tag inside the file, such as the EXIF information — possibly as the caption, if there isn’t already one. Let us disable this, even for just a single layer, but providing the text should be the default.
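For PNG output (one of the formats mentioned above) the mechanics are trivial, since PNG has a native tEXt chunk for exactly this. A toy pure-Python sketch of splicing one in after the header — not what Photoshop actually does internally, just an illustration that the file formats already support it:

```python
import struct
import zlib

def png_chunk(ctype, data):
    """Build a PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def embed_text(png_bytes, keyword, text):
    """Insert a tEXt chunk right after IHDR, where search tools
    and screen readers could find the flattened caption."""
    sig_end = 8  # 8-byte PNG signature
    ihdr_len = struct.unpack(">I", png_bytes[8:12])[0]
    ihdr_end = sig_end + 12 + ihdr_len  # length + type + data + crc
    chunk = png_chunk(b"tEXt",
                      keyword.encode("latin-1") + b"\x00" + text.encode("latin-1"))
    return png_bytes[:ihdr_end] + chunk + png_bytes[ihdr_end:]
```

For JPEG the equivalent would go in an EXIF or XMP segment, which real editors would write through their metadata libraries.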

Then all the web spiders/search engines would be able to find that text. Web page editors could offer that text as a possible “alt” text for the graphic. And the blind would be able to have their web-page readers read to them the text embedded in graphics.

We're #12. We're #12!

From the shameless narcissism department: I was surprised to see myself and the EFF picked by PC World today at #12 on their 50 most important people on the web list. I’m really there as a proxy for the EFF, I suspect, but it’s great to see our work recognized. I’m pleased to say the EFF is going like gangbusters right now, with so many cases under our wing and many thousands of new members in the last year, thanks in part to the AT&T lawsuit and others. Of course, every year we must repeat our fundraising efforts all over again — the vast majority of EFF money comes from individual members and donors, very little from corporations, and only a small amount from foundation grants.

It’s also good to see fellow EFF board members Larry Lessig, Brewster Kahle and Dave Farber on the list, along with many other EFF friends and associates, and my Bittorrent compatriot Bram Cohen appears at #3. Of course, this and $4 will get you a cup of coffee.

Calendar software, notice when I fly

Most of us, when we travel, put appointments we will have while on the road into our calendars. And we usually enter them in local time, i.e. if I have a 1pm appointment in New York, I set it for 1pm, not 10am in my Pacific home time zone. While some calendar programs let you specify the time zone for an event, most people don't, and many people also don't change the time zone when they cross a border, at least not right away. (I presume that some cell phone PDAs pick up the new time from the cell network and import it into the PDA, if the network provides that.) Many PDAs don't really even let you set the time zone, just the time.

Here's an idea that's simple for the user. Most people put their flights into their calendars. In fact, most of the airline web sites now let you download your flight details right into your calendar. Those flight details include flight times and the airport codes.

So the calendar software should notice the flight, look up the destination airport code, and trigger a time zone change during the flight. This would also let the flight duration look correct in the calendar view window, though it would mean some "days" would be longer than others, and hours would repeat or be missing in the display.

You could also manually enter magic entries like "TZ to PST" or similar which the calendar could understand as a command to change the zone at that time.
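Both tricks — spotting an airport-code route in a flight entry, and honouring a manual "TZ to PST" command — are a few lines of scanning. A sketch, with a deliberately tiny airport/zone table (a real implementation would use a full airport database and proper timezone handling):

```python
# Hypothetical lookup tables for the sketch; real ones would be complete.
AIRPORT_TZ = {"SFO": "America/Los_Angeles", "JFK": "America/New_York"}
NAMED_TZ = {"PST": "America/Los_Angeles", "EST": "America/New_York"}

def tz_changes(events):
    """Scan (time, title) calendar entries for flight routes like 'SFO-JFK'
    or magic 'TZ to XXX' commands; return (time, new_zone) changes to apply."""
    changes = []
    for when, title in events:
        if title.startswith("TZ to "):
            zone = NAMED_TZ.get(title[6:].strip())
            if zone:
                changes.append((when, zone))
        else:
            for word in title.split():
                if "-" in word:
                    orig, _, dest = word.partition("-")
                    if orig in AIRPORT_TZ and dest in AIRPORT_TZ:
                        changes.append((when, AIRPORT_TZ[dest]))
    return changes
```

The calendar would then shift its display zone at each change point, which is what makes the "days" stretch or shrink in the view window.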

Of course, I could go on many long rants about the things lacking from current calendar software, and perhaps at some point I will, but this one struck me as interesting because, in the downloaded case, the UI for the user is close to invisible, and I always like that.

This becomes important when we start deriving our "presence" from our calendar, or getting alerts from our devices about events: we don't want these things to trigger in the wrong time zone.

Without knowing it, we're all in the gene databases already

I have written before how future technology affects our privacy decisions today. DNA collection is definitely one of these areas. As you may know, law enforcement in the USA is now collecting DNA from people convicted of crimes, and even those arrested in a number of jurisdictions — with no ability to expunge the data if not found guilty. You may feel this doesn’t affect you, as you have not been arrested.

As DNA technology grows, bioinformatics software is becoming able to determine that a sample of DNA is a “near match” for somebody in a database. For example, they might determine that a person in the database is not the source of the DNA being studied, but is a relative of that person.

In a recent case, a DNA search turned up not the perpetrator, but his brother. They investigated the male relatives of the brother and found and convicted the man in question.

Zphone and the "rich little attack"

I was discussing the Zphone encrypting telephone system with Phil Zimmermann today. In his system, phone calls are encrypted with opportunistic, certificateless cryptography, which I applaud because it requires zero user interface and no centralization. It is vulnerable to “man in the middle” attacks only if the MITM can be present in all communications.

His defence against MITM is to allow the users of the system to do a spoken authentication protocol at any time in their series of conversations. While it’s good to do it on the first call, his system works even when done later. In their conversation, they can, using spoken voice, read off a signature of the crypto secrets that are securing their conversation. The signatures must match — if they don’t, a man-in-the-middle is possibly interfering.
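The spoken check works because both ends derive a short string from the same negotiated key material; a MITM relaying two different keys produces two different strings. A simplified illustration (this is my own toy, not the actual ZRTP short-authentication-string construction):

```python
import hashlib

# 16 phonetic words, so each word encodes 4 bits of the digest.
WORDS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliet", "kilo", "lima",
         "mike", "november", "oscar", "papa"]

def spoken_signature(shared_secret, n_words=4):
    """Derive a short word sequence from the negotiated key material.
    Alice and Bob each compute this and read it aloud; if a MITM is
    bridging two separate key exchanges, the words won't match."""
    digest = hashlib.sha256(shared_secret).digest()
    return [WORDS[digest[i] & 0x0F] for i in range(n_words)]
```

Four words of 4 bits each is only 16 bits, but that is plenty when the attacker must commit to the substitution before knowing when, or whether, the humans will perform the check.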

I brought up an attack he had thought of and called the Rich Little attack, involving impersonation with a combination of a good voice impersonation actor and hypothetical computerized speech modification that turns a good impersonator into a near perfect one. Phil believes that trying to substitute voice in a challenge that can come at any time, in any form, in any conversation is woefully impractical.

A small amount of thought made me produce this attack: two impersonators. Early on in a series of conversations, the spy agency trying to break in brings in two impersonators who have listened to Alice and Bob respectively (the agency hears their calls) and learned their mannerisms. A digital audio processor helps convert the tones of their voices. That’s even easier on an 8khz channel.

Subsidize customers, not phones

As you may know, if you buy a cell phone today, you have to sign up for a 1 or 2 year contract, and you get a serious discount on the phone, often as much as $200. The stores that sell the phones get paid this subsidy when they sell to you, if you buy from a carrier you just get a discount. The subsidy phones are locked so you can’t go and take them to another carrier, though typically you can get them unlocked for a modest fee either by the carrier or unlock shops.

The phones are locked in a different way, too, in that this subsidy pretty much makes everybody buy their phone through a carrier. Since you are going to sign up with a carrier for a year or two anyway, you would be stupid not to. And except for prepaid, signing up even without a subsidy phone still requires a contract; you just don’t get anything for it.

Because of this, it is carriers that shop for phones, not consumers. The carriers tell the handset makers what to provide, and quite often, what not to provide. Subsidy phones tend to come with features disabled, such as bluetooth access for your laptop to sync the address book or connect to the internet. A number of PDA phones are sold with 802.11 access in them in Europe, but this feature is removed for the U.S. market. The carriers don’t want you using 802.11 to bypass their per minute fees, or they want to regulate your data use.

This method of selling phones is the biggest crippler of the cell phone industry. If consumers bought phones directly, there would be more competition and more features. But less control by the carriers.

That’s the only reason I can think of why they don’t do what seems obvious to me. If you walk up to a carrier and say you will sign the 2 year contract, but want to bring your own phone, they should be very happy to hear that and give you the subsidy. They can give it to you as a $10 discount for 20 months instead of $200 all at once and it would actually be cheaper for them. This would allow a much better resale market in used phones, and allow new and innovative phones — even open source homebuilt phones. Competition and free markets means innovation.
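The arithmetic on "cheaper for them" is just present value: $10 a month for 20 months costs the carrier less than $200 today, because the later credits are paid in discounted dollars. A quick check, assuming (my assumption, for illustration) an 8% annual cost of capital:

```python
def pv_of_discount(monthly_credit, months, annual_rate=0.08):
    """Present value to the carrier of a stream of monthly service credits,
    for comparison against paying the same subsidy up front."""
    r = annual_rate / 12
    return sum(monthly_credit / (1 + r) ** m for m in range(1, months + 1))
```

At that rate, `pv_of_discount(10, 20)` comes to roughly $187 — and the carrier also keeps the remaining credits if you break the contract early.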

They could even exercise some control if they truly needed to. They need not let you just bring in any phone, they could still specify which ones are approved. I think that would be stupid, but they could do it. However, this would still not let them so easily control what applications you could get on the phone. For example, one reason they disabled bluetooth features (other than headset) on many phones is they wanted you to pay their fees to download your apps and photos over the network, not just sync them up to your computer for free. An open phone market would deprive them of that revenue.

So frankly, if they are so worried about just these revenue issues, then give me less subsidy. Figure out what you’re losing by letting me have my choice of phone, and take it out of the subsidy. I can still put in my choice of phone today if I am willing to pay the extra $200, but of course few want to do that, so there’s no market for such phones. This would improve that.

There must be some number which makes this work, and the innovation generated would benefit the carriers in the long run. In Asia, subsidies have largely gone away, and there is word this trend may be moving to Europe, where at least carriers are happy to have 802.11 in their phones. Let’s hope.

It's OK, the internet will scale fine

I’ve been seeing a lot of press lately worrying that the internet won’t be able to handle the coming video revolution, that as more and more people try to get their TV via the internet, it will soon reach a traffic volume we don’t have capacity to handle. (Some of this came from a Google TV exec’s European talk, though Google has backtracked a bit on that.)

I don’t actually believe that, even given the premise behind that statement, which is traditional centralized download from sites like Youtube or MovieLink. I think we have the dark fiber and other technology already in place, with terabits over fiber in the lab, to make this happen.

However, the real thing that they’re missing is that we don’t have to have that much capacity. I’m on the board of Bittorrent Inc., which was created to commercialize the P2P file transfer technology developed by its founder, and Monday we’re launching a video store based on that technology. But in spite of the commercial interest I may have in this question, my answer remains the same.

The internet was meant to be a P2P network. Today, however, most people download more than they upload, and have a connection which reflects this. But even with the reduced upload capacity of home broadband, there is still plenty of otherwise unused upstream sitting there ready. That’s what Bittorrent and some other P2P technologies do — they take this upstream bandwidth, which was not being used before, and use it to feed a desired file to other people wishing to download the file. It’s a trade: others serve pieces to you and you serve pieces to them. It allows a user with an ordinary connection to publish a giant file where this would otherwise be impossible.

Yes, as the best technology for publishing large files on the cheap, it does get used by people wanting to infringe copyrights, but that’s because it’s the best, not because it inherently infringes. It also has a long history of working well for legitimate purposes: it is one of the primary means of publishing new Linux distros today, and will be distributing major Hollywood studio movies starting Feb 26.

Right now the clients connect with whoever they can connect with, but they favour other clients that send them lots of stuff. That makes a bias towards other clients to whom there is a good connection. While I don’t set the tech roadmap for the company, I have expectations that over time the protocol will become aware of network topology, so that it does an even better job of mostly peering with network neighbours. Customers of the same ISP, or students at the same school, for example. There is tons of bandwidth available on the internal networks of ISPs, and it’s cheap to provide there. More than enough for everybody to have a few megabits for a few hours a day to get their HDTV. In the future, an ideal network cloud would send each file just once over any external backbone link, or at most once every few days — becoming almost as efficient as multicasting.
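One crude way to make peer selection topology-aware (my sketch, not how any actual client ranks peers) is to prefer candidates whose IP shares a long prefix with yours, since customers of the same ISP or students on the same campus tend to sit in the same address blocks:

```python
import ipaddress

def prefix_match_len(a, b):
    """Number of leading bits two IPv4 addresses share: a rough proxy
    for network closeness (same ISP/campus implies long shared prefixes)."""
    xa = int(ipaddress.IPv4Address(a))
    xb = int(ipaddress.IPv4Address(b))
    return 32 - (xa ^ xb).bit_length()

def rank_peers(my_ip, peers):
    """Order candidate peers so the topologically closest come first."""
    return sorted(peers, key=lambda p: prefix_match_len(my_ip, p), reverse=True)
```

A serious version would use AS numbers or hints published by the ISP rather than raw prefixes, but even this crude ranking would keep much of the traffic off the external backbone links.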

(Indeed, we could also make great strides if we were to finally get multicasting deployed, as it does a great job of distributing the popular material that still makes up most of the traffic.)

So no, we’re not going to run out. Yes, a central site trying to broadcast the Academy Awards to 50 million homes won’t be able to work. And in fact, for cases like that, radio broadcasting and cable (or multicasting) continue to make the most sense. But if we turn up the upstream, there is more than enough bandwidth to go around within every local ISP network. Right now most people buy ADSL, but it’s not out of the question that we might see devices in this area become soft-switchable as to how much bandwidth they do up and how much down, so that if upstream is needed, it can be had on demand. It doesn’t really matter to the ISP — in fact, since most users don’t use upstream normally, the ISP has wasted capacity out to the network unless it also does hosting to make up for it.

There are some exceptions to this. In wireless ISP networks, there is no separate up and downstream, and that’s also true on some ethernets. For wireless users, it’s better to have a central cache just send the data, or to use multicasting. But for the wired users it’s all 2-way, and if the upstream isn’t used, it just sits there when it could be sending data to another customer on the same DSLAM.

So let’s not get too scared. And check out the early version of bittorrent’s new entertainment store and do a rental download (sadly only with Windows XP based DRM, sigh — I hope for the day we can convince the studios not to insist on this) of multiple Oscar winner “Little Miss Sunshine” and many others.

A solar economics spreadsheet

In light of my recent threads on CitizenRe I built a spreadsheet to do solar energy economic calculations. If you click on that, you can download the spreadsheet to try for yourself. If you don’t have a spreadsheet program (I recommend the free Gnumeric or Open Office) it’s also up as a Google Solar Spreadsheet but you may need a Google account to plug in your own numbers.
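The core of any such calculation is simple enough to show inline. Here is the payback-period piece in Python, with made-up inputs; the spreadsheet itself also models financing, panel degradation, and maintenance, which this sketch ignores:

```python
def payback_years(system_cost, kwh_per_year, grid_price, annual_increase=0.03):
    """Years until cumulative grid savings cover the system cost,
    assuming grid prices rise by annual_increase each year."""
    saved, year, price = 0.0, 0, grid_price
    while saved < system_cost:
        saved += kwh_per_year * price
        price *= 1 + annual_increase
        year += 1
        if year > 100:
            return None  # never pays back within a century
    return year
```

For example, a $20,000 system producing 5,000 kWh/year against $0.12/kWh grid power pays back in a few decades — which is exactly why a vastly cheaper plant, if real, changes everything.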

Do taxi monopolies make sense in the high-tech world?

Many cities (and airports) have official taxi monopolies. They limit the number of cabs in the city, and regulate them, typically by issuing “medallions” to cabs or drivers or licences to companies. The most famous systems are in London and New York, but they are in many other places. In New York, the medallions were created earlier in the century, and have stayed fixed in number for decades after declining from their post-creation peak. The medallion is a goldmine for its “owner.” Because NY medallions can be bought and sold, recently they have changed hands at auction for around $300,000. That 300K medallion allows a cab to be painted yellow, and to pick up people hailing cabs in the street. It’s illegal for ordinary cars to do this. Medallion owners lease the combination of cab and medallion for $60 to $80 for a 7-9 hour shift, I believe.

Here in San Francisco, the medallions are not transferable, and in theory are only issued (after a wait of a decade or more) to working cab drivers, who must put in about 160 4-hour shifts per year. After that, they can and do rent out their medallion to other drivers, for a more modest rental income of about $2,000 per month.

On the surface, this seems ridiculous. Why do we even need a government monopoly on taxis, and why should this monopoly just be a state-granted goldmine for those who get their hands on it? This is a complex issue, and if you search for essays on taxi medallions and monopoly systems you will find various arguments pro and con. What I want to get into here is whether some of those arguments might be ripe for change, in our new high-tech world of computer networks, GPSs and cell phones.

In most cities, there are more competitive markets for “car services” which you call for an appointment. They are not allowed to pick up hailing passengers, though a study in Manhattan found that they do — 2 of every 5 cars responding to a hail were licenced car services doing so unlawfully.

CitizenRe, real or imagined -- a challenge

Recently I opened up a surprising can of worms with a blog post about CitizenRe wondering if they had finally solved the problem of making solar power compete with the electrical grid. At that post you will see a substantial comment thread, including contributions by executives of the firm, which I welcome. At first, I had known little about CitizenRe and the reputation it was building. I thought I should summarize some of the issues I have been considering and other elements I have learned.

CitizenRe’s offer is very appealing. They claim they will build a plant that can make vastly cheaper solar. Once they do, they will install it on your roof and “rent” it to you. You buy all the power it produces from them at a rate that beats your current grid power cost. Your risks are few — you put down a deposit of $500 to $1500 depending on system size, you must cover any damage to the panels, and they offer removal and replacement for a very modest fee if you need to reroof or even move. You lock in your rate, which is good if grid rates go up and bad if grid rates go down or other solar becomes cheaper, but on the whole it’s a balanced offer.

In fact, it seems too good to be true. It’s way, way cheaper than any offering available today. Because it sounds so good, many people are saying “show me.” I want to see just how they are going to pull that off. Many in the existing solar industry are saying that much louder. They are worried that if CitizenRe fails to deliver, all their customers will have been diverted to a pipedream while they suffer financial ruin. Of course, they are also worried that if CitizenRe does deliver, they will be competed out of business, so they do have a conflict of interest.

Here are some of the things that make me skeptical.

When should a password be strong

If you’re like me, you select special unique passwords for the sites that count, such as banks, and you use a fairly simple password for things like accounts on blogs and message boards where you’re not particularly scared if somebody learns the password. (You had better not be scared, since most of these sites store your password in the clear so they can mail it to you, which means they learn your standard account/password and could pretend to be you on all the sites you duplicate the password on.) There are tools that will generate a different password for every site you visit, and of course most browsers will remember a complete suite of passwords for you, but neither of these work well when roaming to an internet cafe or friend’s house.

However, every so often you’ll get a site that demands you use a “strong” password, requiring it to be a certain length, to have digits or punctuation, spaces and mixed case, or some subset of rules like these. This screws you up if the site is unimportant and you want to use your easy-to-remember password: you must generate a variant of it that meets their rules, and then remember it. These are usually sites where you can’t imagine why you want to create an account in the first place, such as stores you will shop at once, or blogs you will comment on once, and so on.

Strong passwords make a lot of sense in certain situations, but it seems some people don’t understand why. You need a strong password in case it is possible or desirable for an attacker to do a “dictionary” attack on your account. This means they try thousands, or even millions, of passwords until they hit the one that works. If you use a dictionary word, they can try the most common words in the dictionary and learn your password.
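The asymmetry is easy to demonstrate. A dictionary attack only has to walk a word list, while a random password forces the attacker through the whole character space. A toy sketch (the word list is a tiny hypothetical sample):

```python
COMMON_WORDS = ["password", "letmein", "dragon", "monkey", "qwerty"]

def dictionary_attack(check, wordlist=COMMON_WORDS):
    """Try each candidate against the login check; return the password
    if it was a common word, else None."""
    for w in wordlist:
        if check(w):
            return w
    return None

def search_space(alphabet_size, length):
    """Worst-case guesses for a random password over a given alphabet."""
    return alphabet_size ** length
```

Eight random lowercase letters give 26^8 ≈ 2×10^11 possibilities; add digits, case, and punctuation and it's 95^8 ≈ 6×10^15, while any password on a common-words list falls in a handful of guesses regardless of length.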

Upgrading to Drupal 5.1

I have upgraded the site to the latest Drupal 5.1. For a short time that means some features I coded won't be available until I re-patch, such as my anti-spam comment tool (comments are moderated for now.) If stuff is broken, let me know. (I don't know what happened to the category menus and will try to get them back.) I'll also be adding some new features, such as RSS feeds of comments and nodes and some other things mostly only seen by those who create an account.

I've put in Drupal's simple captcha module, which does a math problem instead of the old simple question I had. It seems to be generating an SQL error, but is otherwise working. I may change it to the simple text question, as a default captcha is subject to spammer attack.

Drupal has had a pretty terrible upgrade procedure for some time now, with upgrading consisting of simply replacing the entire file tree and protecting your local config. This has no accounting for local changes to code or even installed modules. At least in 5.0 they have moved to putting non-core modules and themes in their own site-only directory. I'm also now installing from CVS, which should let me make my changes and import their changes as well.

Anti-gerrymandering formulae

A well known curse of many representative democracies is gerrymandering. People in power draw the districts to assure they will stay in power. There are some particularly ridiculous cases in the USA.

I was recently pointed to a paper on a simple, linear system which tries to divide up a state into districts using the shortest straight line that properly divides the population. I have been doing some thinking of my own in this area so I thought I would share it. The short-line algorithm has the important attribute that it’s fixed and fairly deterministic. It chooses one solution, regardless of politics. It can’t be gamed. That is good, but it has flaws. Its district boundaries pay no attention to any geopolitical features except state borders. Lakes, rivers, mountains, highways, cities are all irrelevant to it. That’s not a bad feature in my book, though it does mean, as they recognize, that sometimes people may have a slightly unusual trek to their polling station.
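To see why such a scheme can't be gamed, consider how little input it takes: residents' locations and a district count, nothing else. Here is a toy cousin of the idea (not the paper's actual shortest-splitline algorithm, which picks the shortest geographic cut): recursively split the population along the longer bounding-box axis, in proportion to the number of districts on each side:

```python
def split_districts(points, n):
    """Recursively split resident points (x, y) into n districts of
    near-equal population, cutting along the longer axis each time.
    Deterministic and blind to politics, like the splitline method."""
    if n == 1:
        return [points]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    axis = 0 if max(xs) - min(xs) >= max(ys) - min(ys) else 1
    pts = sorted(points, key=lambda p: p[axis])
    n_left = n // 2
    cut = len(pts) * n_left // n  # population proportional to district count
    return (split_districts(pts[:cut], n_left)
            + split_districts(pts[cut:], n - n_left))
```

Given the same census data, anyone re-running this gets the same map, which is the whole point: there is no knob for an incumbent to turn.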

Now that virtualizers are here, let's default to letting you run your old system

Virtualizer technology, which lets you create a virtual machine in which to run another “guest” operating system on top of your own, seems to have arrived. It’s common for servers (for security) and for testing, as well as for things like running Windows on Linux or a Mac. There are several good free ones. One, kvm, is built into the latest Linux kernel (2.6.20). Microsoft offers their own.

I propose that when an OS distribution does a major upgrade, it encapsulate your old environment as much as possible in a compressed virtualizer disk image. Then it should allow you to bring up your old environment on demand in a virtual machine. This way you can be confident that you can always get back to programs and files from your old machine — in effect, you are keeping it around, virtualized. If things break, you can see how they broke. In an emergency, you can go back and do things within your old machine. It can also allow you to migrate functions from your old machine to your new one more gradually. Virtual machines can have their own IP address (or even keep the original one). While they can’t access all the hardware, they can do quite a bit.

Of course this takes lots of disk space, but disk space is cheap, and the core of an OS (i.e. not including personal user files like photo archives and videos) is usually only a few gigabytes — peanuts by today’s standards. There is a risk here: if you run the old system and give it access to those personal files (for example, by running your photo organizer) you could do some damage. OSs don’t do a great job of dividing “your” files for OS and program config from “your” large data repositories. One could imagine an overlay filesystem which can only read the real files, and puts any writes into an overlay seen only by the virtual mount.
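The overlay idea can be illustrated in miniature with a hypothetical copy-on-write store: reads fall through to the real (read-only) data, while writes and deletes land in an overlay layer that only the virtual machine would see. A real implementation would of course work at the filesystem level, not in Python — this just shows the semantics.

```python
class OverlayStore:
    """Copy-on-write view: reads fall through to a read-only base;
    writes and deletes are recorded only in the overlay layer."""

    _DELETED = object()  # sentinel marking a file deleted in the overlay

    def __init__(self, base):
        self.base = base      # the real files; never modified
        self.overlay = {}     # changes visible only to the virtual machine

    def read(self, path):
        if path in self.overlay:
            if self.overlay[path] is self._DELETED:
                raise FileNotFoundError(path)
            return self.overlay[path]
        return self.base[path]

    def write(self, path, data):
        self.overlay[path] = data  # the base copy stays untouched

    def delete(self, path):
        self.overlay[path] = self._DELETED

    def diff(self):
        """What the guest changed -- reviewable before applying for real."""
        return dict(self.overlay)
```

The `diff()` view is the payoff: after the virtual session, you can examine exactly what the old system tried to change before deciding whether to apply any of it to your real machine.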

One can also do it the other way — run the new OS in the virtual machine until you have it tested and working, and then “flip the switch” to make the new OS be native and the old OS be virtual at the next boot. However, that means the new OS won’t get native hardware access, which you usually want when installing and configuring an OS upgrade or update.

All this would be particularly handy if doing an “upgrade” that moves from, say, Fedora to Ubuntu, or more extreme, Windows to Linux. In such cases it is common to just leave the old hard disk partition alone and make a new one, but then one must dual boot. Having the automatic ability to virtualize the old OS would be very handy for doing the transition. Microsoft could do the same trick for upgrades from old versions to Vista.

Of course, one must be careful the two machines don’t look too alike. They must not use the same MAC address or IP if they run internet services. They must, temporarily at least, have a different hostname. And they must not make incompatible changes, as I noted, to the same files if they’re going to share any.

Since hard disks keep getting bigger with every upgrade, it’s not out of the question that you might keep your entire machine history as a series of virtual machine images. You could imagine going back to the computer environment you had 20 years ago, on demand, just for fun, or to recover old data formats — you name it. With disks growing as they are, we need not throw anything away, even entire computer environments.

Social networking sites -- accept you won't be the only one, and start interoperating.

So many social networking sites (LinkedIn, Orkut, Friendster, Tribe, Myspace etc.) seem bent on being islands. But there can’t be just one player in this space, not even one per niche, and when you join a new one it’s like starting all over again. I routinely get invitations to join new social applications, and I just ignore them. It’s not worth the effort.

At some point, 2 or more of the medium sized ones should realize that the way to beat #1 is to find a way to join forces. To make it possible on service A to tie to a friend on service B, and to get almost all the benefits you would have if both people were on the same service. Then you can pick a home service, and link to people on their home services.

This is a tall order, especially while protecting highly private information. It is not enough to simply define a file format, like the FOAF format, for transporting data from one service to another. At best that’s likely only to get you the intersection of features of all the services using the format, and an aging intersection at that.

How to do this while preserving the business models and uniqueness of the services is challenging. For example, some services want to charge you for distant contacts or certain types of searches of your social network. And what do you do when a FoF link involves the first friend being on service B and the FoF being on service C?

Truth is, we all belong to many social networks. They won’t all be in one system, ever.

You can’t just have routine sharing. This is private information, we don’t want spammers or marketers harvesting it.

The interchange format will have to be very dynamic. That means that as soon as one service supports a new feature, it should be possible for the format to start supporting it right away, without a committee having to bless a new standard. That means different people will do the same thing in different ways, and that has to be reconciled nicely in the future, not before we start using it.
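One concrete way to keep a format dynamic is for every service to carry along fields it doesn't yet understand, rather than dropping them on import. Here's a sketch of that pass-through discipline; the field names and functions are hypothetical, not from any actual interchange spec.

```python
import json

# What this particular service knows how to handle (hypothetical).
KNOWN_FIELDS = {"name", "homepage", "friends"}

def import_profile(raw_json):
    """Split an incoming profile into fields we handle and fields we
    merely carry along, so a later export loses nothing."""
    data = json.loads(raw_json)
    known = {k: v for k, v in data.items() if k in KNOWN_FIELDS}
    passthrough = {k: v for k, v in data.items() if k not in KNOWN_FIELDS}
    return known, passthrough

def export_profile(known, passthrough):
    """Re-emit everything, including fields some newer service defined."""
    return json.dumps({**known, **passthrough}, sort_keys=True)
```

With this rule, when one service invents a new feature, its data survives a round trip through services that have never heard of it — no committee blessing required before the feature can propagate.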

Of course, at the same time I remain curious about just what they hope for us to do with these social networks. So far I have mostly seen them as a source of entertainment. Real life-altering experiences are rare. Some are using them for business networking and job hunting. Mailing FoFs didn’t really work out; it quickly became more spam than anything. Searching a network (the ideal app for Google’s Orkut) has not yet been done well.

Perhaps the right answer is to keep the networks simple and then let the applications build on top of them, independent of how the networks themselves are implemented. This means, however, a way to give an individual application access to your social network and — this is tricky — the social networks of your friends. Perhaps what we need is a platform, implemented by many, upon which social applications can then be built by many. However, each application will need to ask for access, which might encourage applications to band together and ask as a group. The platform providers should provide few applications themselves. In effect, even browsing your network is not an application the provider should offer, as that has to travel over many providers.

Once some smaller networks figure this out, the larger ones will have to join or fall. Because I don’t want to have to keep joining different networks, but I will join new applications based on my network.

Farewell, Studio 60 on the Sunset Strip

I’ve decided to stop watching Studio 60. (You probably didn’t even know I was watching it, but I thought it was worth outlining my reasons for giving up on it.)

Studio 60 was hailed as the most likely great show of this season, with good reason, since it’s from Aaron Sorkin, creator of one truly great show (the West Wing) and one near-great (Sportsnight.) Sorkin is deservedly hailed for producing TV that’s smart and either amusing or meaningful, and that’s what I seek. But I’m not caring about the characters on Studio 60.

I think Sorkin’s error was a fundamental conceit — that the workings of TV production will be as interesting to the audience as they are to the creators. Now I’m actually more interested than most in this, having come from a TV producing family, and with a particular interest in the world of comedy and Saturday Night Live. It’s not simply that this was a “Mary Sue” where Sorkin tries to tell us how he would do SNL if he were in charge, since I’m not sure that’s what it is.

I fear that he went into the network and said, “Hey! The heroine is the principled network president! The heroes are the show’s executive producers!” and the network drank their own kool-aid. How could they resist?

The West Wing tried to really deal with DC issues we actually care about. We went from seeing Bradley Whitford battle to save the education system to battling to avoid ticking off sponsors. How can that not be a letdown? The only way would be if it were a pure comedy.

It’s possible to do an entertaining show about TV. Sorkin’s own Sportsnight was one, after all. However, you didn’t have to care a whit about sports, or sports TV, or TV production to enjoy that show. Those things were the background, not the foreground of Sportsnight. There have been many great comedies about TV and Radio — Dick Van Dyke, Mary Tyler Moore, SCTV, Home Improvement, Murphy Brown, WKRP etc. However, dramas about TV have rarely worked. The only good one I can think of was Max Headroom, and it was more about a future vision of media than about the TV industry.

Studio 60 is sometimes amusing (though not even as amusing as the West Wing) but surprisingly unfunny. Indeed, the show-within-the-show is also surprisingly unfunny. You would think they could write and present one truly funny sketch a week. SNL has to write over an hour’s worth, and while it often does not succeed, there’s usually one good sketch. If this really were a Mary-Sue story, you would expect him to have done at least that.

So let that be a lesson. TV should stick to making fun of itself, not trying to make itself appear heroic. We’re not buying it.
