Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.

This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see menu on right) and some are sub blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.

Outsourced valet parking with drive-by-wire cars

There already are some drive-by-wire cars being sold, including a few (in Japan) that can parallel park themselves. And while I fear that anti-terrorist worries may stand in the way of self-driving and automatic cars, one early application, before we can get full self-driving, would be tele-operated cars, with the remote driver in an inexpensive place, like Mexico.

Now I don’t know if the world is ready, safety-wise, for a remote chauffeur in a car driving down a public street, where it could hit another car or pedestrian, even if the video were very high-res and the latency quite low. But parking is another story. I think a remote driver could readily park a car in a valet lot kept clear of pedestrians. In fact, because you can drive very slowly to do this, one can tolerate even longer latencies, perhaps all the way to India. The remote operator might actually have a better view for parking, with small low-res cameras mounted right at the bumpers for a view the seated driver can’t have. They can also have automatic assists (already found in some cars) to warn about near approach to other cars.

The win of valet parking is large — I think at least half the space in a typical parking lot is taken up with lanes and inter-car spacing. In addition, a human-free garage can have some floors only 5’ high for the regular cars, or use those jacks found in some valet garages that stack 2 cars on top of one another. So I’m talking possibly almost 4 times the density. You still need some lanes of course, except for cars you are certain won’t be needed on short notice (such as at airports, train stations etc.)

The wins of remote valet parking include the ability to space cars closely (no need to open the doors to get out) and eventually to have the 5’ high floors. In addition, remote operators can switch from vehicle to vehicle instantly — they don’t have to run to the car to get it. They can switch from garage to garage instantly, meaning their services would be 100% utilized.


A multi power supply for your desk from a PC power supply

I’ve blogged several times before about my desire for universal DC power — ideally with smart power, but even standardized power supplies would be a start.

However, here’s a way to get partway, cheap. PC power supplies are really cheap, fairly good, and very, very powerful. They put out lots of voltages. Most of the power is at +5v, +12v and now +3.3v. Some of the power is also available at -5v and -12v in many of them. The positive voltages above can be available at as much as 30 to 40 amps! The -5 and -12 are typically lower power, 300 to 500 mA, but sometimes more.

So what I want somebody to build is a cheap adapter kit (or a series of them) that plugs into the standard molex of PC power supplies, and then splits out into banks at various voltages, using the simple dual-pin connector found in Radio Shack’s universal power supplies with changeable tips. USB jacks at +5 volts, with power but no data, would also be available, because that’s becoming the closest thing we have to a universal power plug.

There would be two forms of this kit. One form would be meant to be plugged into a running PC, and have a thick wire running out a hole or slot to a power console. This would allow powering devices that you don’t mind (or even desire) turning off when the PC is off: network hubs, USB hubs, perhaps even phones and battery chargers etc. It would not have access to the +3.3v directly, as the hard drive molex connector normally just gives the +5 and +12, with plenty of power.

A second form of the kit would be intended to get its own power supply. It might have a box. These supplies are cheap, and anybody with an old PC has one lying around free, too. Ideally one with a variable speed fan since you’re not going to use even a fraction of the capacity of this supply and so won’t get it that hot. You might even be able to kill the fan to keep it quiet with low use. This kit would have a switch to turn the PS on, of course, as modern ones only go on under simple motherboard control.

Now with the full set of voltages, it should be noted you can also get 7v (between +5 and +12), 8.7v (between +3.3 and +12; call it 9), 1.7v (probably not that useful), and at lower currents, 10v (-5 to +5), 17v (-5 to +12; too bad that’s low current, as a lot of laptops like this), 24v (-12 to +12), 8.3v (-5 to +3.3), and 15.3v (-12 to +3.3).
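
To double-check that arithmetic, here’s a minimal Python sketch that derives every pairwise difference of the standard rails listed above:

    from itertools import combinations

    rails = [-12, -5, 3.3, 5, 12]   # the standard PC supply outputs above

    # Every pair of rails yields its difference as a usable voltage.
    diffs = sorted({round(abs(a - b), 1) for a, b in combinations(rails, 2)})
    print(diffs)   # 1.7, 7, 8.3, 8.7, 10, 15.3, 17, 24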

On top of that, you can use voltage regulators to produce the other popular voltages, in particular 6v from 7, and 9v from 12 and so on. Special tips would be sold to do this. This is a little bit wasteful but super-cheap and quite common.

Anyway, point is, you would get a single box and you could plug almost all your DC devices into it, and it would be cheap-cheap-cheap, because of the low price of PC supplies. About the only popular things you can’t plug in are the 16v and 22v laptops, which require 4 amps or so. 12v laptops of course would do fine. At the main popular voltages you would have more current than you could ever use; in fact, fuses might be in order. Ideally you could have splitters, so if you have a small array of boxes close together you can get simple wiring.

Finally, somebody should just sell nice boxes with all this together, since the parts for PC power supplies are dirt cheap; the boxes would be easy to make, and could replace almost all your power supplies. Get tips for common cell phone chargers (voltage regulators can do the job here as currents are so small) as well as battery chargers available with the kit. (These are already commonly available, in many cases from the USB jack, which should be provided.) And throw in special plugs for external USB hard drives (which want 12v and 5v just like the internal drives.)

There is a downside. If the power supply fails, everything is off. You may want to keep the old supplies in storage. Some day I envision that devices just don’t come with power supplies, you are expected to have a box like this unless the power need is very odd. If you start drawing serious amperage the fan will need to go on and you might hear it, but it should be pretty quiet in the better power supplies.

Why isn't my cell phone a bluetooth GPS

GPS receivers with bluetooth are growing in popularity, and it makes sense. I want my digital camera to have bluetooth as well so it can record where each picture is taken.

But as I was driving from the airport last night, I realized that my cell phone has location awareness in it (for dialing 911 and location-aware apps) and my laptop has bluetooth in it, and mapping software if connected to a GPS — so why couldn’t my cell phone be talking to my laptop to give it my location for the mapping software? Or indeed, why won’t it tell a digital camera that info as well?

Are people making cell phones that can be told to transmit their position to a local device that wants such data?

Update: My Sprint Mogul, whose GPS is enabled by the latest firmware update, is able to act as a bluetooth GPS using a free GPS2Blue program.
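
For the curious, once a phone presents itself as a standard Bluetooth serial GPS, reading it is trivial. Here’s a minimal sketch in Python with pyserial; the device name and baud rate are my assumptions for a Linux box with the phone paired as a serial port, not specifics of the Mogul:

    import serial   # pyserial (pip install pyserial)

    # Standard NMEA GPS output: plain text sentences, typically 4800 baud.
    with serial.Serial("/dev/rfcomm0", 4800, timeout=5) as gps:
        for _ in range(20):
            line = gps.readline().decode("ascii", errors="replace").strip()
            if line.startswith("$GPGGA"):   # the fix sentence: time, lat, lon
                print(line)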

Dept. of Justice files subpoena against NSA to get Google search records

April 1, 2006, San Francisco, CA: In a surprise move, Department of Justice (DoJ) attorneys filed a subpoena yesterday in federal court against the National Security Agency, requesting one million sample Google searches. They plan to use the searches as evidence in their defence of the constitutionality of the Child Online Protection Act.

The DoJ had previously requested a subpoena against Google, Inc. itself for the records, but Google mounted a serious defence, resulting in much more limited data flow. According to DoJ spokesperson Charles Miller, “Google was just putting up too much of a fight. The other sites and ISPs mostly caved in quickly and handed over web traffic and search records without a fuss, but Google made it expensive for us. We knew the NSA had all the records, so it seemed much simpler to just get them by going within the federal government.”

“Yahoo, of course, gave in rather easily. If they hadn’t, we could have just asked our friends in the Chinese government to demand the records. Yahoo does whatever they say.”

The White House revealed in December, after the New York Times broke the story, that the NSA has been performing warrantless searches on international phone, e-mail and internet traffic. Common speculation suggests they have been tapping other things too, to data-mine the vast sea of internet traffic, looking for patterns that might point to enemy activity.

“The NSA has the wires into all the hubs already, it’s just a lot faster for them to get this data.”

“We can neither confirm nor deny we have these search records,” said an un-named NSA spokesperson. “In fact, even asking if we have them makes you suspect.”

(Thanks to John Gilmore for the suggestion.)

Upcoming speaking and conferences

Next week (Mon-Tuesday) I will be speaking at David Isenberg’s “Freedom To Connect” conference, on an open net, in Silver Spring, Maryland (Washington DC.)

April 10 I will be at UCSB’s CITS conference (Santa Barbara, obviously) on growing network communities.

The next week April 19-21 sees the annual Asilomar Microcomputer Workshop, always a good time.

See you there.

DNA/Medical testing services that promise what they won't tell you.

Today many services offer MRI scans for a fee. DNA testing services are getting better and better — soon they will be able to predict how likely it is you will get all sorts of diseases. Many worry that this will alter the landscape of insurance, either because insurance companies will demand testing, or demand you tell them what you learn from testing.

Many criticise the MRI scan services because they quite often show up something that’s harmless but which inspires a medical demand to check it out just to make sure. That check-out may be expensive or even be invasive surgery.

So people are suggesting, “don’t get tested because you don’t want to know.” However there is stuff you do want to know, and stuff that may be useful in the future.

I propose escrowed testing services that promise not to tell you, or anybody, certain things that they find. For example, they would classify genetic tendencies for diseases for which there is no preventative course, like Parkinson’s or Alzheimer’s. Many would say they have no desire to know they might get Parkinson’s as they get older, since there is nothing they can do but worry.

The service might escrow the data themselves, with the big added plus that they would regularly re-evaluate the decision about whether you might want to know something. Thus, if a preventative treatment comes along that is recommended for people with your genes, then they would recognize this and tell you the thing you formerly didn’t want to know. They would also track what new things can be tested, and tell you when a re-test might make sense as technology improves.

The information could also be escrowed with a trusted friend or relative. You might have a buddy or spouse who could get the full story, and then decide what you need to know. A tough role of course, perhaps too tough for a spouse, who would worry about your pending Parkinson’s almost as much as you. You can’t easily use relatives, because they share lots of your DNA, at least for DNA scans.

Of course, your doctor is an obvious person for this, but this goes against their current principles and training.

Of course there is a legal minefield here. One would need a means to provide pretty good immunities for the escrowers, while at the same time not allowing them to be totally careless. The honest belief that information was in the don’t-tell profile should be enough to provide immunity.

There is another risk here, of course, which is that strangers, even doctors, can’t be fully trusted with the final decisions on your health. You will be taking a risk that the 3rd parties won’t work quite as hard at solving problems or even paying attention to them as you would. In fact, you’re doing this because you would worry too much.

There’s another benefit to this. Many people, if told to expect something, will invent it. This is very common with things like drug side-effects. In order to avoid this, when I take a new drug, I don’t read the long PDR list of side-effects. Instead, I have Kathryn read them. Then I can wait until I truly sense something and ask if it’s a side effect, rather than expecting it. The same principle applies here, though that suggests you need somebody very close as your health escrow. Of course again your doctor would be the right choice here, so that when you went there to say “I’m feeling numbness in my fingers” she could say, “Ah, well now it’s time to tell you about this thing we found in your gene scan.” Possibly a system that lets the doctor search, but not read, the gene scan, could help.

I get, but mostly don't get, the slingbox

Jeff Pulver is a giant fan of the SlingBox, a small box you hook up to your TV devices and ethernet, so you can access your home TV from anywhere. It includes a hardware encoder, infrared controllers to control your cable box, Tivo or DVD player, and software for Windows to watch the stream. The creators decided to build it when they found they couldn’t watch their San Francisco Giants games while on business trips.

And I get that part. For those who spend a great deal of time on the road, the hotel TV systems are pretty sucky. They only have a few channels (and rarely Comedy Central, which has the only show I watch on a daily basis and which needs to be watched sooner rather than later) as well as overpriced movies. But at the same time you have to be spending a lot of time on the road to want this. My travel itineraries are intense enough that watching TV is the last thing I want to do on them.

But at the same time it’s hard not to be reminded of the kludge this is, especially hooked to a Tivo. And if you have a Tivo or similar device, you know it’s the only way you will watch TV; live TV is just too frustrating. I don’t have Tivo any more, I have MythTV. MythTV is open, which is to say it stores the recorded shows on disk in files like any other files. If I wanted to watch them somewhere else, I could just copy or stream them easily from the MythTV box, and that would be a far better experience than decoding them to video, re-encoding them with the SlingBox and sending them out. Because of bandwidth limits, you can’t easily do this unless you insert a real-time transcoder to cut the bandwidth down, ideally one that adapts to bandwidth as the Slingbox does. And I don’t think anybody has written one of those, because I suspect the MythTV developers are not that too-much-time-on-the-road SlingBox customer.

(Admittedly the hardware transcode would be useful, but a 3 GHz-class machine should be capable of doing it in software, and really, this should just be software.) For watching live TV, if you cared, you probably could do that in MythTV. If you cared.

So the SlingBox…

High oil demand good for Global Warming, and nuclear waste

Two thoughts today related to global warming.

Many people fear that as the developing world starts developing more, it’s going to want more fossil fuels, and will burn them like crazy and add more CO2 to the air. China is the country feared the most. As you can see in my many pictures from there, they burn a lot of coal, and the air is most often hazy from it.

But I recently wondered — just as the growing Chinese market has shot up the world price of steel and many other commodities, surely it will do the same for oil. And that in turn should drive the development of cleantech energy sources that would replace the oil. Which the energy hungry developing nations might well embrace even faster, not having as much infrastructure built around oil, gas and coal. So might good come out of bad?


The second thought: Some environmentalists are now reversing themselves and starting to embrace nuclear power. (As I wrote a year ago, “Glow is the new green.”) The theory goes that as scary as nuclear power is, burning coal, oil and gas is even more frightening. Certainly many more people have been killed due to coal and oil, and far more radiation has been released into the air from coal, and there’s been far more destruction of the land for it. The big uncertainty however, is what to do with the waste.

So I read some of the predictions from the dire end of the global warming community. They are quite dire. Sea levels rise enough to flood many of the most populated and fertile coastal areas. Billions displaced and, with the ruination of agricultural lands, quite possibly a billion dead. Millions of square miles destroyed. Possible large-scale desertification.

If you fear that scenario, the nuclear waste problem becomes moot. After all, even if you took every nuclear plant, melted it down, and then sprayed the ground-up waste into the winds, the damage would be a fraction of that. Deaths would be perhaps a few million, with tens of millions more facing higher cancer and mutation risks. Hundreds of thousands of square miles would be made unusable (admittedly for a much longer time period.) But it’s all nothing compared to a billion deaths and millions of square miles, and in truth, being ground up into the wind is not what’s going to happen to the waste. Even the worst scenarios have just a few meltdowns or explosions. And of course modern nuclear plant designs are much better, and can’t melt down, though they could still be blown up by terrorists.

Obviously it would be better to find other things. Photovoltaic and thermo-solar, of course. Waves and wind and ethanol. Fusion, in our dreams, or other more speculative technologies. But if runaway climate change from CO2 is our true fear, the choices are pretty easy.

Sudden web traffic not so great with Adsense

As I’ve written before, Google’s Adsense program is for many people bringing about the dream of having a profitable web publication. I have a link on the right of the blog for those who want to try it. I’ve been particularly impressed with the CPMs this blog earns, which can be as much as $15. The blog has about 1000 pageviews/day (I don’t post every day) and doesn’t make enough to be a big difference, but a not impossible 20-fold increase could provide a living wage for blogging. Yahoo Publisher’s blog ads, which some of you are seeing in the RSS feed, have been a miserable failure, and will be removed next software upgrade. They are poorly targeted and have earned me, literally, not even a dollar.
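
The back-of-the-envelope math on that claim, assuming the $15 CPM held up as traffic grew (a big assumption):

    pageviews_per_day = 1000
    cpm = 15.0                                  # dollars per 1000 pageviews
    daily = pageviews_per_day / 1000.0 * cpm    # $15/day at today's traffic
    print(daily * 365)                          # roughly $5,500/year
    print(20 * daily * 365)                     # 20-fold traffic: roughly $110,000/year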

Recently however I noticed a way in which the Google targeting engine is too good, from my standpoint. From time to time my web sites or blog will get linked from a very high traffic site. This week the 4th amendment shipping tape was a popular stumble-upon, for example. I’ve also been featured from time to time in Slashdot, boingboing and various other popular sites.

When this happens, it’s not a money maker because the click-throughs and CPMs drop way down. This is not too surprising. The people following a quick link are less likely to be looking for the products Google picks to advertise. However, more recently I saw high traffic bringing down not just the CPM, but even the total dollars! I theorize that Google, seeing poor clickthrough, cycles out the normally lucrative ads to try others. So even the normal visitors, who have not gone away, are seeing more poorly chosen ads. Or it could just be randomness that I’m seeing a pattern in.

Solution: Consider the referer when placing ads. If the clickthrough is poor on a given referer (like slashdot or boingboing) then play with the ads to hunt for better clickthrough. For the more regular referers (which are typically internal, the result of searches and regular readers) stick to the ads that typically perform well with that group.
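
In sketch form (this is just my guess at the idea; obviously I have no window into Google’s actual serving system):

    import random

    ctr_stats = {}   # referer -> (clicks, impressions) observed so far

    def pick_ads(referer, proven_ads, experimental_ads, n=3):
        clicks, imps = ctr_stats.get(referer, (0, 0))
        if imps > 100 and clicks / imps < 0.001:
            # Poor clickthrough from this referer: hunt for better ads.
            return random.sample(experimental_ads, min(n, len(experimental_ads)))
        # Regular referers keep seeing the ads that historically perform.
        return proven_ads[:n]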

eBay shipping scam and more eBay dynamics

I’ve done a few threads on eBay feedback; today I want to discuss ways to fix the eBay shipping scam. In this scam, a significant proportion of eBay sellers are listing items low, sometimes below cost, and charging shipping fees far above cost. It’s not uncommon to see an item with a $1 cost and $30 in shipping rather than fairer numbers. The most eBay has done about it is allow the display of the shipping fees when you do a search, so you can spot these listings.

I am amazed eBay doesn’t do more, as one of the main reasons for sellers to do this is to save on eBay fees. However, it has negative consequences for the buyer, aside from making it harder to compare auctions. First of all, if you have a problem, the seller can refund your “price” (the $1) but not the shipping, which is no refund at all. Presumably ditto with paypal refunds. Secondly, the law requires that if you are charged more than actual shipping (ie. handling) there is tax on the total S&H. That means buyers pay pointless taxes on shipping.

Again, since eBay would make more fees if they fixed this I don’t know why they have taken so long. I suggest:

  • Let buyers sort by shipping fees. Pretty soon you get a sense of what real shipping on your item should be. A sort will reveal who is charging the real amount and who isn’t. Those who don’t provide fees get listed last — which is good as far as I am concerned.
  • Let buyers see a total price, especially on Buy-it-now, shipping + cost, and sort on that or search on that (see the sketch after this list). Again, those who don’t provide a shipping price come last.
  • Highlight auctions that use the actual shipping price, or have a handling fee below a reasonable threshold. This will be unfair on certain high-handling items.
  • Of course, charge eBay fees on the total, including handling and shipping. Doesn’t help the buyer any but at least removes the incentive.
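
Here’s the kind of total-price sort I mean, as a small Python sketch with made-up listings; auctions that don’t quote shipping go to the bottom:

    listings = [
        {"item": "gadget A", "price": 1.00, "shipping": 30.00},
        {"item": "gadget B", "price": 24.00, "shipping": 6.00},
        {"item": "gadget C", "price": 27.00, "shipping": None},   # none quoted
    ]

    def total_price_key(listing):
        if listing["shipping"] is None:
            return (1, 0.0)    # unknown shipping: sort after everything else
        return (0, listing["price"] + listing["shipping"])

    for l in sorted(listings, key=total_price_key):
        print(l["item"])       # gadget B ($30), gadget A ($31), then gadget C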

Now let’s talk about the reputation dynamics of the transaction. The norm is buyer sends liquid money sight unseen to the seller, and the seller sends merchandise. Why should it necessarily be one way or the other? In business, high reputation buyers just send a purchase order, get the item and an invoice, and pay later.

I think it would be good on eBay to develop a norm that if the buyer has a better reputation than the seller, the seller ships first and the buyer pays last. If the seller’s rep is better, or it’s even, stick with the current system.

Sellers could always offer this sort of payment, even when the seller is high-rep, to high-rep buyers as an incentive.

There should also be special rules for zero-rep or low-rep sellers. By this I don’t mean negative reputation, just having few transactions. Who is going to buy from a zero-rep seller? The tradition has been to build up a buyer rep, and then you can sell, which is better than nothing but not perfect.

When the seller has a very low rep, the seller should just automatically assume it’s going to be send-merchandise-first, get-money-later, except with very low rep buyers. Low-rep sellers should be strongly encouraged to offer escrow, at their expense. It would be worth it. Often I’ve seen auctions where the difference in price is quite large, 20% or more, for sellers with reputations under 5. eBay should just make a strong warning to the low-rep sellers that they should consider this, and even offer it as a service.

Update: I’ve run into a highly useful Firefox extension called ShortShip. This modifies eBay search pages to include columns with total price. Their “pro” version has other useful features. You can sort by it, but it is only able to sort what was on that particular page (ie. the auctions close to ending, typically), so the price sort can be mistaken, with a cheaper buy-it-now not shown. eBay is so slow in adding software features that extensions like this are the way to go.

Wiretaps beget wiretaps -- I don't hate that much to say I told you so.

For some time in my talks on CALEA and VoIP I’ve pointed out that because the U.S. government is mandating a wiretap backdoor into all telephony equipment, the vendors are putting in these backdoors to sell to the U.S. market, and then selling the same backdoors all over the world. Even if you trust the USGov not to run around randomly wiretapping people without warrants, since that would never happen, there are a lot of governments and phone companies in other countries who can’t be trusted but whom we’re enabling. All to catch the 3 stupid criminals who use VoIP and don’t use an encrypted system like Skype.

Recently this story about a wiretap on the Greek PM’s phone was forwarded to me by John Gilmore. Ericsson says that they installed wiretap backdoors to allow legal wiretaps, and this system was abused because Vodafone didn’t protect it very well — a claim Vodafone denies. As a result the phone of the prime minister was tapped for months, as well as those of foreign dignitaries and a U.S. Embassy phone. Well, there’s irony.

We’re hearing about this because there is accountability in Greece. But I have to assume it’s going to happen a lot in countries where we will never hear about it. If you build the apparatus of the surveillance society, even with the best of intentions, it will get used that way, either here, or in less savoury places.

It would be nice if U.S. companies would at least refuse to sell the wiretap functions, or charge a fortune for them, to countries that, unlike the USA, have no legal requirement for them. Of course, soon that won’t be very many, thanks to the US lead, and the companies will have to include the backdoors to do business in all those nations. Will U.S. companies have the guts to say, “Sorry China, Saudi Arabia, et al. — no wiretap backdoors in our product, law or not. Add it yourself if you can figure it out.”

Baby Bells announce new "GoodPackets" program to charge for access

New York, March 22, 2006 (CW) BellSouth and AT&T, two of the remaining Baby Bell or “ILEC” companies, announced today, in conjunction with GoodPackets Inc., a program to charge senders for certified delivery of internet packets to their ISP customers.

William Smith, CTO of BellSouth, together with AT&T CEO Ed Whitacre, who will be his new boss once the proposed merger is completed, made a joint announcement of the program together with Dick Greengrass, CEO of GoodPackets.

Under the program, customers of GoodPackets interested in better delivery of their packets to AT&T and BellSouth DSL customers will pay GoodPackets a fee to get their packets certified. Certified packets will bypass blocks and filters in the routers of the ISPs for premium delivery to customers, and be tagged as certified to the end-user.

“We’re just seeing too many bad packets these days, and we have to block some of them. But serious, professional sites on the internet don’t want their packets blocked, and are willing to pay to assure they aren’t,” said Whitacre. According to Greengrass, a portion of the money paid to GoodPackets will be given to the ISP in question.

According to Smith, “his firm should be able, for example, to charge Yahoo Inc. for the opportunity to have its search site load faster than that of Google Inc.”

“A lot of these extra packets filling our pipes are of dubious origin, in any event. A large portion of internet traffic comes from peer to peer filesharing systems which are often infringing copyright, or from companies like Skype bypassing the telecom tariffs we all have to pay. Charging money will let the legitimate companies out there distinguish their traffic from all this unknown traffic, and assure delivery,” said Whitacre.

Traffic originating from BellSouth and AT&T servers would not need to pay for the premium access. “It’s our network, after all, and our video servers don’t go through the routers to the outside world to get to our users,” said Smith.

Greengrass insisted the fees were not for delivery, but for certification that the packets come from a known and trusted source. Users and ISPs can then decide if they want to give them more reliable delivery and acceptance. That the charges are per packet is simply a way to differentiate the market, and not overcharge low-volume senders.


For those who don’t get it, this is a satire comparing the AOL/Yahoo/Goodmail program to the network neutrality debate.

Have the OS give user permissions on "privileged" IP ports.

Very technical post here. Among the children of Unix (Linux/BSDs/MacOS) there is a convention that for a program to open a TCP or UDP port from 0 to 1023, it must have superuser permission. The idea is that these ports are privileged, and you don’t want just any random program taking control of such a port and pretending to be (or blocking out) a system service like Email or DNS or the web.

This makes sense, but the result is that all programs that provide such services have to start their lives as the all-powerful superuser, which is a security threat of its own. Many programs get superuser powers just so they can open their network port, and then discard the powers. This is not good security design.
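
Here’s that dance in a minimal Python sketch; it must be started as root, which is exactly the problem:

    import os
    import pwd
    import socket

    # Grab the privileged port while still superuser...
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("", 53))        # any port under 1024 needs root today
    sock.listen(5)

    # ...then permanently drop to an unprivileged user before serving.
    nobody = pwd.getpwnam("nobody")
    os.setgroups([])
    os.setgid(nobody.pw_gid)
    os.setuid(nobody.pw_uid)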

While capability-based-security (where the dispatcher that runs programs gives them capability handles for all the activities they need to do) would be much better, that’s not an option here yet.

I propose a simple ability to “chown” ports (ie. give ownership and control like a file) to specific Unix users or groups. For example, if there is a “named” user that manages the DNS name daemon, give ownership of the DNS port (53) to that user. Then a program running as that user could open that port, and nobody else except root (superuser) could do so. You could also open some ports to any user, if you wanted.
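
Under the proposal, the daemon’s startup would shrink to nothing special. This is purely hypothetical code, since no kernel implements port ownership today:

    import socket

    # Running as the "named" user, who has been given ownership of port 53.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 53))    # would succeed with no root step at all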

Let's see neighbourhood fiber lan

The phone companies failed at the fiber to the curb promise in most of the USA and many other places. (I have had fiber to the curb at my house since 1992 but all it provides is Comcast cable.)

But fiber is cheap now, and getting cheaper, and unlike wires it presents no electrical dangers. I propose a market in gear for neighbourhoods setting up a fast NLAN, by running a small fiber bundle through their backyards (or, in urban row housing, possibly over their roofs.) Small fiber conduits could be buried in soil more easily than watering hoses, or run along fences. Then both ends, meeting the larger street or another NLAN, could join up for super-high connectivity.

I would join both ends because then breaks in this amateur-installed line don’t shut it down. The other end need not be at super-speed, just enough so phones work etc. until a temporary above-ground patch can be run above the break.

Of course, you would need consent of all the people on the block (though at the back property line you only need the consent of one of the two sides at any given point.) Municipal regulations could also give neighbours access to the poles, though they would probably have to pay a licensed installer.

An additional product to sell would be a neighbourhood server kit, to provide offsite backup for members and video storage. Depending on legal changes, it could be possible to have a block cable company handling the over-the-air DTV stations, saving the need to put up antennas. Deals could be cut with the satellite companies to place a single dish with fancy digital decoder in one house. The cable companies would hate this but the satellite companies might love it.

Of course there does need to be something to connect to at the end of the street for most of these apps, though not all of them. After all, fiber is not that much better than a bundle of copper wires over the short haul of a neighbourhood. But if there were a market, I bet it would come, either with fiber down main streets, fixed wireless or aggregated copper.

Encrypted text that looks like plaintext, thanks to spammers.

You may be familiar with Steganography, the technique for hiding messages in other messages so that not only can the black-hat not read the message, they aren’t even aware it’s there at all. It’s arguably the most secure way to send secret data over an open channel. A classic form of “stego” involves encrypting a message and then hiding it in the low order “noise” bits of a digital photograph. An observer can’t tell the noise from real noise. Only somebody with the key can extract the actual message.
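
A toy version of the low-order-bit trick, treating the image as a raw bytearray of pixel bytes (real stego tools encrypt first and scatter the bits under the key, but the principle is the same):

    def embed(pixels: bytearray, message: bytes) -> None:
        # One message bit per pixel byte: the image must be 8x the message.
        bits = [(byte >> i) & 1 for byte in message for i in range(8)]
        assert len(bits) <= len(pixels), "image too small for this message"
        for i, bit in enumerate(bits):
            pixels[i] = (pixels[i] & 0xFE) | bit

    def extract(pixels: bytes, length: int) -> bytes:
        out = bytearray()
        for i in range(length):
            byte = 0
            for j in range(8):
                byte |= (pixels[i * 8 + j] & 1) << j
            out.append(byte)
        return bytes(out)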

This is great but it has one flaw — the images must be much larger than the hidden text. To get down a significant amount of text, you must download tons of images, which may look suspicious. If your goal is to make a truly hidden path through something like the great firewall of China, not only will it look odd, but you may not have the bandwidth.

Spammers, bless their hearts (how often do you hear that?), have been working hard to develop computer-generated text that computers can’t readily tell isn’t real human-written text. They do this to bypass the spam filters that are looking for patterns in spam. It’s an arms race.

Can we use these techniques and others, to win another arms race with the national firewalls? I would propose a proxy server which, given the right commands, fetches a desired censored page. It then “encrypts” the page with a cypher that’s a bit more like a code, substituting words for words rather than byte blocks for byte blocks, but doing so under control of a cypher key so only somebody with the key can read it.
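
As a toy sketch of the word-for-word idea: derive a key-dependent permutation of a shared word list, so every word maps to some other word and only key-holders can invert it. (A real version would have to handle words outside the list, punctuation and so on.)

    import hashlib
    import hmac

    def codebook(vocab, key: bytes):
        # Sorting by the HMAC of each word gives a key-dependent shuffle.
        shuffled = sorted(vocab, key=lambda w: hmac.new(key, w.encode(), hashlib.sha256).digest())
        return dict(zip(vocab, shuffled)), dict(zip(shuffled, vocab))

    vocab = ["the", "cat", "sat", "on", "mat", "dog", "ran", "to", "park", "fast"]
    enc, dec = codebook(vocab, b"shared secret key")
    scrambled = [enc[w] for w in "the cat sat on the mat".split()]
    print(" ".join(scrambled))                   # real words, nonsense text
    print(" ".join(dec[w] for w in scrambled))   # round-trips to the original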

Most importantly, the resulting document, while looking like gibberish to a human being, would be structured to look like a plausible innocuous web page to censorware. And while it is rumoured the Chinese have real human beings looking at the pages, even they can’t have enough to track every web fetch.

A plan like this would require lots and lots and lots of free sites to install the special proxy, serving only those in censored countries. Ideally they would only be used on pages known to be blocked, something tools behind the censorware would be measuring and publishing hash tables about.

Of course, there is a risk that the censors would deliberately pretend to join the proxy network to catch people who are using it. And of course with live human beings they could discover use of the network so it would never be risk-free. On the other hand, if use of the proxies were placed in a popular plugin so that so many people used it as to make it impossible to effectively track or punish, it might win the day.

Indeed, one could even make the encrypted pages look like spam, which flows in great volumes in and out of places like China, stegoing the censored web pages in apparent spam!

(Obviously proxying in port 443 is better, but if that became very popular the censors might just limit 443 to a handful of sites that truly need it.)

The true invention of the internet, redux, and Goodmail/Network Neutrality

I wrote an essay here a year ago on the internet cost contract and how it was the real invention (not packet switching) that made the internet. The internet cost contract is “I pay for my end, you pay for yours, and we don’t sweat the packets.” It is this approach, not any particular technology, that fostered the great things that came from the internet. (Though always-on also played a big role.)

It’s time to re-read that essay because two recent big issues uncover attacks on the contract, and thus no less than the foundation of the internet.

The first is the Goodmail program announced by AOL. The EFF has been a leading member of a coalition pushing AOL to reconsider this program. People have asked us, “how bad can it really be?” Why is putting a price on E-mail so bad?

One particularly disturbing thing about the Goodmail program is that it reminds me a bit of a protection racket. Goodmail hopes its customers will pay it hundreds of millions of dollars because they are afraid of spam filters. They are selling those customers (who are required to be legitimate mailers sending solicited mail) protection from the spam filters of AOL. Problem is, those spam filters shouldn’t be blocking the legitimate mail at all — it is a flaw in the filters that makes people want to buy protection from them. They’re buying protection from something that shouldn’t be harming them in the first place. An ISP, like AOL, would normally be expected to have the duty to deliver legitimate mail to its customers. To serve those customers, they also block spam. Now, unlike the mobster selling protection, AOL’s spam-blockers are not blocking the legitimate mail maliciously, but that’s about the only difference, and part of why this smells bad.

This has been my direct criticism of the program on its own. Goodmail says it’s really a certification program. There have been IETF standards to sign E-mail and get certificates for signers for a long time, and many “Certificate Authority” companies of all stripes who sell such a process. They don’t charge per message, though.

The charging per message sets a nasty precedent which is an attack on the internet cost contract. It violates the rule about not sweating the individual traffic. I pay for my end, you pay for yours. As soon as we start deciding some traffic is good and bad, and some traffic has to pay to transit the pipes or get through the filters, we’ve taken a step backwards to the settlement based networks that the internet defeated decades ago.

In the 70s and 80s the world had many online services you paid for by the hour. It had MCI Mail, which you paid to send. It had packet-switched X.25 networks you paid for by the kilopacket. They were all crushed by the internet, not just in cost, but in innovation. AOL, the last of the online services, had to adopt the internet model in almost all respects to avoid a slope to doom.

The idea of a two-tier internet, which many have been writing about recently, has generated the debate on a subject called network neutrality. Sometimes the problem is attempts to block services entirely based on what they are (such as blocking VoIP that competes with the phone service of the company that owns the wires.) Other times it’s a threat that companies providing high-bandwidth services, like video and voice, should “pay their share” and not get a “free ride” on the pipes that “belong” to the telco or cable ISPs.

Once again, the goal is to violate the contract. The pipes start off belonging to the ISPs but they sell them to their customers. The customers are buying their line to the middle, where they meet the line from the other user or site they want to talk to. The problem is generated because the carriers all price the lines at lower than they might have to charge if they were all fully saturated, since most users only make limited, partial use of the lines. When new apps increase the amount a typical user needs, it alters the economics of the ISP. They could deal with that by raising prices and really delivering the service they only pretend to sell, or by charging the other end, and breaking the cost contract. They’ve rattled sabres about doing the latter.

The contract is worth defending not just because it gives us cheap internet or flat rates. It is worth defending because it fosters innovation. It lets people experiment with services that would get shut down quickly if people got billed per packet. Without the cost contract, great new ideas will never get off the ground. And that would be the real shame.

Give us TVoIP, not IPTV

A buzzword in the cable/ILEC world is IPTV, a plan to deliver TV over IP. Microsoft and several other companies have built IPTV offerings, to give phone and cable companies what they like to call a “triple play” (voice, video and data) and be the one-stop communications company.

IPTV offerings have you remotely control an engine at the central office of your broadband provider, which generates a TV stream that is fed to your TV set. It’s like having the super set-top box back at the cable office instead of in your house. Of course it requires enough dedicated bandwidth to deliver good quality TV video. That’s 1.5 to 2 megabits per second for regular TV, 5 to 10 for HDTV with MPEG-4.

Many of the offerings look slick. Some are a basic “network PVR” (try to look like a Tivo that’s outsourced) and Microsoft’s includes the ability to do things you can’t do at your own house, like tune 20 channels at once and have them all be live in small boxes.

I’m at the pulver.com Von conference where people are pushing this, notably the BellSouth exec who just spoke.

But they’ve got it wrong. We don’t need IPTV. We want TVoIP or perhaps more accurately Vid-o-IP. That’s a box at your house that plays video, and uses the internet to suck it down. It may also tune and record regular TV signals (like MythTV or Windows Media Center.)

Now it turns out that’s more expensive. You have to have a box, and a hard drive and a powerful processor. The IPTV approach puts all that equipment at the central office where it’s shared, and gets economies of scale. How can that not be the winner?

Well for one, TVoIP doesn’t require quality bandwidth. You can even use it with less bandwidth than a live stream takes. That’s because after people get TVoIP/PVR, they don’t feel inclined to surf. IPTV is still too much in the “watch live TV” world with surfing. TVoIP is in the poor man’s video-on-demand world (like NetFlix and Tivo) where you pick what you might want to see in advance, and later go to the TV to pick something from the list of what’s shown up. Turns out that’s 95% as good as video on demand, but much cheaper.
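
The arithmetic, using the stream rates from the IPTV paragraph above: a show downloaded ahead of time doesn’t need a real-time pipe.

    stream_mbps = 2.0     # regular TV, at the high end of the figures above
    link_mbps = 1.0       # a link at half the stream rate
    show_minutes = 60
    print(show_minutes * stream_mbps / link_mbps)   # 120 minutes to fetch 1 hour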

But more importantly, it’s under your control. Time and time again, the public has picked a clunkier, more expensive, harder to maintain box that’s under their own control over a slick, cheap service that is under the control of some bureaucracy. PCs over mainframes. PCs over Network Computers and Timesharing and SunRays. Sometimes it’s hard to explain why they did this for economic reasons, or even for quality reasons.

They did it because of choice. The box in your own house is, ideally, a platform you own. One that you can add new things to because you want them, and 3rd party vendors can add things to because you demand them. Central control means central choice of what innovations are important. And that never works. Even when it’s cheaper.

If the set top box were to remain a set top box, a box you can’t control, then IPTV would make good sense. But we don’t want it to be that. It’s now time to make it more, and companies are starting to offer products to make it more. We want a platform. Few people want to program it themselves, but we all want great small companies innovating and coming up with the next new thing. Which TVoIP can give us and IPTV won’t. Of course, there are locked TVoIP boxes, like the Akimbo and others, but they won’t win. Indeed, some efforts, like the trusted computing one, seek to make the home box locked, instead of an open platform, when it comes to playing media (and thus locking Linux out of the game.) A truly open platform would see the most innovation for the user.

Disclaimer, I am involved with BitTorrent, which makes the most popular software used for downloading video over the internet.

Browsers: Time to have a default margin

In most browsers, the default style presents text adjacent to all sides of the browser window, with no margin. This is a throwback to early days of screen design, when screen real estate was considered so valuable that deliberately wasting it with whitespace was sacrilege.

Of course, in centuries of design on paper, nobody ever put text right up to the margins. Everybody knows it’s ugly and not what the eye wants. Thus, when you see a web page using the default style, which I end up with myself out of laziness, people react to it as ugly.

Screens are now big enough that it’s time to change the default style to be one that is easier to read. And that means margins. If a page designer wants to put stuff up against the edges, they can easily define their own stylesheets to do this, so let them. I doubt they would ever put text there, though they might put graphics or their own custom margins. If text to the edges is a choice nobody would make when given the option, it sure seems like a silly default to have. It won’t break anything: you can just make the window wider, or make it a user option (which I believe it is in some browsers, but rarely set).

And then more people could use the default for quick pages without having to think about style every time they spit out a web page.

Reputation system for cars and the selfish merge.

George Carlin once proposed a system where people would shoot suction-cup darts at cars when they did something annoying, like cutting you off, and if you got too many darts the cops would pull you over. A friend recently expressed a lot of interest in building some sort of reputation system for cars using computers.

Though Carlin’s was a satire, it actually has merits that would be hard to match in a computerized system. Sure, we could build a system where if somebody was rude on the road, you could snap a quick photo of their licence plate, or say it into a microphone or cell phone for insertion into a reputation database. But people could also just do this to annoy you. There’s no efficient way to prove you actually were there for the rude event. The photos could do that, but it’s too much work to verify them. The darts actually do it, since you could not just stick them on my car when I’m stopped; I would pull them off before driving.

One problem I want to solve with such a system is the selfish merge. We’ve all seen it — lanes are merging, and the cooperating drivers try to merge early. Then the selfish drivers zoom ahead in the vanishing lane until they get to its end. And always, somebody lets them in. Selfishly zooming up does get you through the jam faster, but at the same time these late mergers are a major contributor to the very jam they are bypassing.

We’ll never stop people from letting in the drivers, and indeed, from time to time innocent drivers get into the free lane because they are not clear on the situation or missed the merge.


Hybrid Personal Rapid Transit

When I was in high school, I did a project on PRT — Personal Rapid Transit. It was the “next big thing” in transit and of course, 30 years later it’s still not here, in spite of efforts by various companies like Taxi 2000 to bring it about.

With PRT, you have small, lightweight cars that run on a network of tracks or monorail, typically elevated. “Stations” are all spurs off the line, so all trips are non-stop. You go to a station, often right in your building, and a private mini-car is waiting. You give it your destination and it zooms into the computer regulated network to take you there non-stop.

The wins from this are tremendous. Because the cars are small and light, the track is vastly cheaper to build, and can often be placed with just thin poles holding it above the street. It can go through buildings, or of course go underground or at-grade. (In theory it seems to me smart at-grade (ground-level) crossings would be possible though most people don’t plan for this at present.)

The other big win is the speed. Almost no waiting for a car except at peak times, and the nonstop trips would be much faster than other transit or private cars on the congested, traffic-signal regulated roads.

Update: I have since concluded that self-driving vehicles are getting closer, and because they require no new track infrastructure and instead use regular roads, they will happen instead of PRT.

Yet there’s no serious push for such systems…

