Technology

Linux live CD with network state, use of Windows disk

There are a number of Linux "Live CD" distributions out there. These let you boot Linux from a CD and run it (somewhat slowly) without ever touching the hard disk in the machine. (They can access the disk, however, which makes them good for system repair, both for Linux and Windows.) One popular one is Knoppix, and Mandrake makes one called MandrakeMove, which takes the important next step of letting you store your personal config choices on a USB thumbdrive or floppy. There are even distributions that can fit on a thumbdrive itself (after all, those drives are getting quite large for little money), but this is recent enough that there hasn't been as much focus on it.

Let me suggest where I would like this trend to continue. It's great to be able to take any machine and quickly convert it to your style and environment with a CD, or better yet a business-card CD or thumbdrive. (Most systems can boot from a CD, fewer from a thumbdrive, and most from a floppy that chains to another device.) Storing some state on a floppy, thumbdrive or CD-R session -- preferences, home directory files and scripts, browser config and bookmarks -- is a must. Indeed, if the tools let you build a custom CD just for you, with your choice of packages, you could bring along much of your whole working environment.

I haven't seen anybody provide automatic storage on the net, based on the assumption that the machine you take over probably has an ethernet card. If it does, it would be great to go out and suck down your latest personal changes and files, starting with the most important ones to get you going, and bringing in the rest in the background. This doesn't need a special server, though the group making such a distribution might well offer one. You could keep and update much of this data in a special mailbox message or mailbox folder, especially with IMAP, which anybody can get access to. (Or a web mail tool like Gmail.) Of course, if you have actual hosting, that can be used too. The data would be encrypted, and you would need a password -- not just your mail password -- to use it.

As you changed the data, it would be updated to the net storage. Then you could go to any machine with a non-customized CD. Indeed, on a common fast machine, you could even download a minimal environment (perhaps 60 megabytes, which is just a few minutes on a fast broadband link) and, after it boots, fetch the custom information, including which other packages are important.
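
As a rough sketch of the IMAP idea: a script could bundle your dotfiles into an encrypted archive and append it as a message to a mail folder. Everything here (the folder name, the file list, and the use of the third-party "cryptography" package for the separate encryption key) is an illustrative assumption, not part of the proposal above.

    # Sketch: push an encrypted bundle of personal state to an IMAP folder.
    # File list, folder name and key handling are hypothetical.
    import imaplib, io, tarfile, time
    from cryptography.fernet import Fernet  # assumes the 'cryptography' package

    FILES = ["/home/me/.bashrc", "/home/me/.mozilla/bookmarks.html"]

    def bundle(paths):
        buf = io.BytesIO()
        with tarfile.open(fileobj=buf, mode="w:gz") as tar:
            for p in paths:
                tar.add(p)
        return buf.getvalue()

    def push_state(host, user, mail_password, data_key):
        # data_key is a separate Fernet key -- not the mail password.
        token = Fernet(data_key).encrypt(bundle(FILES))  # base64-safe bytes
        msg = b"Subject: livecd-state\r\n\r\n" + token
        imap = imaplib.IMAP4_SSL(host)
        imap.login(user, mail_password)
        imap.append("LiveCD-State", "", imaplib.Time2Internaldate(time.time()), msg)
        imap.logout()

On boot, the live CD would fetch the most recent such message first, then pull the larger items in the background.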

The key is to store things using the Windows filesystem, since that is most likely what you will find on the machine where you are the guest.

Solar Powered PC

We would all love solar power to work better, but it's hard to make it economic yet, at least if you're near the grid. A solar panel takes four years just to give back the energy it took to build it, and it never pays back the money put in if you compare that to putting the money into the stock market. And that's with full utilization. If you use panels and batteries, any time your batteries are near full the power is being discarded, and you also have to replace your batteries every so often and dispose of the old lead-filled ones. Yuk. A grid-tie system can use all the power of a panel, but that's an expensive, whole-house thing.

But here's a start -- a solar-using PC power supply. My PCs, like many folks', are on all day, including the peak-demand heat of the day. Desktops draw anywhere from 50 to 200 watts even when idling.

So make a PC power supply that has three external connections: one for the wall plug, and two optional ones -- one for a 12v solar panel and one for a battery. Then sell it with a 50w or 100w solar panel -- most importantly, the panel should never generate more power than the PC uses.

Because of that, during the bright part of the day the panel will provide most, or just barely all, of the power for the PC, and the wall plug will provide the rest. At night, the wall plug would provide all the power. It works like a grid-tie, but it doesn't feed power back to the grid; it just reduces demand on it. The 100w panel takes 100w off the grid load during peak demand times, and we use every watt the panel generates -- we never throw any away.
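
Some back-of-envelope arithmetic makes the economics concrete. The 5 hours of usable sun and the $0.12/kWh rate below are assumptions for illustration, not figures from above:

    # Rough savings from a 100 W panel feeding an always-on PC.
    panel_watts = 100
    sun_hours_per_day = 5        # assumed full-sun-equivalent hours
    rate_per_kwh = 0.12          # assumed electricity price, $/kWh

    kwh_per_day = panel_watts * sun_hours_per_day / 1000.0
    savings_per_year = kwh_per_day * 365 * rate_per_kwh
    print(f"~{kwh_per_day:.1f} kWh/day offset, ~${savings_per_year:.0f}/year")
    # ~0.5 kWh/day offset, ~$22/year

Which is why the panel and supply have to be cheap for this to pay off; the value is as much in shaving peak grid demand as in the owner's bill.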

How can the Scientific Atlanta HD-8000 suck so badly?

I've been a longtime user of the Tivo, and when my mother got an HDTV, I pushed her to get a PVR. In Canada, the only really workable option for her was to rent the HD-8000 HD PVR from Rogers, her cable company. There is no Tivo service in Canada, and she wasn't ready for a PC-based PVR (and HD ones are still immature).

I learned two things from the process. The first was how amazed I was at how badly the HD-8000 is designed. It strikes me as a first-generation unit, not something designed after people had looked at the Tivo and the Replay. Trying to watch a show in the middle of recording it is possible, but really cumbersome. It's very easy to lose your buffer on a live program you were watching, or to lose your place in a recorded program you were watching. Browsing shows is guide-based, letting you browse only one day at a time. I could go on.

The other remarkable thing was seeing my low-tech mother's reaction. In spite of all I tell her about the PVR, she still wants to watch TV live most of the time. As a retiree and caregiver, she's home most of the time, and while she intellectually understands what the box does, her habits are so long established that she really doesn't "get" it.

Which may explain the poor UI on the HD-8000. They don't expect their users to get it either. They expect their users to see it as a fancy VCR, with the ability to pause live TV. (Tivo owners learn that pausing live TV is more of a gimmick feature, in that you almost never watch live TV.)

Watching the recorded HD does make me jealous, though. HD PVR choices here are limited: you can get DirecTV's HD-Tivo for $1000, or build a MythTV box for a similar amount of money. It is the need for a PVR that has stopped me from getting HDTV, which I otherwise want very much.

But my mother doesn't remember that when she's called on the phone, she can pause the show. Or that you should always record a show you want to watch, to give yourself the freedom to switch away and come back later without risk. She is happy with her old habit of switching channels when a commercial comes on and coming back to the other show later, presumably missing some of it. She is even happy watching low-def live when PVRed hi-def is a few steps away. My mother helps me remember that not all users are like me, which is good.

More failsafe firmware upgrade paths

Today, for the second time, I lost a wireless access point in the process of putting new firmware into it. The new firmware apparently has some problems, but that's a risk to be expected.

I've only seen it rarely, but the right thing to do is to have a ROM, or a small unwritable section of the flash, that contains a fully tested, minimalist firmware accepter. That way, no matter what you do to the firmware, there is some way to get the old code back in, through some use of physical switches. I now have to send this thing back for warranty repair over something I should be able to fix here.
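
As a sketch of the boot-selection logic such a recovery path implies (the image layout, CRC placement and recovery pin are all invented for illustration, not taken from any real device):

    # Illustrative dual-path boot: fall back to an unwritable recovery loader
    # whenever the main image fails its checksum or a physical pin is held.
    import zlib

    def image_ok(image: bytes) -> bool:
        # Hypothetical layout: last 4 bytes hold a CRC32 of the rest.
        if len(image) < 5:
            return False
        body, stored = image[:-4], int.from_bytes(image[-4:], "little")
        return zlib.crc32(body) == stored

    def select_boot(main_image: bytes, recovery_pin_held: bool) -> str:
        if recovery_pin_held or not image_ok(main_image):
            return "recovery"  # minimal loader that only accepts new firmware
        return "main"

The point is that the recovery loader lives in storage the upgrade process can never touch, so no botched flash can brick the device.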

Other than that, the WRT54G is a fine wireless access point, precisely because the firmware is open source and you can get fancy extra features from other folks. But because that means more updating, there should be an escape hatch.

Car app -- 4 way stop broker

I called earlier for ideas for uses of ad-hoc wireless car data networks (with 802.11 or similar). I've been having trouble finding any compelling ones, because I think the space is narrow, especially for the driver. I don't see much data you will want that only other cars around you will have. It has to be fresh, live data (otherwise your car would have loaded it while parked), it has to be giant data (otherwise you would pick it up over the 3G or 4G cellular networks at lower data rates), and it must not suffer from both the connectivity and the data availability being intermittent and random in nature.

However, seeing the Dresner paper on a Reservation-Based Intersection Control Mechanism (with cool simulations) made me wonder if we might be able to get something sooner.

People might be too scared of the technology to handle a high-volume intersection but what about a low volume one, such as a 4-way stop? In particular, what if we have to assume many cars don't have a network?

A networked 4-way stop would have a network node broadcasting its existence and state. If the node at the intersection were down, it would act like an ordinary 4-way stop. Networked cars approaching the intersection would broker travel through it. (They would all have GPS and 802.11, and the node at the intersection would have a map.)

If a car were given access, red lights on the stop signs would light up. (Their power needs are much less than a traffic light's, possibly even solar.) The sign facing the cleared car would light yellow. The cleared driver would get a signal (audio and visual), inside the car, that they are cleared.

Drivers seeing the red light would stop (network-enabled or not) and wait for the light to go off after the cleared cars go through. Drivers seeing the yellow light who are not the cleared car (and thus not a networked car) would stop and proceed through the intersection like a normal 4-way stop.

The cleared driver would approach the intersection at reduced speed and check for drivers stopped at the other signs. If there were none, she would move through the intersection without stopping. If some were present, the display would say which approaches had networked cars. If all were networked, the driver would proceed. If some were not networked, the driver would proceed with more caution (perhaps a 5 mph rolling stop, ready to full-stop if needed), or speak a command or push a button to enhance the stop signal for the non-networked cars.
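
A toy sketch of the broker's core logic follows. The message shapes, the single-car grant policy and the sign states are all invented for illustration; a real system would need far more care about failure modes and timing.

    # Toy 4-way stop broker: clears one approaching car at a time.
    import heapq

    class StopBroker:
        def __init__(self):
            self.queue = []       # (eta_seconds, car_id, approach)
            self.cleared = None   # (car_id, approach) holding the intersection

        def request(self, car_id, approach, eta_seconds):
            # A networked car announces itself as it approaches.
            heapq.heappush(self.queue, (eta_seconds, car_id, approach))

        def tick(self):
            # Called periodically; returns the light state for each approach.
            if self.cleared is None and self.queue:
                _, car_id, approach = heapq.heappop(self.queue)
                self.cleared = (car_id, approach)
            if self.cleared is None:
                # No networked cars: behave as an ordinary dark 4-way stop.
                return {a: "dark" for a in "NESW"}
            _, cleared_approach = self.cleared
            return {a: ("yellow" if a == cleared_approach else "red")
                    for a in "NESW"}

        def passed(self, car_id):
            # The cleared car reports it is through the intersection.
            if self.cleared and self.cleared[0] == car_id:
                self.cleared = None

Non-networked cars never interact with the broker at all; they just obey the signs, which is what makes a mixed population workable.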

Return of the digital picture frame

A couple of years ago, a series of digital picture frame products appeared. Some took memory cards. One plugged into a modem so grandma could get new grandchild pictures each day without doing anything. But they were all super low resolution and high priced.

Panels have come down a lot recently. I see wall-mountable 1280x1024 panels getting to about $350 (though you still need to get power to them). That's a resolution I could handle.

How about throwing picture-frame ability into these? Either a memory card slot as before, or perhaps 802.11? In the latter case, you could even tolerate not bothering with JPEG decompression or much else on the panel -- let the PC do it all over the network.
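
A minimal sketch of that split, with the PC decoding and shipping raw pixels so the panel needs no decoder at all. The port number and two-field header are made up, and the Pillow imaging package is an assumed dependency:

    # PC side: decode the JPEG locally, send raw RGB to the panel.
    import socket, struct
    from PIL import Image  # assumes the Pillow package is installed

    def send_to_panel(jpeg_path, panel_host, port=9000):
        img = Image.open(jpeg_path).convert("RGB").resize((1280, 1024))
        raw = img.tobytes()                      # 1280*1024*3 bytes of raw RGB
        header = struct.pack("!HH", *img.size)   # width, height
        with socket.create_connection((panel_host, port)) as s:
            s.sendall(header + raw)

At roughly 3.9 megabytes a frame, that's a few seconds per picture over 802.11b -- fine for a picture frame, though video would want real compression on the panel.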

For a few extra bucks, however, a wireless, wall-mount, high-res flat panel display is something I can see people buying many of. Give them a full X server or a mini media server so you can stream MPEG video at them, and I could see a raft of applications as home display and control devices.

They could show you TV, your doorway security cam when the door rings, your caller-ID when the phone rings, weather, traffic, you name it, and be a digital picture frame when nothing else is going on.

Throw in an infrared receiver and they could work with remote controls.

Of course you could also make a mini box that has all this and a VGA output. They do make such boxes with TV output to be media servers connected to your TV and stereo. Has anybody seen one designed to mount flat on the wall behind a flat panel display?

All pointers suggest this product could be under $400 soon, then under $200, at which point you would see a lot of people buying one for each room. Right now 1280x1024 seems to be the hi-res sweet spot, though in fact 1280x854, or of course 1536x1024, to get a photographic aspect ratio, would be even nicer.

Maybe not for grandma's baby pictures yet, but who knows? If grandma has DSL, you could buy her one of these, and a cheap wireless access point even though she doesn't have any other wireless equipment, and with proper security, let the pictures and display be controlled by you or a photo managing service.

Better UI for WiFi password setup

The new generation of WiFi equipment supports WPA (WiFi Protected Access), a version of the IETF's EAP protocol, providing superior key authentication, with different keys for each user and keys that are much harder to crack. In corporate networks, the keys can be fetched via RADIUS -- effectively allowing a single login password to provide all network access securely.

That's great, but from what I have seen, not enough has been done to make a good user interface for the home network. I set up family members' wireless networks with WEP keys, and it's a pain even for a skilled person. When a person visits my house and wants wireless access, I need to key in a 32-character hex string.

For home networks, how about a nice simple protocol? When a new device attempts to connect to the network, note that. Then let the user go to the web configuration page for their access point. There it will list the new devices that have tried to get on the net; there will probably be only one. If the user clicks to approve it, transmit the WEP key back to that new device (encrypted with a public key the device provided) so it can now join the network. Possibly with reduced permissions, but that's a bonus.
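
A sketch of the key handoff at the heart of this. The message framing and device naming are invented, and the third-party "cryptography" package stands in for whatever the AP firmware would actually use:

    # Illustrative handoff: the new device offers a public key with its join
    # request; on approval, the AP returns the network key encrypted to it.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Device side: generate a keypair, include the public half in the request.
    device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    join_request = {"device": "visiting-laptop", "pubkey": device_key.public_key()}

    # AP side: runs when the owner clicks "approve" on the config page.
    def approve(request, network_key: bytes) -> bytes:
        return request["pubkey"].encrypt(network_key, OAEP)

    # Device side: decrypt the wrapped key and join the network.
    wrapped = approve(join_request, b"the-shared-network-key")
    network_key = device_key.decrypt(wrapped, OAEP)

Note there is no defense here against a man in the middle who substitutes his own public key; that is exactly the residual risk the "only one new device showed up" check is meant to cover.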

The main goal is plug-and-play (or close to it) joining of the encrypted network in the ordinary home. If there are multiple APs, they can share the key with WPA or other protocols. Or frankly, it's not even a giant burden to have to confirm the new user on all the APs, since most homes don't have more than one. (Mine does; I can't get the signal to go from one corner of my house to the other.)

Want to make it even easier for the unskilled home user? Put a button on the access point. Push it, then have the new laptop ask for a key. A light will go on if one and only one device asked for access, and the laptop will confirm it. Then push the button again, and the laptop gets a permanent key for access then and in the future. Of course, a web interface is cheaper than a button, and clearer, but this is dirt simple. If two devices try to get access, you get an error and have to try again or go to the web interface, but this would be rare, and a sign that perhaps somebody was trying to sneak in.

Changing the letters on phone keys

When SIP was designed for internet telephony, the feeling was to get rid of the phone number and replace it with IDs in the form of email addresses. E-mail addresses are of course easier to remember and read, though as a downside they tie your address to a domain, which is fine if it's yours, but silly if it's your service provider's.

However, to much surprise, handsets with numeric keypads not only continue to dominate the phone world, but their use is growing -- so much so that complex "texting" systems have been designed, and ship with phones, to let people enter text messages with the keypad.

In addition, popular IP phones feature not full keyboards but traditional keypads, even though they have room for more. Mobile phones largely won't have keyboards due to size constraints. As a result, IP phone users are using services like Free World Dialup and SipPhone so they can have phone numbers again -- the very thing we wanted to be rid of.

There is another ancient system involving phone numbers based on the letters Bell put on the keypad, starting with PEnnsylvania 6-5000 and moving to numbers like 1-800-FLOWERS.

Of course there are other answers to dialing -- menus, speech interfaces and so on. But if dialpads are with us for a while longer, does it make sense to rethink the system of finding words to spell out phone numbers?

If we use the existing system (with perhaps some minor mods), we could get a wide selection of spellable words by having longer numbers. There's no reason you can't have multiple numbers: a "normal" 7 (or 10) digit number, and then a longer number that is easier to remember but harder to key because of its length. Thus I could probably have "BRADTEMPLETON" (2723-836753866) as a phone number, as well as my regular 7-digit number for use in systems that can't handle long numbers. Cell phones can easily have the length of numbers extended, but even ordinary phones can do this with a * or # code.
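
The mapping is just the standard keypad lettering, so checking a candidate name takes only a few lines:

    # Convert a spellable name to its keypad digits (standard letter layout).
    KEYPAD = {"abc": "2", "def": "3", "ghi": "4", "jkl": "5",
              "mno": "6", "pqrs": "7", "tuv": "8", "wxyz": "9"}
    LETTER_TO_DIGIT = {ch: d for letters, d in KEYPAD.items() for ch in letters}

    def spell_to_digits(word: str) -> str:
        return "".join(LETTER_TO_DIGIT[c] for c in word.lower() if c.isalpha())

    print(spell_to_digits("BRADTEMPLETON"))   # 2723836753866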

Of course, the spell-a-word system has name collisions, so not everybody can get their preferred choice of name, but I would venture that everybody can have an easy-to-remember string. (As with domain names.)

I join board of Foresight Institute

I have accepted an invitation to join the Board of Directors for the Foresight Institute for Nanotechnology.

Foresight was created by Chris Peterson and Eric Drexler, author of "Engines of Creation" to act as advocate and watchdog in the field of molecular nanotechnology, of which Eric can claim to be the modern father. I've been a senior associate of the institute for some years and spoken at their conference. I will MC the conference coming up next weekend.

While I put most of my focus right now into issues of computer technology, software, civil rights and the internet, if you ask me what the true "next big thing" is, it's in nanotech, so I'm very pleased to be part of Foresight.

I should also note that Foresight is seeking a new executive director to manage the operations of the institute and take a leadership role in the future of nanotechnology. Contact me if this could be the job for you -- but please, plain-text ASCII resumes only, no word processor files.

What would we do with 802.11 in our car

I wrote some time ago of how I would like a car's MP3 player/computer to have 802.11, so that when it parks in my driveway, it notices it is home and syncs up new data and music.

That would be great, of course, but it seems there should be other things you would do with it. Networking with the car next to you on the road seems like a cool idea but I'm having trouble dreaming up applications. Listening to the music in the next car seems cute but probably would be boring after a while. Being able to talk to the driver of the next car seems like a nice social game (and it hardly needs 802.11) and might just result in road-rage.

If it were common, I could see it used for dating, since people seem to attach a strong romantic image to making eye contact with an attractive person in another car. There was even a dating service I read about long ago that gave you bumper stickers so you could contact somebody if you felt sparks. The personals have a section for this.

You might be able to create longer mesh networks, to share traffic info or the sort of things you used to share on CB if there are enough cars, but this would be highly unreliable, and any application here might be better served by broadcast data that goes over longer ranges. (We are already seeing broadcast traffic data services, though they will never warn about speed traps, I suspect.)

And of course, if you can connect back to the internet, that's highly useful, but again this would be highly intermittent connectivity. 802.11 isn't really set up for short-burst connectivity, though one could create a protocol that was -- good enough to fetch live audio, etc. But that ends up being just another microcell network. What can we get car to car?

So -- all sorts of cute little applications but nothing really compelling in my view. But since we will get wireless networking in our cars for the carport sync, I invite readers to dream up some apps.

New law on semiconductor growth

In 1965, Gordon Moore (then at Fairchild, later a founder of Intel) published a paper suggesting that the number of transistors on a chip would double every year. Later, the figure was revised to 18 months, which came true in part due to marketing pressure to meet the law.

Recently, Intel revised the law to set the time at two years.

So this suggests a new law, that the time period in Moore's Law doubles about every 40 years.

Will 3 tech trends change where we live?

I suspect that some time this decade we will see 3 tech trends converge which might make a big difference in the utility of remote real estate, land that currently remains undeveloped because it is so remote.

The first is already here, the internet. Many people can now use the internet to work from anywhere, and both long-range wireless broadband and satellite let you get the internet anywhere. That can give you data, video and phone service as well as the conduit for work.

It also gives you shopping, thanks to the commitment of the shipping companies to deliver to any address, even remote ones. Now you don't need much locally -- just your groceries and urgent needs. Everywhere now has a giant bookstore and a giant everything-else store if you can get UPS.

The second trend is cheaper remote power: possibly solar, but perhaps sooner the fuel cell, giving quiet, clean and cheap electricity anywhere you can get propane delivered. We're not there yet, but some products are already on the market. If that doesn't pan out, there are other improving forms of off-grid power.

The next is the return of cheaper general aviation, allowing people to own planes so they can live far from cities and still get to them quickly. This is the only one of these trends to see a recent reversal, as 9/11 has put general burdens on aviation. Today, the money you save on the cost of a home in a remote location compared to a big city can easily buy that plane.

Some things are still harder, including schooling and of course an active social life. But for a component of society that wanted to live remotely but could not make it workable, this may be about to change. Suddenly that remote hilltop with the fabulous view that was undeveloped because it was off-grid and too remote for the good life may get a house on it. We may see a lot of this.

Telling good patents from bad

Many people feel there's a patent law crisis underway. The Patent Office has been granting patents that either seem obvious or aren't the sort of thing that should be patented. Some advance the view that software shouldn't be patentable at all, just as mathematics is not patentable.

I don't go that far, for reasons I will explain. But I have found a common thread in many of the bad patents which could be a litmus test for telling the bad from the good.

Patent law, as we know, requires inventions to be novel and not obvious to one skilled in the art.

But the patent office has taken too liberal a definition of novel. It grants patents when the problem is novel and the filer is the first to try to solve it. As such, their answer to the new question is novel.

The better patents are ones that solve older problems.

Amazon was one of the earliest internet shopping operations, so of course they were among the first to look hard at the UI for that style of shopping, and thus the first to file an invention called one-click buy. But one-click buy was really just an obvious answer to a new problem. The same applies to XOR cursors, browser plug-ins, and streaming audio and video.

Some patents, however, are deserving. I remember seeing CS professors give lectures in the mid-70s about how Huffman coding was provably the best form of data compression, even after Ziv and Lempel had published their paper on their compression algorithms. They took a very old problem and came up with a new answer. Key management in cryptography was a 2000-year-old problem, and Diffie, Hellman and Merkle came up with a bold new answer. (As did cryptographers at British intelligence, but I still don't think that makes it obvious.)

While it would not solve every problem, I think if patent examiners asked, "How long has somebody been trying to solve the problem this invention solves?" and held off patents when the problem was novel, or at least applied more scrutiny, we would have a lot less problem with the patent system.

Many people simply say, "we should not allow patenting of software."

This has always bothered me. To me, software and hardware are the same thing, and the rest of the world is slowly realizing that. The virtual world is the real world, and having one law for what is done in software and another for what is done in hardware is a poor course to take.

Foresight Conference

The weekend of May 14th, I will be attending (and MCing part of) the Foresight Senior Associates Conference. This conference is always a lot of fun, with many at-the-edge (and beyond) ideas about nanotech, AI, anti-aging and other related topics. It's run by my friends Chris Peterson and Eric Drexler and their Foresight Institute. You may have read Eric's book "Engines of Creation."

They are offering readers of my blog a $200 discount on attending. To attend, you must be a senior associate, which requires a $250 annual donation, so the discount just about compensates for that. If you're into futurism, this is a fun place to be.

Why don't cell phones have USB?

In line with earlier thoughts about universal DC power, let me ask why cell phones haven't standardized on USB (or a mini-USB plug) as an interface.

USB provides power. Not as much as some chargers, but enough to get a decent rate to many phones. And it has data, which can be used for phone control and configuration, speakerphone and headset interfaces, address book sync, ringtone download, memory card download, data-modem connections to PCs and anything else, all with one standard plug.

Every cell store has a rack of scores of adapters, chargers and cables. Each time you get a new phone they want to sell you new accessories, I guess. We have a standard; why don't we use it, or extend it enough to be usable?

(I'll admit it's not a good headset interface due to USB's silly master-slave protocol, since to connect to the PC the phone would be a slave, and to connect to the headset it would be the master. But this can be worked around, and I'll tolerate an extra headset jack.)


Offshore patient monitoring

As you might guess from the prior entry, somebody I know recently had an ICU visit. The hospital had to cut back staff, laying off nurses' aides and hiring some extra nurses, then making the nurses do the aides' former work (changing sheets, etc.) because of regulations forcing them to have a higher ratio of nurses to patients. So, more nurses per patient, but the nurses end up doing less actual nursing per patient because they are doing the work the aides did. Clever, no?

Anyway, to add fuel to the offshore outsourcing debate, I wondered how practical it would be to outsource patient watching. A trained nurse in a lower-income area, possibly on the other side of the world, would watch a patient via a live video feed and data feeds from all the instruments. If they saw a problem, they would send an alert to a physically present nurse or doctor. They could see and talk to the patient, if the patient is responsive.

Since the bandwidth for this would be expensive, I imagine lower-res video for real time, though still good enough to see important things, with remote pan and zoom control. However, on demand, they could jump up the bandwidth during an event. They would also be able to send a command to replay something they just saw in full resolution, with some delay.

To do this, the local recorder would record the full-resolution video, even HDTV, and keep it for an hour on a hard disk, while transmitting a lower-res version live. Since most hospital beds are static scenes, this would compress well. Motion, instead of causing artifacts, would just call for more bandwidth from the total pool. However, when the watcher says "let me see the last 10 seconds," the patient's recorder would retransmit it in full HDTV if necessary.
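
A sketch of that recorder's core: a ring buffer holding roughly an hour of full-resolution frames, a downscaled live path, and a replay call that pulls originals back out. The frame rate, buffer length and the placeholder downscaler are illustrative assumptions:

    # Ring buffer of full-res frames with a low-res live path and HD replay.
    from collections import deque
    import time

    FPS = 30                 # assumed capture rate
    BUFFER_SECONDS = 3600    # keep about an hour, as described above

    class PatientRecorder:
        def __init__(self):
            self.frames = deque(maxlen=FPS * BUFFER_SECONDS)  # (time, frame)

        def capture(self, full_res_frame: bytes) -> bytes:
            self.frames.append((time.time(), full_res_frame))
            return downscale(full_res_frame)   # this copy goes out live

        def replay(self, seconds: float):
            # Full-resolution frames from the last `seconds`, on demand.
            cutoff = time.time() - seconds
            return [f for (t, f) in self.frames if t >= cutoff]

    def downscale(frame: bytes) -> bytes:
        # Placeholder: a real system would re-encode at lower resolution.
        return frame[: len(frame) // 16]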

But the main point is that the overseas workers might be so cost-effective that you could have near full-time monitoring of a patient by a skilled professional. In many hospitals and nursing homes, the staff might visit only once every few hours, or every 15 minutes at best. You can die in 15 minutes.

Of course, it's spooky from a privacy standpoint to be watched all the time, so this would not be for everybody. And better instrumentation that's non-intrusive and can detect emergency events quickly would be even better, though nothing will do as well as a trained person right now. This might also allow more effective home care, though in that case it might be too long before an ambulance arrives if an emergency is seen on the monitor. And you had better hope your internet connection does not go down.

Still, there's a lot to say for home care, considering just how many people die or suffer greatly due to hospital-caught infections. As I noted earlier, they are the 4th leading cause of death.

New mobile domain another bad idea

You may have seen a new proposal for a "mobile" top-level domain name for use by something called "mobile users," whatever they are. (The domain will not actually be named .mobile; rumours are they are hoping for a coveted one-letter TLD like .m, "to make it easier to type on a mobile phone.")

Centuries ago, as trademark law began its evolution, we learned one pretty strong rule about building rules for a name system for commerce, and even for non-commerce.

Nobody should be given ownership of generic terms. Nobody should have ownership rights in a generic word like "apple" -- not Apple Computer, not Apple Records, not the Washington State Apple Growers, not a man named John Apple.

Rather, generics must be shared. Ownership rights can accrue to them only in specific contexts that are not generic. Because the word "Apple" has no generic meaning when it comes to computers, we allow a company to get rights in that name when applied to computers. A different company has those rights when it applies to records. More than this, different parties could own the same term with the same context in two different cities. There is probably a "China Delight" restaurant in your town.

We hammered out the rules to manage such naming systems literally over centuries, with many laws and zillions of court cases.

Then, when DNS came along we (and I include myself since I endorsed it at the time) threw it all away. We said, when it came to naming on the internet, we would create generic top level domains, and let people own generic names within them.

Thus, "com" for commerce has within it "drugstore.com." Centuries of law establshed nobody could own the generic word "drugstore" but when it comes to names used on the internet, we reversed that. No wonder that company paid near a million for that domain as I recall, and at the record, the inflated number of 7.5 million was paid for business.com

The old TLDs have that mistake built into them. On the internet, we are the only EFF organization because we were first. Nobody else can be that.

The new TLDs continue that trend. Be it .museum, which allows one body to control the generic word museum, or a new proposal for .mobile.

Because of this, people fight over the names, pay huge sums, sue and insist only one name is right for them.

I maintain that the only way to get a competitive, innovative space is to slowly get rid of the generics and allow a competitive space of branded TLDs for resale: .yahoo, .dunn, .yellowpages, .google, .wipo, and a hundred other branded resellers competing on an even footing to create value in their brands and win customers with innovative designs, better service, lower prices and all the usual things. I presume .wipo would offer trademark holders powerful protections within their domain. Let them. Perhaps .braddomains would, when you bought a domain, give you every possible typo and homonym of your domain, so people who hear it on the radio won't get it wrong when typing it in. Perhaps .centraal (the former, non-generic name of the now-defunct "RealNames" company) would follow their keyword rules. I know .frankston would offer permanent numeric IDs to all. Let them all innovate; let them all compete.

We're nowhere near this system, but I didn't just make up the idea of not owning generics; I think centuries of experience show it is the best way to go. I wrote this today in response to the .mobile proposal, but you can find much more on these ideas in my site of DNS essays, including a plan to break up ICANN, and essays on generics and on the goals we have for a domain system.

Down with P2P software that isn't P2P

No surprise that after the RIAA started filing lawsuits against people they allege were distributing lots of copyrighted files, a movement has sprung up to build filesharing networks where the user hosting data can't be traced so easily.

Today, on Kazaa, all they need to do is search for a file, look at what a user is sharing, and try to download it. That gives them the IP address of the party in question.

The suits will push people into systems that don't make that information so easily available. One common design being pushed involves removing the peer-to-peer aspect that made these systems so efficient and capable of distributing files: the connections are no longer direct, and the data flows through one or more intermediaries.

In such a system, you can request a file, but the data comes via an intermediary. Since that intermediary won't log what it passes on (it is just a router), you would have to have a live wiretap on the intermediary to find where the data came from -- and that may be another intermediary. You would need live wiretaps on half the net to actually track somebody. The intermediaries have no idea what data they are routing, and are no more guilty of copyright infringement than UUNET is for owning routers.

But this is of course terribly inefficient, especially since the intermediaries are mostly at network endpoints.

There are designs which protect the privacy of users, but don't let the RIAA sue the hosting system. One was the Mojo Nation project, which died, but has spun off technologies like HiveCache and MNet.

In Mojo Nation, files were broken up into many blocks, with some redundancy. For example, a file might have 8 component blocks, any 4 of which could reassemble the file. Those 8 blocks would themselves be replicated all over the net. You could find out which IP sent you a block, but the owner of that IP address would have no idea what was in it; it's just an encrypted black box to them, so they are not liable. At best you could order them to delete the block, after showing that it's part of a copyrighted file, using a DMCA takedown. But that's not practical to do.
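
A toy illustration of that "any 4 of 8" property, using polynomial interpolation over a prime field. This is not Mojo Nation's actual coding scheme; it only shows how k-of-n redundancy works in principle:

    # Toy 4-of-8 erasure code: the 4 data values determine a unique cubic
    # polynomial; any 4 of its 8 evaluations recover all the data.
    P = 2**31 - 1   # a prime; all arithmetic is mod P

    def lagrange_at(x, points):
        # Evaluate the polynomial passing through `points` at x (mod P).
        total = 0
        for i, (xi, yi) in enumerate(points):
            num = den = 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    def encode(data4, n=8):
        pts = list(enumerate(data4))              # data at x = 0..3
        return [(x, lagrange_at(x, pts)) for x in range(n)]

    def decode(any_four_shares):
        return [lagrange_at(x, any_four_shares) for x in range(4)]

    shares = encode([104, 105, 33, 7])   # a tiny 4-value "file"
    print(decode(shares[4:]))            # any 4 shares work: [104, 105, 33, 7]

Real systems work blockwise over GF(256) for efficiency, but the availability math is the same: an attacker must remove 5 of the 8 scattered blocks to destroy the file.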

At least it's P2P. It's sad that the RIAA's crusade will cause people to modify P2P networks into non-P2P, and gain the RIAA nothing.

I want universal DC power

I went around and counted: we seem to have around 30 brick and wall-wart DC power supplies plugged in around the house, and many more that are not plugged in, which charge or power various devices. More and more of what we buy is getting more efficient and lower power, which is good.

But it's time for standardization in DC power and battery charging. In fact, I would like to move to a world where DC devices don't come with a power supply by default, because you are expected to be able to power them at one of the standard voltage/current settings.

One early experiment is on airplanes. I have an adapter that takes the 12v from the airplane, and has many tips which put out different voltages for different laptops. These are expensive right now, but on the right track.

Our other early venture is USB, which provides up to 500mA at 5 volts. Many small devices now use USB for power if that's all they need. There are devices that plug into USB only for power; they don't use the data lines. Some come with a small cigarette-lighter plug that has a USB socket on it for car use. This includes cellular chargers, lights, etc.

I think a good goal would be a standardized data+power bus with a small number of standard plugs. One would be very tiny, for small devices, and provide only minimal USB-level power -- a couple of watts. Another would handle mid-level devices, up to a couple of amps. A third would be large and handle heavy-duty devices, up to say 20 amps, eventually replacing our wall plugs. There might be a fourth for industrial use.

In full form, the data bus would be used for the components to exchange just what power they want and what they have. Years ago that would have been ridiculous overkill; today such parts are cheap. However, to keep it simple, there would be a basic passive option -- perhaps as simple as a finely tuned resistor in place of the data components -- to make it easy and cheap to adapt today's devices.

A fully smart component would plug into the smart supply and get a small "carrier" voltage designed to run the power electronics only. A protocol would establish what power the supply can provide and what the component wants, and then that power would be delivered.
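
A sketch of what that negotiation might look like, with all profile numbers and message names invented for illustration:

    # Toy power negotiation: the supply advertises the profiles it can source;
    # the component picks one that meets its needs, else stays on the carrier.
    SUPPLY_PROFILES = [(5.0, 0.5), (12.0, 2.0), (19.0, 4.7)]  # (volts, max amps)

    def negotiate(supply_profiles, wanted_volts, wanted_amps):
        for volts, max_amps in supply_profiles:
            if abs(volts - wanted_volts) < 0.25 and max_amps >= wanted_amps:
                return (volts, wanted_amps)   # supply switches on this profile
        return None                           # stay on the low-power carrier

    print(negotiate(SUPPLY_PROFILES, 12.0, 1.5))   # (12.0, 1.5)
    print(negotiate(SUPPLY_PROFILES, 48.0, 1.0))   # None

The passive fallback would collapse this to reading one resistor value that encodes a fixed voltage request.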

Fix some eBay feedback problems

Like many, I am interested in reputation systems, and eBay has built the largest public reputation system. Many have noted that feedback on eBay is overwhelmingly positive, so much so that a mere 97% positive rating would be a reason to be wary of a seller.

It’s also noted that people do this because they are scared of revenge feedback: I give you a negative, you give one back to me. One would think that, since the buyer’s only real duty is to send the money, the seller should provide positive feedback immediately upon receipt of that money, but they don’t.

Some fixes have been proposed, including:

  • letting you see the count of total auctions the party has been a buyer or seller in, so you can see how many resulted in no feedback at all. Right now only eBay knows how large that number is.
  • double-blind feedback, i.e. feedback is not revealed until both parties have entered it, or, if only one party enters it, until the feedback period has expired.
  • marking revenge feedback, i.e. putting a mark next to negatives that were entered in response to an outgoing negative.

With these in place, you could have very low fear of revenge feedback, and there would be no argument about who should go first.
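
A sketch of the double-blind rule from the list above (the field names and 60-day window are illustrative assumptions):

    # Double-blind feedback: hold ratings until both sides submit, or until
    # the feedback window expires.
    from dataclasses import dataclass, field
    import time

    FEEDBACK_WINDOW = 60 * 86400   # assumed 60-day window, in seconds

    @dataclass
    class TransactionFeedback:
        opened_at: float = field(default_factory=time.time)
        ratings: dict = field(default_factory=dict)   # party -> rating

        def submit(self, party: str, rating: str):
            self.ratings.setdefault(party, rating)    # first submission is final

        def visible(self) -> dict:
            both_in = len(self.ratings) == 2
            expired = time.time() > self.opened_at + FEEDBACK_WINDOW
            return dict(self.ratings) if (both_in or expired) else {}

Because nothing is revealed until both sides commit (or time out), a negative can never be a response to the other side’s negative.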

This idea’s fairly obvious, so like many other obvious ideas about eBay, one wonders whether eBay perceives some benefit to itself from not doing it, though it’s hard to see what. I’m also curious as to why eBay doesn’t offer a “going, going, gone” auction, where the auction closes only after 5 minutes with no bidding. That seems to be in the interests of sellers (and of eBay, which gets a cut of the selling price), and it’s certainly not something they are unaware of.

The only proposition I’ve heard is that eBay has decided there is positive value to itself (and possibly to sellers) in bid-sniping, the practice of bidding preemptively in the last minute of an auction so as not to give other live bidders (who didn’t use the automatic rebidder) a chance to come in with more. The only way this could be good would be if snipers deliberately overbid in order to trump anything. Any research or thoughts on this? It may also be the case that sniped auctions are more “fun,” or more of a contest. And finally, having fixed closing times does make it easier to participate in multiple auctions for the same thing.

I have also posted updated eBay thoughts and an even simpler system which eliminates revenge and in fact now have an eBay tag for all eBay related posts, including thoughts on eBay’s solution to all this.

Please Note: This thread is for discussion of philosophical or abstract aspects of the feedback system. Please do not post stories of your own particular problems from a particular seller or transaction. Keep it abstract.
