Submitted by brad on Sat, 2004-05-22 09:04.
The new generation of WiFi equipment supports WPA (WiFi Protected Access), which uses the IETF's EAP protocol, providing superior key authentication: each user gets a different key, and the keys are much harder to crack. In corporate networks, the keys can be fetched via RADIUS -- effectively allowing a single login password to provide all network access securely.
That's great, but from what I have seen, not enough has been done to make a good user interface for the home network. I set up family members' wireless networks with WEP keys, and it's a pain even for a skilled person. When a person visits my house and wants wireless access, I need to key in a 32-byte hex string.
For home networks, how about a nice simple protocol. When a new device attempts to connect to the network, note that. Then let the user go to the web configuration page for their access point. There it will list the new devices that have tried to get on the net. There will probably be only one. If the user clicks to approve it, transmit the WEP key back to that new device (encrypted with a public key the device provided) so it can now join the network. Possibly with reduced permissions, but that's a bonus.
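The approval flow described above can be sketched in a few lines. This is a toy, with Diffie-Hellman standing in for whatever public-key scheme the new device provides, toy-sized parameters, and hypothetical class names -- a real implementation would use a vetted crypto library:

```python
# Toy sketch of the proposed flow: a new device "knocks" with a public value,
# the owner approves it on the AP's web page, and the AP sends back the WEP
# key encrypted under a Diffie-Hellman shared secret. NOT real crypto.
import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime; toy-sized, not secure for real use
G = 3

class Device:
    def __init__(self, name):
        self.name = name
        self._priv = secrets.randbelow(P - 2) + 1
        self.pub = pow(G, self._priv, P)     # sent along with the join attempt

    def receive_key(self, ap_pub, blob):
        shared = pow(ap_pub, self._priv, P)
        pad = hashlib.sha256(str(shared).encode()).digest()
        return bytes(a ^ b for a, b in zip(blob, pad))

class AccessPoint:
    def __init__(self, wep_key):
        self.wep_key = wep_key
        self.pending = {}                    # devices that tried to join

    def knock(self, device):
        self.pending[device.name] = device   # shows up on the web config page

    def approve(self, name):
        device = self.pending.pop(name)
        priv = secrets.randbelow(P - 2) + 1
        shared = pow(device.pub, priv, P)
        pad = hashlib.sha256(str(shared).encode()).digest()
        blob = bytes(a ^ b for a, b in zip(self.wep_key, pad))
        return pow(G, priv, P), blob         # AP's public value + encrypted key

ap = AccessPoint(wep_key=b"13-byte-key!!")
laptop = Device("guest-laptop")
ap.knock(laptop)                             # appears in the pending list
ap_pub, blob = ap.approve("guest-laptop")    # the owner's one click
print(laptop.receive_key(ap_pub, blob))      # b'13-byte-key!!'
```

The point is that the owner's only interaction is the single approve click; the key material never crosses the air in the clear.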
The main goal is plug and play (or near to it) joining of the encrypted network in the ordinary home. If there are multiple APs, they can share the key with WPA or other protocols. Or frankly, it's not even a giant burden to have to confirm the new user to all the APs, since most homes don't have more than one. (Mine does, I can't get the signal to go from one corner of my house to the other.)
Want to make it even easier for the unskilled home user? Put a button on the access point. Push it, then have the new laptop ask for a key. A light will go on if one and only one device asked for access, and the laptop will confirm it. Push the button again and the laptop gets a permanent key for access, now and in the future. Of course a web interface is cheaper than a button, and clearer, but this is dirt simple. If two devices try to get access, you get an error and have to try again or go to the web interface -- but this would be rare, and a sign that perhaps somebody was trying to sneak in.
Submitted by brad on Mon, 2004-05-17 07:13.
When SIP was designed for internet telephony, the feeling was to get rid of the phone number and replace it with IDs with the form of email addresses. E-mail addresses are of course easier to remember and read, though as a downside they tie your address to a domain, which is fine if it's yours, but silly if it's your service provider's.
However, to much surprise, handsets with numeric keypads not only continue to dominate telephony, their use is growing. So much so that complex "texting" systems have been designed and shipped with phones to let people enter text messages on the keypad.
In addition, popular IP phones feature not full keyboards but traditional keypads, even though they have room for more. Mobile phones largely won't have keyboards due to size constraints. As a result, IP phone users are turning to services like Free World Dialup and SipPhone so they can have phone numbers again -- the very thing we wanted to get rid of.
There is another ancient system involving phone numbers, based on the letters Bell put on the keypad -- starting with Pennsylvania-6-5000 and moving to numbers like 1-800-FLOWERS.
Of course there are other answers to dialing -- menus, speech interfaces and so on. But if dialpads are with us for a while longer, does it make sense to rethink the system of finding words to spell out phone numbers?
If we use the existing system (with perhaps some minor mods) we could get a wide selection of spellable words by having longer numbers. There's no reason you can't have multiple numbers -- a "normal" 7- (or 10-) digit number and then a longer number that is easier to remember but harder to key because of its length. Thus I could probably have "BRADTEMPLETON" (2723-836753866) as a phone number, as well as my regular 7-digit number for use in systems that can't handle long numbers. Cell phones can easily have the length of numbers extended, and even ordinary phones can do this with a * or # code.
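The mapping from a mnemonic to its dialable digits is just the standard keypad lettering, which checks out against the example above:

```python
# Map a spellable name onto standard keypad digits (ABC=2 ... WXYZ=9).
KEYPAD = {c: d for d, letters in {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}.items() for c in letters}

def word_to_digits(word):
    """Return the digit string a caller would key in for this word."""
    return "".join(KEYPAD[c] for c in word.upper())

print(word_to_digits("BRADTEMPLETON"))  # 2723836753866
print(word_to_digits("FLOWERS"))        # 3569377
```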
Of course the spell a word system has name collisions, so not everybody can get their preferred choice of name, but everybody can have an easy to remember string, I would venture. (Like with domain names.)
Submitted by brad on Thu, 2004-05-06 09:45.
I have accepted an invitation to join the Board of Directors for the Foresight Institute for Nanotechnology.
Foresight was created by Chris Peterson and Eric Drexler, author of "Engines of Creation," to act as advocate and watchdog in the field of molecular nanotechnology, of which Eric can claim to be the modern father. I've been a senior associate of the institute for some years and have spoken at their conferences. I will MC the conference coming up next weekend.
While I put most of my focus right now into issues of computer technology, software, civil rights and the internet, if you ask me what the true "next big thing" is, it's in nanotech, so I'm very pleased to be part of Foresight.
I should also note that Foresight is seeking a new executive director to manage the operations of the institute and take a leadership role in the future of nanotechnology. Contact me if this could be the job for you -- but please, plain-text ASCII resumes only, no word processor files.
Submitted by brad on Tue, 2004-05-04 16:12.
I wrote some time ago of how I would like a car's MP3 player/computer to have 802.11, so that when it parks in my driveway, it notices it is home and syncs up new data and music.
That would be great, of course, but it seems there should be other things you would do with it. Networking with the car next to you on the road seems like a cool idea but I'm having trouble dreaming up applications. Listening to the music in the next car seems cute but probably would be boring after a while. Being able to talk to the driver of the next car seems like a nice social game (and it hardly needs 802.11) and might just result in road-rage.
If common, I could see it for dating, since people seem to attach a strong romantic image to making eye contact with an attractive person in another car. There was even a dating service I read about long ago which gave you bumper stickers so you could contact somebody if you felt sparks. The personals have a section for this.
You might be able to create longer mesh networks, to share traffic info or the sort of things you used to share on CB if there are enough cars, but this would be highly unreliable, and any application here might be better served by broadcast data that goes over longer ranges. (We are already seeing broadcast traffic data services, though they will never warn about speed traps, I suspect.)
And of course, if you can connect back to the internet that's highly useful, but again this would be highly intermittent connectivity. 802.11 isn't really set up for short-burst connectivity though one could create a protocol that was, good enough to fetch live audio etc. But this ends up being just another microcell network -- what can we get car to car?
So -- all sorts of cute little applications but nothing really compelling in my view. But since we will get wireless networking in our cars for the carport sync, I invite readers to dream up some apps.
Submitted by brad on Thu, 2004-04-29 12:54.
In 1965, Gordon Moore of Intel published a paper suggesting that the number of transistors on a chip would double every year. Later, the period was revised to 18 months, which became true in part due to marketing pressure to meet the law.
Recently, Intel revised the law to set the time at two years.
So this suggests a new law, that the time period in Moore's Law doubles about every 40 years.
Submitted by brad on Tue, 2004-04-20 14:58.
I suspect that some time this decade we will see 3 tech trends converge which might make a big difference in the utility of remote real estate, land that currently remains undeveloped because it is so remote.
The first is already here, the internet. Many people can now use the internet to work from anywhere, and both long-range wireless broadband and satellite let you get the internet anywhere. That can give you data, video and phone service as well as the conduit for work.
It also gives you shopping, thanks to the commitment of the shipping companies to deliver to any address, even remote ones. Now you don't need much locally -- just your groceries and urgent needs. Everywhere now has a giant bookstore and a giant everything-else store if you can get UPS.
The second trend is cheaper remote power -- possibly solar, but perhaps sooner the fuel cell, giving quiet, clean and cheap electricity anywhere you can get propane delivered. We're not there yet, but some products are already on the market. If not, there are other improving forms of off-grid power.
The next is the return of cheaper general aviation, allowing people to own planes so they can live far from cities and get to them quickly. This is the only trend to see a recent reversal, as 9/11 has put general burdens on aviation. Today the money you save on the cost of a home, comparing a remote location to a big city, can easily buy that plane.
Some things are still harder, including schooling and of course an active social life. But for a component of society that wanted to live remotely but could not make it workable, this may be about to change. Suddenly that remote hilltop with the fabulous view that was undeveloped because it was off-grid and too remote for the good life may get a house on it. We may see a lot of this.
Submitted by brad on Thu, 2004-04-08 17:47.
Many people feel there's a patent law crisis underway. The Patent Office has been granting patents that either seem obvious or aren't the sort of thing that should be patented. Some advance the view that software shouldn't be patentable at all, just as mathematics is not patentable.
I don't go that far, for reasons I will explain. But I have found a common thread in many of the bad patents which could be a litmus test for telling the bad from the good.
Patent law, as we know, requires inventions to be novel and not obvious to one skilled in the art.
But the patent office has taken too liberal a definition of novel. They are granting patents when the problem is novel, and the filer is the first to try to solve it. As such their answer to the new question is novel.
The better patents are ones that solve older problems.
Amazon was one of the earliest internet shopping operations. So of course they were among the first to look hard at the UI for that style of shopping, and thus the first to file on an invention called one-click buy. But one-click buy was really just an obvious answer to a new problem. The same applies to XOR cursors, browser plug-ins, and streaming audio and video.
Some patents, however, are deserving. I remember seeing CS professors give lectures in the mid-70s about how Huffman coding was provably the best form of data compression -- even after Ziv and Lempel published their paper on their compression algorithms. They took a very old problem and came up with a new answer. Key management in cryptography was a 2000-year-old problem, and Diffie, Hellman and Merkle came up with a bold new answer. (As did cryptographers at British intelligence, but I still don't think that makes it obvious.)
While it would not solve every problem, I think if patent examiners asked, "How long has somebody been trying to solve the problem this invention solves?" and held off patents when the problem was novel, or at least applied more scrutiny, we would have a lot less problem with the patent system.
Many people simply say, "we should not allow patenting of software."
This has always bothered me. To me, software and hardware are the same thing, and the rest of the world is slowly realizing that. The virtual world is the real world, and having one law for what is done in software and another for what is done in hardware is a poor course to take.
Submitted by brad on Tue, 2004-03-30 17:38.
The weekend of May 14th, I will be attending (and MCing for part of) the Foresight Senior Associates Conference. This conference is always a lot of fun, with many at the edge (and beyond) ideas about nanotech, AI, anti-aging and other related topics. It's run by my friends Chris Peterson and Eric Drexler and their Foresight Institute. You may have read Eric's book "Engines of Creation."
They are offering readers of my blog a $200 discount on attending. To attend, you must be a senior associate, which requires a $250 annual donation, so the discount just about compensates for that. If you're into futurism, this is a fun place to be.
Submitted by brad on Mon, 2004-03-29 09:44.
In line with earlier thoughts about universal DC power, let me ask why cell phones haven't standardized on USB (or a mini-USB plug) as an interface.
USB provides power. Not as much as some chargers, but enough to charge many phones at a decent rate. And it has data, which can be used for phone control and configuration, speakerphone and headset interfaces, address book sync, ringtone download, memory card download, data-modem connections to PCs and anything else, all with one standard plug.
Every cell store has a rack of scores of adapters, chargers and cables. Each time you get a new phone they want to sell you new accessories, I guess. We have a standard. Why don't we use it, or extend it enough to be usable?
(I'll admit it's not a good headset interface due to USB's silly master-slave protocol, since to connect to the PC the phone would be a slave, and to connect to the headset it would be the master. But this can be worked around, and I'll tolerate an extra headset jack.)
See below for some interesting safety ideas...
Submitted by brad on Mon, 2004-03-22 09:11.
As you might guess from the prior entry, somebody I know recently had an ICU visit. The hospital had to cut back staff, laying off nurses' aides and hiring some extra nurses, then making the nurses do the former work of the aides (changing sheets, etc.) because of regulations forcing them to have a higher ratio of nurses to patients. So: more nurses per patient, but the nurses end up doing less actual nursing per patient because they are doing the work the aides did. Clever, no?
Anyway, to add fuel to the offshore outsourcing debate, I wondered how practical it would be to outsource patient watching. A trained nurse in a lower-income area, possibly on the other side of the world, would watch a patient via a live video feed and data feeds of all the instruments. If they see a problem, they would send an alert to a physically present nurse or doctor. They could see and talk to the patient, if the patient is responsive.
Since the bandwidth would be expensive for this, I imagine a lower-res video for real-time, though still good enough to see important things with remote pan and zoom control. However, on-demand they could jump up the bandwidth during an event. They would also be able to send a command to replay something they saw in full-resolution, with some delay.
To do this, the local recorder would record the full-resolution video, even HDTV, and keep it for an hour on a hard disk. It would transmit a lower-res version live. Since most hospital beds are static scenes, this would compress well. Motion, instead of causing artifacts, would just call for more bandwidth from the total pool. However, when the watcher says "let me see the last 10 seconds," the patient's recorder would re-transmit it in full HDTV if necessary.
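The two-tier recorder above is essentially a ring buffer plus a downscaler. A minimal sketch, with stand-in frame data and a made-up class name:

```python
# Sketch of the bedside recorder: keep the last hour of full-res frames in a
# ring buffer while sending a low-res version live; on demand, replay any
# recent window at full resolution. Frames here are just placeholder bytes.
import collections

class BedsideRecorder:
    def __init__(self, fps=30, keep_seconds=3600):
        # deque with maxlen silently drops the oldest frame when full
        self.ring = collections.deque(maxlen=fps * keep_seconds)

    def capture(self, timestamp, full_res_frame):
        self.ring.append((timestamp, full_res_frame))
        return self._downscale(full_res_frame)   # what goes out live

    def _downscale(self, frame):
        return frame[:8]                         # stand-in for real transcoding

    def replay(self, start, end):
        """Full-resolution frames for [start, end], e.g. 'the last 10 seconds'."""
        return [f for (t, f) in self.ring if start <= t <= end]

rec = BedsideRecorder(fps=1, keep_seconds=5)     # tiny buffer for the demo
for t in range(10):
    rec.capture(t, f"frame-{t}-fullres".encode())
print(len(rec.replay(7, 9)))   # 3 -- only the most recent 5 frames survive
```

The bandwidth trade-off lives entirely in `_downscale`; the replay path bypasses it, which is the whole point.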
But the main point is the overseas workers might be so cost effective that you can have near full-time monitoring of a patient by a skilled professional. In many hospitals and nursing homes, the staff might visit only once every few hours, perhaps every 15 minutes at best. You can die in 15 minutes.
Of course it's spooky from a privacy standpoint to be watched all the time, so this would not be for everybody. Better instrumentation that's non-intrusive and can detect emergency events quickly would be better still, though nothing today will do as well as a trained person. This might also allow more effective home care, though in that case it might be too long before an ambulance arrives when an emergency is seen on the monitor. And you had better hope your internet connection does not go down.
Still, there's a lot to say for home care, considering just how many people die or suffer greatly due to hospital-caught infections. As I noted earlier, they are the 4th leading cause of death.
Submitted by brad on Thu, 2004-03-11 08:58.
You may have seen a new proposal for a "mobile" top-level domain for use by something called "mobile users," whatever those are. (The domain will not actually be named .mobile; rumours are they are hoping for a coveted one-letter TLD like .m, "to make it easier to type on a mobile phone.")
Centuries ago, as trademark law began its evolution, we learned one pretty strong rule about building rules for a name system for commerce, and even for non-commerce.
Nobody should be given ownership of generic terms. Nobody should have ownership rights in a generic word like "apple" -- not Apple Computer, not Apple Records, not the Washington State Apple Growers, not a man named John Apple.
Rather, generics must be shared. Ownership rights can accrue to them only in specific contexts that are not generic. Because the word "Apple" has no generic meaning when it comes to computers, we allow a company to get rights in that name when applied to computers. A different company has those rights when it applies to records. More than this, different parties could own the same term with the same context in two different cities. There is probably a "China Delight" restaurant in your town.
We hammered out the rules to manage such naming systems literally over centuries, with many laws and zillions of court cases.
Then, when DNS came along we (and I include myself since I endorsed it at the time) threw it all away. We said, when it came to naming on the internet, we would create generic top level domains, and let people own generic names within them.
Thus, "com" for commerce has within it "drugstore.com." Centuries of law established that nobody could own the generic word "drugstore," but when it comes to names used on the internet, we reversed that. No wonder that company paid nearly a million for that domain, as I recall, and at the peak, the inflated sum of $7.5 million was paid for business.com.
The old TLDs have that mistake built into them. On the internet, we are the only EFF organization because we were first. Nobody else can be that.
The new TLDs continue that trend. Be it .museum, which allows one body to control the generic word museum, or a new proposal for .mobile.
Because of this, people fight over the names, pay huge sums, sue and insist only one name is right for them.
I maintain that the only way to get a competitive innovative space is to slowly get rid of the generics and allow a competitive space of branded TLDs for resale. .yahoo, .dunn, .yellowpages, .google, .wipo, and a hundred other branded resellers competing on an even footing to create value in their brand and win customers with innovative designs, better service, lower prices and all the usual things. I presume .wipo would offer trademark holders powerful protections within their domain. Let them. Perhaps .braddomains would, when you bought a domain, give you every possible typo and homonym for your domain so people who hear it on the radio won't get it wrong typing it in. Perhaps .centraal (the former, non-generic name of the now-defunct "RealNames" company) would follow their keyword rules. I know .frankston would offer permanent numeric IDs to all. Let them all innovate, let them all compete.
We're nowhere near this system, but I didn't just make up the idea of not owning generics. I think centuries of experience shows it is the best way to go. I wrote this today in response to the .mobile proposal, but you can also find much more on the ideas in my site of DNS essays including this plan to break up ICANN, and essays on generics and also the goals we have for a domain system
Submitted by brad on Thu, 2004-02-26 13:21.
No surprise that after the RIAA started filing lawsuits against people they allege were distributing lots of copyrighted files, a movement has sprung up to build filesharing networks where the user hosting data can't be traced so easily.
Today, on Kazaa, all they need to do is try to find a file, look at what a user is sharing and try to download it. That gives them the IP address of the party in question.
The suits will push people into systems that don't make that information easily available. One common design being pushed involves removing the peer-to-peer aspect that made these systems so efficient at distributing files. Namely, the connections are no longer direct; the data flows through one or more intermediaries.
In this case, they can request a file but the data will come from an intermediary. Since that intermediary won't log what they pass on (they are just a router) you would have to have a live wiretap on the intermediary to find where the data came from, and that may be another intermediary. You would need live wiretaps on half the net to actually track somebody. The intermediaries have no idea what data they are routing, and are no more guilty of copyright infringement than UUNET is for owning routers.
But this is of course terribly inefficient, especially since the intermediaries are mostly at network endpoints.
There are designs which protect the privacy of users, but don't let the RIAA sue the hosting system. One was the Mojo Nation project, which died, but has spun off technologies like HiveCache and MNet.
In Mojo Nation, files were broken up into many blocks, with some redundancy. For example, a file might have 8 component blocks, any 4 of which can reassemble the file. Those 8 blocks would themselves be replicated all over the net. You could find out what IP address sent you a block, but the owner of that address would have no idea what was in it -- it's just an encrypted black box to them, so they are not liable. At best you could order them to delete the block after showing it's part of a copyrighted file via a DMCA takedown, but that's not practical to do.
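The any-k-of-n property comes from an erasure code. Mojo Nation's 4-of-8 scheme needs something like Reed-Solomon, but the smallest instance of the idea, 2-of-3 with an XOR parity block, fits in a few lines and shows the principle (the encryption of blocks that hides their contents is a separate layer, not shown here):

```python
# Toy 2-of-3 erasure code: two data halves A and B plus parity P = A xor B.
# Any two of the three blocks reconstruct the original file.
def split_2_of_3(data: bytes):
    if len(data) % 2:
        data += b"\x00"                       # pad to even length (toy framing)
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return {"A": a, "B": b, "P": parity}

def recover(blocks, orig_len):
    """Rebuild the file from any two surviving blocks."""
    if "A" in blocks and "B" in blocks:
        a, b = blocks["A"], blocks["B"]
    elif "A" in blocks:
        a = blocks["A"]
        b = bytes(x ^ y for x, y in zip(a, blocks["P"]))
    else:
        b = blocks["B"]
        a = bytes(x ^ y for x, y in zip(b, blocks["P"]))
    return (a + b)[:orig_len]

data = b"any two of three blocks suffice"
parts = split_2_of_3(data)
for lost in "ABP":                            # lose each block in turn
    kept = {k: v for k, v in parts.items() if k != lost}
    assert recover(kept, len(data)) == data
print("recovered from every single-block loss")
```

Generalizing to 4-of-8 replaces the XOR with arithmetic over a finite field, but the hosting-node's position is the same: it stores an opaque block it cannot interpret.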
At least it's P2P. It's sad that the RIAA's crusade will cause people to modify P2P networks into non-P2P, and gain the RIAA nothing.
Submitted by brad on Mon, 2004-01-26 10:23.
I went around and counted: we seem to have around 30 brick and wall-wart DC power supplies plugged in around the house, and many more that are not plugged in which charge or power various devices. More and more of what we buy is becoming more efficient and lower power, which is good.
But it's time for standardization in DC power and battery charging. In fact, I would like to move to a world where DC devices don't come with a power supply by default, because you are expected to be able to power them at one of the standard voltage/current settings.
One early experiment is on airplanes. I have an adapter that takes the 12v from the airplane, and has many tips which put out different voltages for different laptops. These are expensive right now, but on the right track.
Our other early venture is USB, which provides up to 500ma at 5 volts. Many small devices now use USB for power if that's all they need. There are devices that plug in to USB only for power, they don't use the data lines. Some come with a small cigarette lighter plug that has a USB socket on it for car use. This includes cellular chargers, lights etc.
I think a good goal would be a standardized data+power bus with a small number of standard plugs. One would be very tiny for small devices and only provide minimal USB-level power, a couple of watts. Another would handle mid-level devices, up to a couple of amps. A third would be large and handle heavy duty devices up to say 20 amps, replacing our wall plugs eventually. There might be a 4th for industrial use.
In full form, the data bus would be used for the components to exchange just what power they want and have. Years ago that would have been ridiculous overkill, today such parts are cheap. However, to make it simple there would be a basic passive system -- perhaps as simple as a finely tuned resistor in place of the data components -- to make it easy and cheap to adapt today's components.
A fully smart component would plug into the smart power and get a small "carrier" voltage designed to run the power electronics only. A protocol would establish what power the supply can provide and what the component wants, and then that power would be provided.
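The carrier-voltage negotiation might look like this in outline. The profiles, wattages and class names are illustrative, not any real standard:

```python
# Sketch of the negotiation: the supply advertises the (volts, amps) profiles
# it can source over the data pins, the device asks for one, and the full
# rail is switched on only after a match. All numbers are made up.
class Supply:
    def __init__(self, profiles):
        self.profiles = profiles              # set of (volts, max_amps)

    def offer(self):
        return self.profiles                  # sent while on carrier voltage

class SmartDevice:
    def __init__(self, wants):
        self.wants = wants                    # (volts, amps) the device needs

    def negotiate(self, supply):
        volts_needed, amps_needed = self.wants
        for volts, amps in sorted(supply.offer()):
            if volts == volts_needed and amps >= amps_needed:
                return (volts, amps)          # supply enables this rail
        return None                           # stay on carrier voltage only

wall = Supply({(5.0, 0.5), (12.0, 2.0), (19.0, 3.5)})
laptop = SmartDevice(wants=(19.0, 3.2))
print(laptop.negotiate(wall))   # (19.0, 3.5)
phone = SmartDevice(wants=(5.0, 2.0))
print(phone.negotiate(wall))    # None -- the 5 V rail only sources 0.5 A
```

The passive fallback (the tuned resistor mentioned above) would simply stand in for `offer()` with a single fixed profile.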
Submitted by brad on Mon, 2004-01-19 10:56.
Like many, I am interested in reputation systems, and eBay has built the largest public reputation system. Many have noted how feedback on eBay is overwhelmingly positive — a 97% positive rating would be a reason to be wary of a seller.
It’s also noted that people do this because they are scared of revenge feedback — I give you a negative, you do it back to me. One would think that since the buyer’s only real duty is to send the money that the seller should provide positive feedback immediately upon receipt of that money, but they don’t.
Some fixes have been proposed, including:
- Letting you see the count of total auctions the party has been buyer or seller in, so you can see how many resulted in no feedback at all. Right now only eBay knows how large that number is.
- Double-blind feedback: feedback is not revealed until both parties have entered it, or, if only one party enters it, after the feedback period has expired.
- Marking revenge feedback, i.e. putting a mark next to negatives that were a response to an outgoing negative.
Thus you could have very low fear of revenge feedback and there would be no argument about who should go first.
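The double-blind escrow logic is simple enough to sketch; the class and its window are hypothetical, not anything eBay offers:

```python
# Sketch of the double-blind proposal: feedback is held in escrow and shown
# only once both sides have filed, or once the feedback window has expired.
class DoubleBlindFeedback:
    def __init__(self, window_days=30):
        self.window_days = window_days
        self.filed = {}                       # party -> (rating, comment)

    def file(self, party, rating, comment):
        if party not in ("buyer", "seller"):
            raise ValueError("unknown party")
        self.filed.setdefault(party, (rating, comment))  # first filing sticks

    def visible(self, days_elapsed):
        both_in = len(self.filed) == 2
        expired = days_elapsed >= self.window_days
        return dict(self.filed) if (both_in or expired) else {}

fb = DoubleBlindFeedback()
fb.file("buyer", "negative", "item never arrived")
print(fb.visible(days_elapsed=5))    # {} -- hidden, so no revenge is possible
fb.file("seller", "positive", "fast payment")
print(len(fb.visible(days_elapsed=5)))  # 2 -- both revealed together
```

Because neither side can see the other's rating before committing their own, filing a retaliatory negative is impossible by construction.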
This idea’s fairly obvious, so like many other obvious ideas about eBay, one wonders if eBay doesn’t see some benefit to itself from not doing it, though it’s hard to see what. I’m also curious as to why eBay doesn’t offer a “going, going, gone” auction where the auction closes only after 5 minutes with no bidding. That seems to be in the interests of sellers (and of eBay, which gets a cut of the selling price), and it’s certainly not something they are unaware of.
The only proposition I’ve heard is that eBay has decided there is a positive value to itself (and possibly sellers) from bid-sniping, the practice of bidding preemptively in the last minute of an auction so other live bidders (who didn’t use the automatic rebidder) have no chance to come in with more. The only way this could be good would be if snipers deliberately overbid in order to trump anything. Any research or thoughts on this? It may also be that sniped auctions are more “fun,” or more of a contest. And finally, fixed closing times do make it easier to participate in multiple auctions for the same item.
I have also posted updated eBay thoughts and an even simpler system which eliminates revenge and in fact now have an eBay tag for all eBay related posts, including thoughts on eBay’s solution to all this.
Please Note: This thread is for discussion of philosophical or abstract aspects of the feedback system. Please do not post stories of your own particular problems from a particular seller or transaction. Keep it abstract.
Submitted by brad on Fri, 2004-01-09 10:17.
For some time I have been musing over the design of an ideal home A/V system using digital technology. Sadly it's not coming, in part because it's illegal under the new Broadcast Flag rules.
To read my design of this system, and the musings of the legality of it and why that presents a problem, see a draft on an Ideal A/V digital system