Submitted by brad on Thu, 2005-11-03 12:34.
Everybody is annoyed at how long it takes computers to boot. Some use hibernate mode to save a copy of the system in a booted state, which is one approach. Booting procedures have also gotten better about running stuff in parallel.
How about watching a system as it boots, and noting which disk blocks are read to boot it? Then save that map for the disk defragmenter or other disk organizer, and have it try to rearrange the files needed at boot so they are all contiguous
and close together. This should reduce the role of I/O as a boot bottleneck. Many disks today can read 50 megabytes in a second, most OSs only need to access a few hundred megabytes in order to boot, and machines now have enough RAM that each file needs to be read only once.
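The idea above can be sketched in a few lines. This is a minimal illustration, assuming a trace of "timestamp block-number" lines captured during boot (a real tool would consume blktrace or similar output): deduplicate the blocks in first-read order to produce the layout the defragmenter should aim for.

```python
# Sketch: turn a boot-time block-read trace into a target layout order.
# The one-pair-per-line trace format is an assumption for illustration.

def boot_layout_order(trace_lines):
    """Return unique block numbers in first-read order.

    A defragmenter could place these blocks contiguously, in this
    order, near the start of the disk.
    """
    seen = set()
    order = []
    for line in trace_lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip malformed lines
        block = int(parts[1])
        if block not in seen:
            seen.add(block)
            order.append(block)
    return order

trace = [
    "0.001 700",   # boot loader
    "0.002 701",
    "0.150 52",    # kernel image
    "0.151 700",   # re-read; already counted
    "0.900 9000",  # init scripts
]
print(boot_layout_order(trace))  # -> [700, 701, 52, 9000]
```

Repeated reads collapse to one entry, which is why the "read each file once" property of a RAM-rich system matters: the map stays small.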
Submitted by brad on Mon, 2005-10-24 14:31.
Recently I purchased an external battery for my Thinkpad. The internal batteries were getting weaker, and I also needed something for the 14 hour overseas flights. I picked up a generic one on eBay, a 17 volt battery with about 110 watt-hours, for about $120. It's very small, and only about 1.5 lbs. Very impressive for the money. (When these things first came out they had half the capacity and cost more like $300.)
There are downsides to an external: the laptop doesn't know how much charge is in the battery and doesn't charge it, so you need an external charger. My battery came with its own very tiny charger, which is quite slow (it takes almost a day to recharge from a full discharge). The battery has its own basic gauge built in, however. An external is also not as efficient as an internal: you convert the 17v to the laptop's internal voltage, and you also do wasteful charging of the laptop's internal battery if it is not full. You can remove the internal, at the risk of a sudden cutoff should you reach the end of the external's life.
However, the plus is around 9 to 10 hours of life in a small, cheap package, plus the life of your laptop's internal battery. About all you need for any flight or long day's work.
It's so nice that in fact I think it's a meaningful alternative to the power jacks found on some airlines, usually only in business class. I bought an airline adapter a few years ago for a similar price to this battery, and even when I have flown in business class, half the time the power jack has not been working properly. Some airlines have power in coach but it's rare. And it costs a lot of money for the airlines to fit these 80 watt jacks in lots of seats, especially with all the safety regs on airlines.
I think it might make more sense for airlines to just offer these sorts of batteries, either free or for a cheap rental fee. Cheaper for them and for passengers than the power jacks. (Admittedly the airline adapter I bought has seen much more use as a car and RV adapter.) Of course they do need to offer a few different voltages (most laptops can take a range) but passengers could reserve a battery with their flight reservation to be sure they get the right one.
It would be even cheaper for airlines to offer sealed lead-acid batteries. You can buy (not rent) an SLA with over 200 watt-hours (more than you need for any flight) for under $20! The downside is they are very heavy (17lbs) but if you only have to carry it onto the plane from the gate this may not be a giant barrier.
Of course, what would be great would be a standard power plug on laptops for external batteries, where the laptop could use the power directly, and measure and charge the external. Right now the battery is the first part to fail in a laptop, and as such you want to replace batteries at different times from laptops. This new external should last me into my next laptop if it is a similar voltage.
Submitted by brad on Sun, 2005-09-25 12:05.
In recent times, I and my colleagues at the Foresight Nanotech Institute have moved towards discouraging the idea of self-replicating machines as part of molecular nanotech. Eric Drexler, founder of the institute, described these machines in his seminal work “Engines of Creation,” while also warning about the major dangers that could result from that approach.
Recently, while dining with Ray Kurzweil at the release of his new book The Singularity Is Near: When Humans Transcend Biology, I heard him express the concern that the move away from self-replicating assemblers was largely political, and that they would still be needed as a defence against malevolent self-replicating nanopathogens.
I understand the cynicism here, because the political case is compelling. Self-replicators are frightening, especially to people who get their introduction to them via fiction like Michael Crichton’s “Prey.” But in fact we were frightened of the risks from the start. Self replication is an obvious model to present, both when first thinking about nanomachines, and in showing the parallels between them and living cells, which are of course self-replicating nanomachines.
The movement away from them however, has solid engineering reasons behind it, as well as safety reasons. Life has not always picked the most efficient path to a result, just the one that is sufficient to outcompete the others. In fact, red blood cells are not self-replicating. Instead, the marrow contains the engines that make red blood cells and send them out into the body to do their simple job.
Submitted by brad on Sun, 2005-08-07 00:44.
A lot of older computers that people are ready to throw away can be decent linux boxes, in schools or in other charitable locations.
I propose a simple small program (possibly fitting on a floppy as well as CD) which can be inserted into an old computer. It scans the hardware and compares it with hardware databases of chipsets, cards and other parts which are known to work well under linux (or your favourite BSD or other OS) and to work well together. It would also evaluate the machine and put it in a “performance class” to describe just how good it is. It might connect to the net (if it can) to download the latest such lists and info and software updates.
The goal is to test if the machine can do a problem-free install, one that asks almost no questions, and converts the system to a nice linux box, ready for some student to run for e-mail, web, and writing. There are so many machines to donate that we can insist on perfection. The program could also tell the owner what upgrades it might need to be good or to reach a performance class. “This machine is good but with 128M of ram it would reach performance class N.” “This machine would be perfect if you swapped the ethernet card for one of these models” and so on.
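The evaluation logic might look something like the sketch below. The device names, point values and class formula are all invented for illustration; a real checker would parse lspci output and consult a maintained compatibility database.

```python
# Sketch of the compatibility check, against a hand-made database
# mapping device identifiers to a "works well" flag and a score.

KNOWN_GOOD = {
    "3c905": ("3Com 3c905 ethernet", True, 2),
    "trident": ("Trident video", True, 1),
    "winmodem": ("Software modem", False, 0),
}

def evaluate(detected, ram_mb):
    """Return (installable, performance_class, advice) for a machine."""
    advice = []
    installable = True
    score = 0
    for dev in detected:
        name, good, points = KNOWN_GOOD.get(dev, (dev, False, 0))
        if not good:
            installable = False
            advice.append("replace unsupported device: " + name)
        score += points
    # crude performance class: hardware score plus a RAM bonus
    perf = score + (1 if ram_mb >= 128 else 0)
    if ram_mb < 128:
        advice.append("upgrade to 128M RAM to raise the class")
    return installable, perf, advice

ok, perf, advice = evaluate(["3c905", "trident"], 64)
print(ok, perf, advice)
```

The point of the structure is that the same scan yields both a yes/no install verdict and the concrete upgrade advice ("swap the ethernet card", "add RAM") the post describes.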
Next, of course, is a simple distribution, to install from CD-rom or over the network, that can be quickly installed with no questions asked except perhaps time-zone (if it can’t figure that out from the old OS.) The goal is a system that can be run by untrained admins who may never have seen the insides of linux or any other OS.
Submitted by brad on Thu, 2005-07-14 19:40.
All my sites were off today as I did an emergency switch of servers.
The whole story is amusing, so I’ll tell it. I used to host my web sites with Verio shared hosting, but they were overpriced and did some bad censorship acts, so I was itching to leave. One day my internet connection went out, so I went onto my deck with my laptop to see what free wireless there was in the area. One strong one had an e-mail address as the SSID, though it was WEP-locked. Later, I e-mailed that address with a “hi neighbour” and met the guy around the corner. He had set the SSID that way to get just such a mail as mine. (I have a URL as my SSID now for the same purpose.)
My neighbour, it turned out, knew some people I knew in the biz, and told me about a special club he was in, called “Root Club.” The first rule of Root Club, he joked, was that you do not talk about root club. Now that I’m out, I can tell the story. Root Club was started as a group of sysadmins who shared a powerful colocated web server, and all shared the root password and sysadmin duties.
Submitted by brad on Wed, 2005-06-22 00:00.
Corporate servers have used network storage, ranging from fileservers to SANs, for several years. Now, with USB IDE external drive cases selling for as little as $20, people are using external drives on their PCs, and getting pretty good response with 480 mbit USB 2 or with 1394/firewire. You can get most of the capacity of a 7200 rpm drive over USB 2.
So I want to call for the production of a cheap home external storage box. This box would have slots for 4 or 5 drives and cooling for them, ideally as big a fan as possible to keep the rpms and noise low in the desk model, and an even more powerful fan in the basement model.
The desk model might have sound insulation though that’s hard to combine with good cooling.
While this box could and probably should have USB or 1394, even better would be gigabit ethernet, which is fast enough for most people’s storage needs, especially if there is a dedicated gigabit ethernet card in the PC just for talking to the storage.
This could allow for a radical redesign of PC cases of all types, with no need for the space and heat of drives. And of course these diskless PCs would be much quieter. You could put your disk cube under your desk (and thus have it be a bit quieter) but ideally you would like the basement model, to which you string cat5e cable and get a mostly silent PC.
Submitted by brad on Fri, 2005-05-06 03:55.
On both a personal and professional note, I am happy to report that the federal courts have unanimously ruled to strike down the FCC's broadcast flag (that's a PDF) due to our lawsuit against them.
I participated directly in this lawsuit, filing an affidavit on how, as a builder of a MythTV system and writer of software for MythTV, I would be personally harmed if the flag rule went into effect. The thrust of the case was that the FCC, which is empowered to regulate interstate communications, had no authority to regulate what goes on inside your PC. The court bought that, but we had to show that the actual plaintiffs in the case would be harmed, not simply the general public, thus the declarations by myself and various other members of EFF and other plaintiffs.
The broadcast flag was an insidious rule because, as I like to put it, it didn't prohibit Tivo from making a Tivo (as long as they got it certified as having pledged allegiance to the flag.) It stopped somebody from designing the next Tivo, the metaphorical Tivo, meaning bold new innovation in recording TV.
I would like to particularly thank Public Knowledge, which spearheaded this effort and funded most of it.
Here's an AP Interview with me on the issue.
Submitted by brad on Fri, 2005-04-15 12:45.
Dear [[blog-reader's name]]:
When computers first started calling us by name, in the 60s and 70s, everybody thought it was so cute and clever. Some programs even started by asking for your name, only to print "Hi, Bob!" to seem friendly in some way.
And of course a million companies were sold mailing list management tools to print form letters, filling in the name of the recipient and other attributes in various places to make the letter seem personal. And again, it was cute in its way.
But not any more. We've all figured it out. Nobody says, "Wow, this letter has 'Dear Brad' in it, it must have been written personally for me." Nobody is fooled any more. In fact, the reverse is now true. It's bordering on offensive. If an E-mail starts with "Dear Brad" it is more likely than not to be spam.
Sometimes though, I get form letters from real companies I deal with, and they still like to put my name in it, like they used to on paper. As you probably know, in E-mail today, you don't put in salutations any more unless it's a mail to a stranger.
So let's get the word out. Stop it. No more form letters where the computer oh-so-cleverly manages to fill in a field with our name. (Unless it's amusing, and they are writing to "Dear Mr. Association") If it's legitimate bulk mail, don't try to pretend you're not bulk mail. That's what spammers do. Be honest that you're bulk mail.
If you have actual relevant data to fill in, fill it in, but put it in a table so I can skip the form letter garbage and get to the actual data about me you're trying to tell me. Put my name at the top in a nice computer-style box, "Prepared for: Brad Templeton."
Leave the use of my name to people writing messages for me. You're not fooling anybody.
[[Insert name here]]
Submitted by brad on Tue, 2005-04-12 05:07.
Linux distributions with package managers like apt promise an easy world of installing lots of great software. But they've fallen down in one respect. There are thousands of packages for the major distributions (I run three of them: Debian, Fedora Core and Gentoo) but most packages depend on several other packages.
The developers and packagers tend to run recent, even bleeding-edge versions of their systems. So when they package, the software claims it depends on very recent versions of other programs, even if it doesn't. This is not surprising -- testing on lots of old systems is drudgework nobody relishes doing.
So when you see a new software package you want, the ideal is you can just grab it with apt-get or yum. The reality is you can only do this if you're running a highly up-to-date system. Debian has become the worst offender. Debian's "Stable" distribution is several years old now. To run debian reasonably, even to just be able to upgrade to fix bugs in software you use, you have to run the testing distribution, and most probably the unstable one. I run the unstable, and it's more stable than the name implies, but ordinary users should not be expected to run an unstable distribution.
To get new software, you are often forced to upgrade, sometimes your whole OS. And that's free to do and often it works, but you can't depend on it. More than once I have lost a day of uptime to major upgrade efforts.
Let's contrast that with Windows. The vast majority of Windows programs will install, in their latest version, on 7 year old Windows 98, and almost all will install on 5 year old Windows 2000. This is partly because Windows has fewer milestones to test to, but also because coders know that it's quite a hurdle to insist users pay money to upgrade Windows. (And Windows upgrades are even more of a pain than linux ones.)
The linux approach ends up forcing the user to choose between the risky course of constant incremental upgrades, taking occasional random plunges into major upgrades, or simply not being able to run interesting new software or the latest versions and fixes of older software.
That's a failure. Non-guru users are not able to deal with any of those choices.
Testing with every different version of every dependent package (and every kernel) is not going to happen, but it would be nice if packagers worked hard to figure out what versions of dependencies they really need, even if they can't test them all. Packages might say, "I was tested with 2.1, but I probably work with 1.0." Then, as test reports came in, a package could be marked as verified against earlier and earlier dependencies.
Sometimes, of course, a package truly will need the latest version of a dependency, and should say so. But it sure would make it easier for the ordinary user to participate in linux if this were the exception, not the rule.
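A two-level declaration like that could be checked mechanically. Here is a minimal sketch; the field names (`tested_with`, `believed_min`) are invented for illustration and do not correspond to any real package format:

```python
# Sketch of a two-level dependency declaration: the version the
# packager actually tested against, and an earlier version believed
# (but not verified) to work.

def satisfies(installed, dep):
    """Classify an installed dependency version against a declaration."""
    if installed >= dep["tested_with"]:
        return "tested"
    if installed >= dep["believed_min"]:
        return "probably-ok"   # install it, but flag for a test report
    return "too-old"

dep = {"name": "libfoo", "tested_with": (2, 1), "believed_min": (1, 0)}
print(satisfies((2, 3), dep))   # -> tested
print(satisfies((1, 4), dep))   # -> probably-ok
print(satisfies((0, 9), dep))   # -> too-old
```

The "probably-ok" band is the key: it lets a user on an older system install the package anyway, while generating exactly the test reports that would let the packager lower the verified minimum over time.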
Submitted by brad on Mon, 2005-02-21 12:34.
In writing an essay I'm working on about why hard disk video recorders are as novel as they are, I explored a concept I think is worthy of its own blog entry. This is the concept of Telepathic User Interface or TUI.
A TUI is a user interface that you use so much that it becomes unconscious. Perhaps the classic TUI is the touch-typist's keyboard. I just think letters and they simply come out; I am no longer conscious of the mechanism. In many cases I think sets of letters, and even words, and they just come out. From the mind to the computer -- telepathic.
Other examples include the car. After you drive a car for a while it becomes an extension of yourself. Learning the clutch is hard but soon you are not thinking about it at all. And the remote control on a Tivo, I write in the essay, has aspects of a TUI -- you learn how to move around a program without thinking.
A TUI is not always a natural interface or even a good interface. It's just one you use often enough to make it subconscious. It doesn't have to be intuitive -- an intuitive interface is simply one whose operation is easy to guess.
When it comes to computer software, this helps us understand the dichotomy between the GUI/WIMP style and the command line and keyboard style which still has many devotees.
GUI interfaces are easy to learn, and easy to guess. And of course for positional inputs they are markedly superior and often the only choice. But by and large, the story of Mice and Menus took a path away from the TUI. You have to focus your eyes on the pointer in order to use a GUI, and you have to read to use a menu. It's much more difficult to use such a system unconsciously. (Mouse gesture interfaces change that a bit.)
Fans of text editors like VI and Emacs, with their complex, non-intuitive keyboard interfaces, love them because they have reached TUI state, at least in part. Many of the operations have become unconscious, and thus much faster and easier as far as the user is concerned.
Command line interfaces are never completely TUI, but they take advantage of the TUI nature of touch-typing. Because touch typing maps words from brain to screen, complex commands can have a fair bit of TUI to them.
It is a rare technology that can earn a TUI. You need to be using it a great deal, and regularly. Video games also develop TUIs because of the devotion of their players. And while it doesn't seem to matter how intuitive the interface is once the TUI state is reached, many users will never attain that state with a program, so that's no excuse for not trying to be more intuitive and easy to handle.
On the other hand, programs that don't provide keyboard shortcuts and other muscle-memory schemes for doing things will never develop a TUI, no matter how heavily used they are. Who changes a font in the Excel spreadsheet without being conscious of all the steps they are taking?
Submitted by brad on Thu, 2005-02-17 07:24.
You'll recall an earlier post about the Silicon Valley 100 and getting stuff for free. I promised I had something to say about toilets anyway, so I will describe my experience with the Swash I was given as well as the Daelim Cleanlet which I bought a few years ago.
If you've gone to Japan, you have probably seen these fancy high-tech toilet seats, which try for a bit of bidet function in a seat. Their prime function is to have a heated water reservoir and a little wand that comes out to squirt water at what the Daelim manual calls the "personal area" and the "feminine area." They also tend to heat the seat, and make it descend slowly so it doesn't make a noise when you put it down. Both of these also have the optional feature of fan to blow heated air to dry your personal or feminine area.
I've got these two units, and I have tried various others in Japan. None of them can really compete with the water flow and cleaning ability of a real bidet, but most people don't have the space in their bathrooms for one of those. I was going to suggest the slogan "Every asshole needs one" but I don't think they are likely to use it.
These bidet-seats are about the only high-tech toilet invention to get a decent market, which is surprising because if you ask the patent office, toilet inventions are among the most common patent applications. I guess people spend a lot of time on toilets with nothing else to think about.
Submitted by brad on Mon, 2005-01-24 09:21.
In spite of all sorts of efforts, I remain amazed at how many cables still go in and out of my PC. My home theatre PC, which I recently wanted to take somewhere, had me unplugging power, ether, audio, digital audio/SPDIF, keyboard, mouse, cable in, video out and a serial cable providing PPP to the old Tivo. It could easily have had another video, USB devices like a printer and more.
How about 2 wires into the next generation PC, or failing that 3. Power (no way around that yet) and 10 gigabit optical fiber. Ok, so we're not quite ready to run our HDTV video display (which needs over 3 gigabits for 2MP) on the ethernet, though we could quite often get away with it for everything but gaming if the display device had an X server with video decoders in it. So let's accept the 3rd cable as the video cable.
We made a mistake going to dedicated protocol wires like usb and firewire. Hard to say it's a mistake since it's so much better than what we had before, but I think IP is better. Instead, we could have built small hub boxes that have the power and the ethernet (gigabit now), into which small peripherals that need power like keyboards, mice and such would be plugged. Of course printers and other devices that already have their own external power would just need the ethernet.
Or, to extend an idea I pushed last year in the blog, a universal DC power system would be developed where data was exchanged (on minimal 5v power) to tell the power supply what to provide before the full power came on. Then you would buy blocks with the data and more sophisticated and powerful switching supplies which could run the devices we currently have 20 bricks and wallwarts to power -- routers, scanners, phones, external drives etc.
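The negotiation step could be very simple. Below is a toy model of the idea: the device announces its needs over the minimal 5v link, and the supply enables full power only if it can meet them. The message fields, voltage set and power budget are all assumptions for illustration, not any real standard:

```python
# Toy model of the universal-DC negotiation: request first, power later.

SUPPLY_MAX_WATTS = 90
SUPPLY_VOLTAGES = {5, 12, 19}

def negotiate(request):
    """Decide whether the supply can serve a device's announced needs."""
    volts = request["volts"]
    watts = volts * request["amps"]
    if volts not in SUPPLY_VOLTAGES:
        return {"ok": False, "reason": "unsupported voltage"}
    if watts > SUPPLY_MAX_WATTS:
        return {"ok": False, "reason": "over power budget"}
    return {"ok": True, "volts": volts, "watts": watts}

print(negotiate({"volts": 19, "amps": 3}))   # laptop-like load: granted
print(negotiate({"volts": 17, "amps": 2}))   # voltage not offered: refused
```

The design point is that nothing beyond the safe 5v data link is energized until the exchange succeeds, which is what lets one smart brick replace twenty dumb wallwarts.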
Of course, where it made sense we could even drop the ether part and have wireless, though we still need the power of course except for the lowest power intermittent devices that can have batteries.
It's amazing how many wires snake out of my desk, and even more out the back of my audio/video shelf. Sure would be nice if it could be a lot fewer.
Submitted by brad on Sun, 2004-11-14 14:13.
There are a number of Linux "Live CD" distributions out there. These allow you to boot Linux from a CD and run it (somewhat slowly) without ever touching the hard disk in the machine. (They can access the disk, however, which makes them good for system repair, both for Linux and Windows.) One popular one is Knoppix, and Mandrake makes one called MandrakeMove, which takes the important next step of letting you store your personal config choices on a USB thumbdrive or floppy. There are also distributions that can fit on a thumbdrive (after all, those drives are getting quite large for little money, though this is recent enough that there hasn't been as much focus on it).
Let me suggest where I would like this trend to continue. It's great to be able to take any machine and quickly convert it to your style and environment with a CD, or even better a business-card CD or thumbdrive. (Most systems can boot from a CD, fewer from a thumbdrive, most from a floppy leading to another device.) Storing some state on a floppy, thumbdrive, or CD-R session -- preferences, home directory files and scripts, browser config and bookmarks -- is a must. Indeed, if the tools let you build a custom CD just for you with your choice of packages, you can bring in much of your whole working environment.
I haven't seen anybody provide automatic storage on the net, based on the assumption the machine you take over probably has an ethernet card. If it does, it would be great to go out and suck down your latest personal changes and files, starting with the most important to get you going, and bringing the rest in the background. This doesn't need a special server, though the group making this distribution might well offer to do so. You could keep and update much of this data in a special mailbox message or mailbox folder, especially with IMAP. Anybody can get access to that. (Or a web mail tool like GMAIL.) Of course if you have actual hosting this can also be used. The data would be encrypted, you would need a password -- not just your mail password -- to use it.
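The mailbox-as-storage trick is easy to picture: the profile lives as an ordinary message that any IMAP or webmail account can hold. Here is a minimal sketch of building such a message with Python's standard email library; the subject-line marker and filename are invented, and real use would encrypt the payload and upload the message via imaplib, both omitted here:

```python
# Sketch: package a portable environment profile as a MIME message
# suitable for stashing in any mailbox the boot CD can reach.

from email.message import EmailMessage

def profile_message(blob: bytes) -> EmailMessage:
    msg = EmailMessage()
    msg["Subject"] = "live-cd-profile"   # marker the boot CD searches for
    msg.set_content("Portable environment profile; do not delete.")
    msg.add_attachment(blob, maintype="application",
                       subtype="octet-stream", filename="profile.tar.enc")
    return msg

msg = profile_message(b"...dotfiles and bookmarks, encrypted...")
print(msg["Subject"], msg.is_multipart())
```

On the booting machine, the live CD would search the mailbox for that subject, pull the attachment, and decrypt it with the separate password the post calls for.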
As you changed the data it would be updated to the net storage. Now you could go to any machine with a non-customized CD. Indeed, you could even, on a common fast machine, download a minimal environment (perhaps 60 megabytes which is just a few minutes on a fast broadband link) and after it boots, get the custom information including which other packages are important.
The key is to store things in the Windows filesystem, which is most likely what you will find on the machine where you are the guest.
Submitted by brad on Mon, 2004-09-13 11:53.
We all would love solar power to work better, but it's hard to have it make economic sense yet, at least if you're near the grid. A solar panel takes 4 years just to give back the energy it took to build it, and it never pays back the money put in if you compare it to putting the money into the stock market. And that's with full utilization. If you use panels and batteries, any time your batteries are near full the power is being discarded, and you also have to replace your batteries every so often and dispose of the old lead-filled ones. Yuk. A grid-tie can use all the power of a panel but that's an expensive, whole-house thing.
But here's a start -- a solar-using PC power supply. My PCs, like many folks', are on all day, including the peak-demand heat of the day. Desktops draw anywhere from 50 to 200 watts even when idling.
So make a PC power supply that has 3 external connections. One for the wall plug. And two optional ones, one for a 12v solar panel and one for a battery. Then sell it with a 50w or 100w solar panel -- most importantly, the panel should not ever generate more power than the PC uses.
Because of that, during the bright part of the day, the panel will be providing most, or just barely all, of the power for the PC. The wall plug will provide the rest. At night, the wall plug would provide all the power. It's a grid-tie but it doesn't feed power back to the grid, it just reduces demand on it. The 100w panel takes 100w off the grid load during the peak demand times. And we use every watt the panel generates, we never throw any away.
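The arithmetic behind "never feed back, never throw away" is just a max() at each moment: the supply draws from the wall only what the panel can't provide. The hour-by-hour panel numbers below are illustrative, not measured:

```python
# Model of the blended supply: wall power makes up whatever the panel
# can't cover, and nothing is ever exported or discarded.

def grid_draw(pc_load_w, panel_w):
    """Watts taken from the wall; never negative, since the panel is
    sized below the PC's draw."""
    return max(0.0, pc_load_w - panel_w)

# A 100W PC over a sunny day, panel output varying hour by hour:
panel_by_hour = [0, 0, 20, 60, 95, 100, 95, 60, 20, 0]
pc_load = 100.0
from_grid = sum(grid_draw(pc_load, p) for p in panel_by_hour)
from_panel = pc_load * len(panel_by_hour) - from_grid
print(from_grid, from_panel)   # -> 550.0 450.0
```

In this sample day the panel's 450 watt-hours all go into the PC, and all of them come off the grid during exactly the peak-demand hours.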
Submitted by brad on Tue, 2004-08-10 12:44.
I've been a longtime user of the Tivo, and when my mother got an HDTV, I pushed her to get a PVR. In Canada, the only really workable option for her was to rent the HD-8000 HD PVR from Rogers, her cable company. No Tivo service in Canada, and she wasn't ready for a PC based PVR (And HD ones are still immature.)
Two things I learned from the process. The first was how amazed I was at how badly the HD-8000 was designed. It strikes me as a first generation unit, not something that was designed after people looked at the Tivo and the Replay. Trying to watch a show in the middle of recording it is possible, but really cumbersome. It's very easy to lose your buffer on a live program you were watching, or to lose your place in a recorded program you were watching. Browsing shows is guide-based, requiring you to browse only a particular day at a time. I could go on.
The other remarkable thing was seeing my low-tech mother's reaction. In spite of all I tell her about the PVR, she still wants to watch TV live most of the time. As a retiree and caregiver, she's home most of the time, and while she intellectually understands what the box does, her habits are so-long set that she really doesn't "get" it.
Which may explain the poor UI on the HD-8000. They don't expect their users to get it either. They expect their users to see it as a fancy VCR, with the ability to pause live TV. (Tivo owners learn that pausing live TV is more of a gimmick feature, in that you almost never watch live TV.)
Watching the recorded HD does make me jealous, though. HD PVR choices here are limited. You can get DirecTV's HD-Tivo for $1000, or build a MythTV box for a similar amount of money. It is the need for the PVR that has stopped me from getting HDTV, which otherwise I want very much.
But my Mother doesn't remember that when called on the phone, she can pause it. Or that you should always record a show you see that you want to watch, to give you the freedom to switch from it and come back later without risk. She is happy with her old habit of switching channels when a commercial comes on, and coming back to the other show later, presumably missing some of it. She is even happy watching low def live, when PVRed hi-def is a few steps away. My mother helps me remember that all users are not like me, which is good.
Submitted by brad on Sat, 2004-08-07 06:18.
Today, for the 2nd time, I lost a wireless access point in the process of putting new firmware into it. The new firmware apparently has some problems, but that's to be expected as a risk.
I've only rarely seen it done, but the right thing to do is to have a ROM, or a small un-writable section of the flash, that contains a fully tested, minimalist firmware accepter. That way, no matter what you do to the firmware, there is some way to get the old stuff back in, through some use of physical switches. Instead, I now have to send this thing back for warranty repair over something that I should be able to fix here.
Other than that, the WRT54G is a fine wireless access point, precisely because the firmware is open source and you can get fancy extra features from other folks. But because this means more updating, there should be an escape hatch.
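The escape-hatch logic is tiny, which is what makes it cheap to get right once and never touch again. Here is a model of the decision the protected loader would make; a real device would use a CRC checked in early boot code rather than Python, and sha256 stands in for whatever integrity check the hardware provides:

```python
# Model of the failsafe boot choice: verify the writable firmware
# image, and fall back to the untouched minimal ROM image if corrupt.

import hashlib

def choose_image(flash_image, flash_digest, rom_image):
    """Return the image to boot: flash if its digest checks out,
    otherwise the ROM fallback that can accept new firmware."""
    if hashlib.sha256(flash_image).hexdigest() == flash_digest:
        return flash_image
    return rom_image

rom = b"minimal-recovery-firmware"
good = b"full-firmware-v2"
print(choose_image(good, hashlib.sha256(good).hexdigest(), rom) is good)
print(choose_image(b"corrupt!", hashlib.sha256(good).hexdigest(), rom) is rom)
```

A botched flash then costs you a reboot into the recovery accepter, not a warranty return.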
Submitted by brad on Wed, 2004-07-28 10:36.
I called earlier for ideas for uses of ad-hoc wireless car data networks (with 802.11 or similar). I've been having trouble finding any compelling ones, because I think the space is narrow, especially for the driver. I don't see much data you will want that only other cars around you will have. It has to be fresh, live data (otherwise your car would have loaded it while parked), it has to be giant data (otherwise you would pick it up over the 3G or 4G cellular networks at lower data rates), and it can't suffer from both the connectivity and the data availability being intermittent and random in nature.
However, seeing the Dresner paper on a Reservation-Based Intersection Control Mechanism (with cool simulations) made me wonder if we might be able to get something sooner.
People might be too scared of the technology to handle a high-volume intersection but what about a low volume one, such as a 4-way stop? In particular, what if we have to assume many cars don't have a network?
A networked 4-way stop would have a network node broadcasting its existence and state. If the node at the intersection were down, it would act like an ordinary 4-way stop. Networked cars approaching the intersection would broker travel through it. (They would all have GPS, 802.11 and the node at the intersection would have a map.)
If a car were given access, red lights on the stop signs would light up. (Their power needs are much less than a traffic light's, possibly even solar.) The sign facing the cleared car would light yellow. The cleared driver would get a signal (audio and visible) inside the car that they are cleared.
Drivers seeing the red light would stop (network enabled or not) and wait for the light to go off after the cleared cars go through. Drivers seeing the yellow light who are not the cleared car (and thus not a networked car) would stop and proceed through the intersection like a normal 4-way stop.
The cleared driver would approach the intersection at reduced speed and check for drivers stopped at the other signs. If there were none, she would move through the intersection without stopping. If some were present, the display would say which approaches had networked cars. If all were networked, the driver would proceed. If some were not networked, the driver would proceed with more caution (perhaps a 5mph rolling stop, ready to full-stop if needed), or speak a command or push a button to enhance the stop signal for the non-networked cars.
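The brokering rule for the mixed case can be modeled in a few lines. This is a toy sketch, not the actual protocol; the queue structure and field names are invented, and a real node would also handle timeouts, GPS position and simultaneous arrivals:

```python
# Toy model of the mixed 4-way stop: the node clears one networked car
# at a time and tells it whether every other waiting car is networked.

def clear_next(waiting):
    """waiting: list of dicts with 'car' and 'networked' keys, in
    arrival order. Returns (cleared_car, advice) or (None, None)."""
    networked = [w for w in waiting if w["networked"]]
    if not networked:
        return None, None        # behaves as an ordinary 4-way stop
    cleared = networked[0]["car"]
    others = [w for w in waiting if w["car"] != cleared]
    if all(w["networked"] for w in others):
        advice = "proceed"        # everyone else will see a red light
    else:
        advice = "rolling-stop"   # non-networked cars may not yield
    return cleared, advice

queue = [
    {"car": "A", "networked": True},
    {"car": "B", "networked": False},
]
print(clear_next(queue))   # -> ('A', 'rolling-stop')
```

The two advice values map directly onto the behaviours above: full-speed passage when every other car is under the node's control, and the cautious rolling stop when legacy cars are present.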
Submitted by brad on Fri, 2004-06-25 11:20.
A couple of years ago, a series of digital picture frame products appeared. Some took memory cards. One plugged into a modem so grandma could get new grandchild pictures each day without doing anything. But they were all super low resolution and high priced.
Panels have come down a lot in price recently. I see wall-mountable 1280x1024 panels getting down to about $350 (though you still need power). That's a resolution I could handle.
How about throwing picture frame ability into these? Either the memory card slot as before, or perhaps 802.11? In the latter case, you could even tolerate not bothering with jpg decompression or much else on the panel, let the PC do it all over the network.
For a few extra bucks, however, a wireless, wall-mount, high-res flat panel display is something I can see people buying many of. Give them a full X server or mini-media server so you can stream mpeg video at them, and I could see a raft of applications as home display and control devices.
They could show you TV, your doorway security cam when the door rings, your caller-ID when the phone rings, weather, traffic, you name it, and be a digital picture frame when nothing else is going on.
Throw in an infrared receiver and they could work with remote controls.
Of course you could also make a mini box that has all this and a VGA output. They do make such boxes with TV output to be media servers connected to your TV and stereo. Has anybody seen one designed to mount flat on the wall behind a flat panel display?
All signs suggest this product could be under $400 soon, then under $200, at which point you would see a lot of people buying one for each room. Right now 1280x1024 seems the hi-res sweet spot, though in fact 1280x854 or of course 1536x1024, to get a photographic aspect ratio, would be even nicer.
Maybe not for grandma's baby pictures yet, but who knows? If grandma has DSL, you could buy her one of these, and a cheap wireless access point even though she doesn't have any other wireless equipment, and with proper security, let the pictures and display be controlled by you or a photo managing service.
Submitted by brad on Sat, 2004-05-22 09:04.
The new generation of WiFi equipment supports WPA (WiFi Protected Access), a version of the IETF's EAP protocol, which provides superior key authentication with different keys for each user, keys that are much harder to crack. In corporate networks, the keys can be fetched via RADIUS -- effectively allowing a single login password to provide all network access securely.
That's great, but not enough has been done, that I have seen, to make a good user interface for the home network. I set up family members' wireless networks with WEP keys and it's a pain even for a skilled person. When a person visits my house and wants wireless access I need to key in a 32-byte hex string.
For home networks, how about a nice simple protocol. When a new device attempts to connect to the network, note that. Then let the user go to the web configuration page for their access point. There it will list the new devices that have tried to get on the net. There will probably be only one. If the user clicks to approve it, transmit the WEP key back to that new device (encrypted with a public key the device provided) so it can now join the network. Possibly with reduced permissions, but that's a bonus.
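That approve-and-deliver flow can be sketched in a few lines. This is a toy: the `seal` function below is a placeholder for real public-key encryption (the AP would encrypt the WEP key with the public key the device sent), and the class and method names are all invented for illustration:

```python
# Toy sketch of the home AP key-provisioning flow described above.

class AccessPoint:
    def __init__(self, wep_key):
        self.wep_key = wep_key
        self.pending = {}      # device id -> public key it provided
        self.approved = {}

    def request_join(self, device_id, device_pubkey):
        # A new device announces itself; it shows up on the
        # AP's web configuration page as pending.
        self.pending[device_id] = device_pubkey

    def approve(self, device_id):
        # The owner clicks "approve" on the web page. Send back the
        # WEP key, sealed so only the requesting device can read it.
        pubkey = self.pending.pop(device_id)
        self.approved[device_id] = True
        return seal(self.wep_key, pubkey)

def seal(secret, pubkey):
    # Stand-in for public-key encryption (XOR, so unsealing with the
    # same key bytes recovers the secret in this toy).
    return bytes(a ^ b for a, b in zip(secret, pubkey))
```

The point is the shape of the exchange: the device never sees the key until a human has approved it, and eavesdroppers never see the key in the clear.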
The main goal is plug and play (or near to it) joining of the encrypted network in the ordinary home. If there are multiple APs, they can share the key with WPA or other protocols. Or frankly, it's not even a giant burden to have to confirm the new user to all the APs, since most homes don't have more than one. (Mine does, I can't get the signal to go from one corner of my house to the other.)
Want to make it even easier for the unskilled home user? Put a button on the access point. Push it, then have the new laptop ask for a key. A light will go on if one and only one device asked for access, and the laptop will confirm it. Then push the button again and the laptop gets a permanent key for access then and in the future. Of course a web interface is cheaper than a button and clearer, but this is dirt simple. If two devices try to get access, you get an error and have to try again or go to the web interface, but this would be rare and a sign that perhaps somebody was trying to sneak in.
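The button flow is really a tiny state machine, which might look something like this (state names and the API are invented for illustration):

```python
# Hypothetical state machine for button-based pairing on the AP.

class PairButton:
    def __init__(self):
        self.state = "idle"
        self.requests = []

    def press(self):
        if self.state == "idle":
            # First press: start listening for exactly one device.
            self.state = "listening"
            self.requests = []
            return ("listening", None)
        # Second press: grant a key only if exactly one device asked.
        self.state = "idle"
        if len(self.requests) == 1:
            return ("key-granted", self.requests[0])
        return ("error", None)   # zero or 2+ requests: try again

    def device_request(self, device_id):
        # A device asking for access only counts while listening.
        if self.state == "listening":
            self.requests.append(device_id)
```

The two-or-more case falling through to an error, rather than picking one device, is what makes the second request a visible sign of a possible intruder.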
Submitted by brad on Mon, 2004-05-17 07:13.
When SIP was designed for internet telephony, the feeling was to get rid of the phone number and replace it with IDs with the form of email addresses. E-mail addresses are of course easier to remember and read, though as a downside they tie your address to a domain, which is fine if it's yours, but silly if it's your service provider's.
However, to much surprise, handsets with numeric keypads not only continue to dominate the phone, but their use is growing. So much so that complex "texting" systems have been designed, and come with phones, to let people enter text messages with the keypad.
In addition, popular IP phones feature not full keyboards but traditional keypads, even though they have room. Mobile phones largely won't have keyboards due to size constraints. As a result, IP phone users are turning to services like Free World Dialup and SipPhone so they can have phone numbers again, the very thing we wanted to get rid of.
There is another ancient system involving phone numbers based on the letters Bell put on the keypad, starting with Pennsylvania 6-5000 and moving on to numbers like 1-800-FLOWERS.
Of course there are other answers to dialing -- menus, speech interfaces and so on. But if dialpads are with us for a while longer, does it make sense to rethink the system of finding words to spell out phone numbers?
If we use the existing system (with perhaps some minor mods) we could get a wide selection of spellable words by having longer numbers. No reason you can't have multiple numbers -- a "normal" 7 (or 10) digit number and then a longer number that is easier to remember but harder to key because it's longer. Thus I could probably have "BRADTEMPLETON" 2723-836753866 as a phone number, as well as my regular 7 digit number for use in systems that can't handle long numbers. Cell phones of course can easily have the length of numbers extended, but even ordinary phones can do this easily with a * or # code.
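The classic keypad letter mapping makes this easy to compute. A small sketch (the function name is mine; the mapping is the standard Bell keypad, with digits and punctuation passed through unchanged):

```python
# Standard phone keypad letter groups.
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}
LETTER_TO_DIGIT = {letter: digit
                   for digit, letters in KEYPAD.items()
                   for letter in letters}

def word_to_digits(word):
    """Map a mnemonic like BRADTEMPLETON to the digits a caller keys,
    leaving real digits and punctuation alone."""
    return "".join(LETTER_TO_DIGIT.get(ch, ch) for ch in word.upper())
```

Running this on "BRADTEMPLETON" yields the 2723-836753866 number mentioned above.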
Of course the spell-a-word system has name collisions, so not everybody can get their preferred choice of name, but everybody could have an easy-to-remember string, I would venture. (As with domain names.)