Technology

Why isn't my cell phone a bluetooth GPS

GPS receivers with bluetooth are growing in popularity, and it makes sense. I want my digital camera to have bluetooth as well so it can record where each picture is taken.

But as I was driving from the airport last night, I realized that my cell phone has location awareness in it (for dialing 911 and location-aware apps), my laptop has bluetooth in it, and the laptop runs mapping software when connected to a GPS — so why couldn’t my cell phone be talking to my laptop to give it my location for the mapping software? Or indeed, why won’t it tell a digital camera that info as well?

Are people making cell phones that can be told to transmit their position to a local device that wants such data?

Update: My Sprint Mogul, whose GPS is enabled by the latest firmware update, is able to act as a bluetooth GPS using a free GPS2Blue program.
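For those wondering what a bluetooth GPS actually feeds the mapping software: it is normally just NMEA text sentences streamed over a bluetooth serial link, which any mapping program that can open a serial port can consume. Here is a minimal C sketch of pulling latitude and longitude out of one $GPGGA sentence; the field layout is written from memory, so treat it as illustrative rather than authoritative:

```c
/* Minimal sketch: pull latitude and longitude out of an NMEA $GPGGA
 * sentence, the kind of line a bluetooth GPS streams to mapping software.
 * Field layout written from memory; illustrative, not a full NMEA parser. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Convert NMEA "ddmm.mmmm" plus a hemisphere letter into decimal degrees. */
static double nmea_to_degrees(const char *field, char hemisphere)
{
    double raw = atof(field);            /* e.g. 4807.038 = 48 deg 07.038 min */
    int degrees = (int)(raw / 100.0);
    double minutes = raw - degrees * 100.0;
    double result = degrees + minutes / 60.0;
    if (hemisphere == 'S' || hemisphere == 'W')
        result = -result;
    return result;
}

int main(void)
{
    /* A real program would read these lines from the bluetooth serial port. */
    char line[] = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47";
    char *fields[16];
    int n = 0;

    /* strtok collapses empty fields, but the ones we use come before any. */
    for (char *p = strtok(line, ","); p != NULL && n < 16; p = strtok(NULL, ","))
        fields[n++] = p;

    if (n > 5 && strcmp(fields[0], "$GPGGA") == 0) {
        double lat = nmea_to_degrees(fields[2], fields[3][0]);
        double lon = nmea_to_degrees(fields[4], fields[5][0]);
        printf("lat %.5f, lon %.5f\n", lat, lon);
    }
    return 0;
}
```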

Have the OS give user permissions on "privileged" IP ports.

Very technical post here. Among the children of Unix (Linux/BSDs/MacOS) there is a convention that for a program to open a TCP or UDP port from 0 to 1023, it must have superuser permission. The idea is that these ports are privileged, and you don’t want just any random program taking control of such a port and pretending to be (or blocking out) a system service like Email or DNS or the web.

This makes sense, but the result is that all programs that provide such services have to start their lives as the all-powerful superuser, which is a security threat of its own. Many programs get superuser powers just so they can open their network port, and then discard the powers. This is not good security design.
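For context, the workaround looks roughly like this today. A minimal sketch, with error handling trimmed; the numeric uid/gid (65534, often "nobody") is a placeholder for whatever unprivileged user a real daemon would use:

```c
/* Sketch of the common workaround: start as root, bind the privileged
 * port, then permanently drop to an unprivileged user. Error handling is
 * trimmed and the numeric uid/gid (65534, often "nobody") is a placeholder. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(53);        /* privileged port: binding needs root */

    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* Throw away the superuser powers we only needed for that one call.
     * Drop the group first, then the user. */
    if (setgid(65534) < 0 || setuid(65534) < 0) {
        perror("drop privileges");
        return 1;
    }

    listen(sock, 16);
    printf("listening on port 53 as an unprivileged user\n");
    /* ... accept() loop would go here ... */
    return 0;
}
```

Under the proposal below, a daemon like this could skip the root step entirely and simply start life as the user that owns its port.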

While capability-based security (where the dispatcher that runs programs gives them capability handles for all the activities they need to do) would be much better, that’s not an option here yet.

I propose a simple ability to “chown” ports (ie. give ownership and control like a file) to specific Unix users or groups. For example, if there is a “named” user that manages the DNS name daemon, give ownership of the DNS port (53) to that user. Then a program running as that user could open that port, and nobody else except root (superuser) could do so. You could also open some ports to any user, if you wanted.

Let's see neighbourhood fiber lan

The phone companies failed at the fiber to the curb promise in most of the USA and many other places. (I have had fiber to the curb at my house since 1992 but all it provides is Comcast cable.)

But fiber is cheap now, and getting cheaper, and unlike wires it presents no electrical dangers. I propose a market in gear for neighbourhoods setting up a fast NLAN (neighbourhood LAN) by running a small fiber bundle through their backyards (or, in urban row housing, possibly over their roofs.) Small fiber conduits could be buried in soil more easily than watering hoses, or run along fences. Then both ends, meeting the larger street or another NLAN, could join up for super-high connectivity.

I would join both ends because then a break in this amateur-installed line doesn’t shut it down. The other end need not be at super-speed, just enough so phones work etc. until a temporary above-ground patch can be run across the break.

Of course, you would need the consent of all the people on the block (though at the back property line you only need the consent of one of the two sides at any given point.) Municipal regulations could also give neighbours access to the poles, though they would probably have to pay a licensed installer.

An additional product to sell would be a neighbourhood server kit, to provide offsite backup for members and video storage. Depending on legal changes, it could be possible to have a block cable company handling the over-the-air DTV stations, saving the need to put up antennas. Deals could be cut with the satellite companies to place a single dish with fancy digital decoder in one house. The cable companies would hate this but the satellite companies might love it.

Of course there does need to be something to connect to at the end of the street for most of these apps, though not all of them. After all, fiber is not that much better than a bundle of copper wires over the short haul of a neighbourhood. But if there were a market, I bet it would come, either with fiber down main streets, fixed wireless or aggregated copper.

Flat panel monitors that interlock on thin edges

Some flat panel displays being made today have modestly thin edges, and people like using them for multi-monitor systems with a single desktop that spans two or more monitors.

I suggest a monitor design where the edge moulding on the monitor can come off and, with care, be replaced by a special interlock unit. The interlock would join two monitors together strongly and protect the LCD panels while bringing the two panels as close together as possible. Most of the strength would be on the back; on the front, the cover would just be a thin but strong strip, in a choice of colours, covering only the small gap between the monitors.

The result would be a good way to make display walls, and of course big multi-monitor displays. Dell is now selling a 2560 x 1600 monitor for $2100 that is very tempting, but two 1600 x 1200s, for similar screen real estate, can now be had for under $1000, and they don’t require a new $300 video card to boot. Four 1280x1024 displays, though smaller at 17”, can be had for under $1000 and offer even more screen real estate with two dual-head video cards (which cost under $50). Though with 4 screens people don’t necessarily want them so flat any more. However, a 2x2 grid of 17” displays at $1000 would attract customers if the lines between them were small.

Of course, in time that lovely 4MP display will get cheaper, and an even better one will come along. I am tempted by the 4MP because that’s half the pixels of my 8MP digital camera, and I could finally see some of my images at at least half-res without having to print them. But other than for that, multi-monitor is just fine.

Of course if you use multi-monitors, be sure to visit my panoramic photography pages for super-wide photos you can use as wallpapers on such setups. Regular blog readers can ask me nice and I’ll get you an image 1024 or 1200 high if available.

Hybrid Languages

There are a lot of popular programming languages out there, each popular for being good at a particular thing. The C family languages are fastest and have a giant legacy. Perl is a favoured choice for text manipulation. Today's darling is Ruby, leader of the agile movement. Python is a cleaner, high-level language. PHP aims to be the quick web/HTML scripting language and has simpler access to SQL databases than most. Java's a common choice for large projects, with lots of class libraries; it's slower than C but faster than interpreted languages.

However, my goal here is not to debate the merits of these languages, which are only barely summed up above (and no doubt incorrectly to some perceptions.) My goal is to point out that we all love our different languages for different purposes. And more to the point, one of the reasons we love a particular language is that we *know it*. In many cases we might decide we could more quickly solve a problem in a language we know well, even though another language might be better suited overall.

Sometimes I'm sitting coding in one of the more concrete languages, like C or Java, and I think to myself, "This problem would be 2 lines in Perl." It would probably be slower, and perl would not be a suitable choice for the whole project, so I spend the time to solve the problem in the language I'm coding.

Many of the languages have mechanisms to deal with foreign or "native" methods, ie. to deal with objects or functions from another language. Most of these systems are clunky. You would not use them for 3 lines of code, nor would the result be particularly readable.
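To show how clunky, here is roughly the boilerplate the perlembed interface requires just to run one line of perl from a C program. This is reconstructed from memory of the perlembed documentation, so treat it as a sketch rather than a reference:

```c
/* Roughly the boilerplate the perlembed interface requires just to run one
 * line of perl from C. Hardly something you would bother with for a
 * three-line text-munging job. Build flags come from ExtUtils::Embed
 * (see the perlembed manual page). */
#include <EXTERN.h>
#include <perl.h>

static PerlInterpreter *my_perl;

int main(int argc, char **argv, char **env)
{
    char *embedding[] = { "", "-e", "0" };

    PERL_SYS_INIT3(&argc, &argv, &env);
    my_perl = perl_alloc();
    perl_construct(my_perl);
    perl_parse(my_perl, NULL, 3, embedding, NULL);
    perl_run(my_perl);

    /* The "two lines of perl" we actually wanted: */
    eval_pv("print join(',', sort split ' ', 'pear apple banana'), \"\\n\";", TRUE);

    perl_destruct(my_perl);
    perl_free(my_perl);
    PERL_SYS_TERM();
    return 0;
}
```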

So I propose being able to "switch languages" in the middle of a piece of code. You're programming in C, and suddenly you break out into perl to do something you immediately know how to do in perl. You get access to the core data types of the original language, and as many of the complex ones as can be made simple. If you need to get real in-depth access to the complex data types of the other language, go back to its foreign methods interface and write a remote function.

Read on...

Combining traffic light control and wireless mesh networking

Here's an idea I had years ago and tried to promote to some of the earliest wireless companies, such as Metricom, without success. I just posted it on Dave Farber's IP list, so I should write it up again for my own blog...

The idea is a win-win situation for wireless service and municipalities. Combine wireless data service with traffic light control. Offer a wireless mesh company the use of a city's traffic light poles -- which provide a nice high spot at every major intersection in town, with power available -- in exchange for using that network for traffic control. Indeed, I think this space is so valuable to the wireless companies that they should probably buy traffic control software and offer it free to the cities.

The bandwidth for light control is of course trivial. One could also support traffic cams (though hopefully not universal surveillance cams) to help provide dynamic adjustments to the traffic system.

Today, full-bore automatic traffic lights are expensive -- $150,000 in many cases. That's because of the need to bring in safety-equipment grade power, and to dig up the road to lay down vehicle sensors, as well as data of course. That's changing. New lights use LEDs and thus a fair bit less power. (Some cities have realized that the LED switch pays for itself very quickly.) I think car sensor tech is changing too, and especially with a large market, either LIDAR or CCD cameras with automatic recognition should be capable of good traffic detection without digging up the road.

So it's a win all around. Cities get better traffic flow (and less gas is burned) and wireless networks sprout everywhere to compete with the monopoly cable/ILEC crew.

For places where a full street light is too expensive, I have also suggested the [wireless brokered 4-way stop](/archives/000118.html) as an alternative.

Rethinking household/office power, beyond 60hz

I’ve written before about the desire for a new universal dc power standard. Now I want to rethink our systems of household and office power.

These systems range from 100v to 240v, typically at 50 or 60hz. But very little that we plug in these days inherently wants that sort of power. Most devices quickly convert it to something else. DC devices use linear and switched-mode power supplies to generate lower-voltage DC. Fluorescent lights convert to high-voltage AC. Incandescent bulbs and heating elements use the voltage directly, but can be designed for any voltage and care little about the frequency. There are a dwindling number of direct 60hz AC motors in use in the home. In the old days clocks counted the cycles but that’s very rare now.

On top of that, most of what we plug in uses only modest power. The most commonly plugged-in things in my house are small power supplies using a few watts. Most consumer electronics draw in the 50-200w range. A few items, such as power tools, major appliances, cooking appliances, heaters, vacuum cleaners and hairdryers use the full 1000 to 1800 watts a plug can provide.

So with this in mind, how might we redesign household and office power…

Cool Walls

On the wall now near desks are plates with power and ethernet (and phone until VoIP takes over.) I’ve been wondering if we shouldn’t add another jack — air, and plumb our walls with pipes to move air for cooling electronic devices.

This idea started by reading about a guy who attached a plastic vent hose from the output of his PC fan to a hole he cut in his wall. This directs much of the heat and some of the noise into the wall and up to the attic.

I started wondering, shouldn’t we deliberately plumb our houses to cool our devices? And even more, our office buildings? And can we put the blowers at the other end of the pipes, to move the noise away from our devices? How much would we save on air conditioning?

Read on…

Banks, let me enumerate the line items in my deposits, or let me deposit at home.

At my bank (Wells Fargo) and some others I have checked, the ATM lets you make a deposit with an envelope. You must key in the total amount being deposited, even if you put several cheques in the envelope. This in turn shows up as just one transaction in my statement, and in my download of my transactions to my computer.

That’s not what I want, of course. I want to see the different deposits split out individually. The bank certainly splits them out in any event to send each cheque out to the bank that will honour it. Why not have me start the process? It might also assure more accurate addition of the amounts.

Of course, this would take a little more time at the ATM, but a lot less time than what I do now — put each cheque into a different envelope, and deposit them one at a time. Or at least put the cheques of different classes into different envelopes. Of course, if I planned ahead, I could enter them all into the accounting software before I go to the bank, and in that case need not enter the individual tallies. But you don’t always plan like this.

Does any bank’s ATM do this?

Of course even better would be to let me make my deposits at home, with my scanner. No, I’m not kidding. More and more, people are happy to get scans of their cancelled cheques back instead of the physical paper ones. The banks are moving to doing it all inter-bank with scans. So let the customer do it too. Of course, the system would scan the OCR digits with cheque number, account number and routing number and not let the same cheque be deposited twice. A live query could be made after you scan with the payer’s bank. And you would be required to hold on to the cheques you scan, since any one could be challenged, and if challenged you would have to bring the physical one down to the bank. And perhaps you would have to bring them all down eventually for final records.

And eventually of course I could duplicate paypal, by writing you a cheque and sending you a scan of it which you can then cash — in which case we should just go to full electronic money.

Naturally all of this would only be for well trusted regular customers, and the money would probably be on invisible hold in your bank account just like ATM deposits often are until the bank looks at them.

Boot-oriented disk defragmenter

Everybody is annoyed at how long it takes computers to boot. Some use hibernate mode to save a copy of the system in a booted state, which is one approach. Booting procedures have also gotten better about running stuff in parallel.

How about watching a system as it boots, and noting what disk blocks are read to boot it? Then save that map for the disk defragmenter or other disk organizer and have it try to rearrange the files needed at boot so they are all contiguous and close together. This should reduce the role of I/O as a boot bottleneck. Many disks today can read 50 megabytes in a second, most OSs only need to access a few hundred megabytes in order to boot, and they have the RAM to need to read each file only once.
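To make the payoff concrete, here is a minimal sketch, assuming you have already captured the ordered list of block numbers read during boot with a block-level trace tool, that totals how far apart consecutive reads are. A boot-aware defragmenter would aim to drive that total toward zero:

```c
/* Sketch: read one block number per line (an ordered log of the blocks
 * fetched during boot, captured with a block-level trace tool) and total
 * the distance between consecutive reads. A boot-aware defragmenter would
 * try to make this total as small as possible. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char line[64];
    long long prev = -1, block, total_jump = 0, reads = 0;

    while (fgets(line, sizeof(line), stdin)) {
        block = atoll(line);
        if (prev >= 0)
            total_jump += llabs(block - prev);
        prev = block;
        reads++;
    }

    printf("%lld reads, total seek distance %lld blocks\n", reads, total_jump);
    if (reads > 1)
        printf("average jump of %lld blocks per read\n", total_jump / (reads - 1));
    return 0;
}
```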

External laptop batteries, especially on planes

Recently I purchased an external battery for my Thinkpad. The internal batteries were getting weaker, and I also needed something for the 14 hour overseas flights. I picked up a generic one on eBay, a 17 volt battery with about 110 watt-hours, for about $120. It's very small, and only about 1.5 lbs. Very impressive for the money. (When these things first came out they had half the capacity and cost more like $300.)

There are downsides to an external: the laptop doesn't know how much charge is in the battery and doesn't charge it. You need an external charger. My battery came with its own very tiny charger which is quite slow (it takes almost a day to recharge from a full discharge.) The battery has its own basic gauge built in, however. An external is not as efficient as an internal (you convert the 17v to the laptop's internal voltage, and you also do wasteful charging of the laptop's internal battery if it is not full, though you can remove the internal at the risk of a sudden cutoff should you get to the end of the external's life.)

However, the plus is around 9 to 10 hours of life in a small, cheap package, plus the life of your laptop's internal battery. About all you need for any flight or long day's work.

It's so nice that in fact I think it's a meaningful alternative to the power jacks found on some airlines, usually only in business class. I bought an airline adapter a few years ago for a similar price to this battery, and even when I have flown in business class, half the time the power jack has not been working properly. Some airlines have power in coach but it's rare. And it costs a lot of money for the airlines to fit these 80 watt jacks in lots of seats, especially with all the safety regs on airlines.

I think it might make more sense for airlines to just offer these sorts of batteries, either free or for a cheap rental fee. Cheaper for them and for passengers than the power jacks. (Admittedly the airline adapter I bought has seen much more use as a car and RV adapter.) Of course they do need to offer a few different voltages (most laptops can take a range) but passengers could reserve a battery with their flight reservation to be sure they get the right one.

It would be even cheaper for airlines to offer sealed lead-acid batteries. You can buy (not rent) an SLA with over 200 watt-hours (more than you need for any flight) for under $20! The downside is they are very heavy (17lbs) but if you only have to carry it onto the plane from the gate this may not be a giant barrier.

Of course, what would be great would be a standard power plug on laptops for external batteries, where the laptop could use the power directly, and measure and charge the external. Right now the battery is the first part to fail in a laptop, and as such you want to replace batteries at different times from laptops. This new external should last me into my next laptop if it is a similar voltage.

On the need for self-replicating nanotech assemblers

In recent times, my colleagues at the Foresight Nanotech Institute and I have moved towards discouraging the idea of self-replicating machines as part of molecular nanotech. Eric Drexler, founder of the institute, described these machines in his seminal work “Engines of Creation,” while also warning about the major dangers that could result from that approach.

Recently, dining with Ray Kurzweil on the release of his new book The Singularity Is Near: When Humans Transcend Biology, he expressed the concern that the move away from self-replicating assemblers was largely political, and that they would still be needed as a defence against malevolent self-replicating nanopathogens.

I understand the cynicism here, because the political case is compelling. Self-replicators are frightening, especially to people who get their introduction to them via fiction like Michael Crichton’s “Prey.” But in fact we were frightened of the risks from the start. Self-replication is an obvious model to present, both when first thinking about nanomachines, and in showing the parallels between them and living cells, which are of course self-replicating nanomachines.

The movement away from them, however, has solid engineering reasons behind it, as well as safety reasons. Life has not always picked the most efficient path to a result, just one that is sufficient to outcompete the others. In fact, red blood cells are not self-replicating. Instead, the marrow contains the engines that make red blood cells and sends them out into the body to do their simple job.

Read on…

Linux tester and linuxator for donated computers

A lot of older computers that people are ready to throw away can be decent linux boxes, in schools or in other charitable locations.

I propose a simple small program (possibly fitting on a floppy as well as a CD) which can be inserted into an old computer. It scans the hardware and compares it with databases of chipsets, cards and other parts which are known to work well under linux (or your favourite BSD or other OS) and to work well together. It would also evaluate the machine and put it in a “performance class” to describe just how good it is. It might connect to the net (if it can) to download the latest such lists, info and software updates.
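As a sketch of the hardware-scanning half, assuming a kernel new enough to expose PCI devices through /sys, the program could list each device's vendor:device ID and look it up in the known-good database. The two whitelist entries below are just placeholders:

```c
/* Sketch: enumerate PCI vendor:device IDs through sysfs and flag the ones
 * found in a (placeholder) list of hardware known to work well under linux.
 * A real tester would load that list from an updatable database. */
#include <stdio.h>
#include <string.h>
#include <dirent.h>

static const char *known_good[] = {
    "0x8086:0x1229",   /* placeholder: an Intel ethernet chip */
    "0x10ec:0x8139",   /* placeholder: a Realtek ethernet chip */
    NULL
};

static void read_id(const char *dev, const char *file, char *out, int len)
{
    char path[256];
    FILE *f;

    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/%s", dev, file);
    out[0] = '\0';
    if ((f = fopen(path, "r")) != NULL) {
        if (fgets(out, len, f))
            out[strcspn(out, "\n")] = '\0';
        fclose(f);
    }
}

int main(void)
{
    DIR *d = opendir("/sys/bus/pci/devices");
    struct dirent *ent;
    char vendor[16], device[16], id[40];

    if (d == NULL) {
        perror("/sys/bus/pci/devices");
        return 1;
    }
    while ((ent = readdir(d)) != NULL) {
        if (ent->d_name[0] == '.')
            continue;
        read_id(ent->d_name, "vendor", vendor, sizeof(vendor));
        read_id(ent->d_name, "device", device, sizeof(device));
        snprintf(id, sizeof(id), "%s:%s", vendor, device);

        int good = 0;
        for (int i = 0; known_good[i] != NULL; i++)
            if (strcmp(id, known_good[i]) == 0)
                good = 1;
        printf("%-14s %s  %s\n", ent->d_name, id,
               good ? "known good" : "check the database");
    }
    closedir(d);
    return 0;
}
```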

The goal is to test if the machine can do a problem-free install, one that asks almost no questions, and converts the system to a nice linux box, ready for some student to run for e-mail, web, and writing. There are so many machines to donate that we can insist on perfection. The program could also tell the owner what upgrades it might need to be good or to reach a performance class. “This machine is good but with 128M of ram it would reach performance class N.” “This machine would be perfect if you swapped the ethernet card for one of these models” and so on.

Next, of course, is a simple distribution, to install from CD-rom or over the network, that can be quickly installed with no questions asked except perhaps time-zone (if it can’t figure that out from the old OS.) The goal is a system that can be run by untrained admins who may never have seen the insides of linux or any other OS.

Trials and switching servers

All my sites were off today as I did an emergency switch of servers.

The whole story is amusing, so I’ll tell it. I used to host my web sites with Verio shared hosting, but they were overpriced and did some bad censorship acts, so I was itching to leave. One day my internet connection went out, so I went onto my deck with my laptop to see what free wireless there was in the area. One strong one had an e-mail address as the SSID, though it was WEP-locked. Later, I e-mailed that address with a “hi neighbour” and met the guy around the corner. He had set the SSID that way to get just such a mail as mine. (I have a URL as my SSID now for the same purpose.)

My neighbour, it turned out, knew some people I knew in the biz, and told me about a special club he was in, called “Root Club.” The first rule of Root Club, he joked, was that you do not talk about root club. Now that I’m out, I can tell the story. Root Club was started as a group of sysadmins who shared a powerful colocated web server, and all shared the root password and sysadmin duties.

Network storage on the cheap for the home

Corporate servers have used network storage, ranging from fileservers to SANs, for several years. Now, with USB IDE external drive cases selling for as little as $20, people are using external drives on their PCs, and get pretty good response with 480 mbit USB 2 or with 1394/firewire. You can get most of the performance of a 7200 rpm drive over USB 2.

So I want to call for the production of a cheap home external storage box. This box would have slots for 4 or 5 drives and cooling for them, ideally as big a fan as possible to keep the rpms and noise low in the desk model, and an even more powerful fan in the basement model. The desk model might have sound insulation though that’s hard to combine with good cooling.

While this box could and probably should have USB or 1394, even better would be gigabit ethernet, which is fast enough for most people’s storage needs, especially if there is a dedicated gigabit ethernet card in the PC just for talking to the storage.

This could allow for a radical redesign of PC cases of all types, with no need for the space and heat of drives. And of course these diskless PCs would be much quieter. You could put your disk cube under your desk (and thus have it be a bit quieter) but ideally you would like the basement model, to which you string cat5e cable and get a mostly silent PC.

Read on…

We strike down the broadcast flag!

On both a personal and professional note, I am happy to report that the federal courts have unanimously ruled to strike down the FCC's broadcast flag due to our lawsuit against them.

I participated directly in this lawsuit, filing an affidavit on how, as a builder of a MythTV system and writer of software for MythTV, I would be personally harmed if the flag rule went into effect. The thrust of the case was that the FCC, which is empowered to regulate interstate communications, had no authority to regulate what goes on inside your PC. The court bought that, but we had to show that the actual plaintiffs in the case would be harmed, not simply the general public; thus the declarations by myself and various other members of EFF and other plaintiffs.

The broadcast flag was an insidious rule because, as I like to put it, it didn't prohibit Tivo from making a Tivo (as long as they got it certified as having pledged allegiance to the flag.) It stopped somebody from designing the next Tivo, the metaphorical Tivo, meaning bold new innovation in recording TV.

I would like to particularly thank Public Knowledge, which spearheaded this effort and funded most of it.

Here's an AP Interview with me on the issue.

Moratorium on computers calling me by name (and form letters)

Dear [[blog-reader's name]]:

When it first started arising, in the 60s and 70s, everybody thought it was so cute and clever that computers could call us by name. Some programs even started by asking for your name, only to print "Hi, Bob!" to seem friendly in some way.

And of course a million companies were sold mailing list management tools to print form letters, filling in the name of the recipient and other attributes in various places to make the letter seem personal. And again, it was cute in its way.

But not any more. We've all figured it out. Nobody says, "Wow, this letter has 'Dear Brad' in it, it must have been written personally for me." Nobody is fooled any more. In fact, the reverse is now true. It's bordering on offensive. If an E-mail starts with "Dear Brad" it is more likely than not to be spam.

Sometimes though, I get form letters from real companies I deal with, and they still like to put my name in it, like they used to on paper. As you probably know, in E-mail today, you don't put in salutations any more unless it's a mail to a stranger.

So let's get the word out. Stop it. No more form letters where the computer oh-so-cleverly manages to fill in a field with our name. (Unless it's amusing, and they are writing to "Dear Mr. Association") If it's legitimate bulk mail, don't try to pretend you're not bulk mail. That's what spammers do. Be honest that you're bulk mail.

If you have actual relevant data to fill in, fill it in, but put it in a table so I can skip the form letter garbage and get to the actual data about me you're trying to tell me. Put my name at the top in a nice computer-style box, "Prepared for: Brad Templeton."

Leave the use of my name to people writing messages for me. You're not fooling anybody.

Yours truly,
[[Insert name here]]

Open Source's backwards-compatibility failure

Linux distributions with package managers like apt promise an easy world of installing lots of great software. But they've fallen down in one respect here. There are thousands of packages for the major distributions (I run 3 of them: Debian, Fedora Core and Gentoo) but most packages depend on several other packages.

The developers and packagers tend to run recent, even bleeding-edge versions of their systems. So when they package, the software claims it depends on very recent versions of other programs, even if it doesn't. This is not surprising -- testing on lots of old systems is drudgework nobody relishes doing.

So when you see a new software package you want, the ideal is you can just grab it with apt-get or yum. The reality is you can only do this if you're running a highly up-to-date system. Debian has become the worst offender. Debian's "Stable" distribution is several years old now. To run debian reasonably, even to just be able to upgrade to fix bugs in software you use, you have to run the testing distribution, and most probably the unstable one. I run the unstable, and it's more stable than the name implies, but ordinary users should not be expected to run an unstable distribution.

To get new software, you are often forced to upgrade, sometimes your whole OS. And that's free to do and often it works, but you can't depend on it. More than once I have lost a day of uptime to major upgrade efforts.

Let's contrast that with Windows. The vast majority of Windows programs will install, in their latest version, on 7 year old Windows 98, and almost all will install on 5 year old Windows 2000. This is partly because Windows has fewer milestones to test to, but also because coders know that it's quite a hurdle to insist users pay money to upgrade Windows. (And Windows upgrades are even more of a pain than linux ones.)

The linux approach ends up forcing the user to choose between the risky course of constant incremental upgrades, taking occasional random plunges into major upgrades, or simply not being able to run interesting new software or the latest versions and fixes of older software.

That's a failure. Non-guru users are not able to deal with any of those choices.

Testing with every different version of every dependent package (and every kernel) is not going to happen, but it would be nice if packagers worked hard to figure out what versions of dependencies they really need, even if they don't test it enough. Packages might say, "I was tested with 2.1, but I probably work with 1.0." Then wait for test reports and possibly report being tested with earlier and earlier dependencies.

This doesn't mean that sometimes you won't truly need the latest version of a dependency, and shouldn't say so. But it sure would make it easier for the ordinary user to participate in linux if this was the exception, not the rule.

Telepathic User Interface

In writing an essay I'm working on about why hard disk video recorders are as novel as they are, I explored a concept I think is worthy of its own blog entry. This is the concept of Telepathic User Interface or TUI.

A TUI is a user interface that you use so much that it becomes unconscious. Perhaps the classic TUI is the touch-typist's keyboard. I just think letters and they simply come out. I am no longer conscious of the mechanism. In many cases I think sets of letters and even words and they just come out. From the mind to the computer -- telepathic.

Other examples include the car. After you drive a car for a while it becomes an extension of yourself. Learning the clutch is hard but soon you are not thinking about it at all. And the remote control on a Tivo, I write in the essay, has aspects of a TUI -- you learn how to move around a program without thinking.

A TUI is not always a natural interface or even a good interface. It's just one you use often enough to make it subconscious. It doesn't have to be intuitive -- an intuitive interface is simply one that's easy to guess the operation of.

When it comes to computer software, this helps us understand the dichotomy between the GUI/WIMP style and the command line and keyboard style which still has many devotees.

GUI interfaces are easy to learn, and easy to guess. And of course for positional inputs they are markedly superior and often the only choice. But by and large, the story of Mice and Menus took a path away from the TUI. You have to focus your eyes on the pointer in order to use a GUI, and you have to read to use a menu. It's much more difficult to use such a system unconsciously. (Mouse gesture interfaces change that a bit.)

Fans of text editors like VI and Emacs, with complex, non-intuitive keyboard interfaces, love them because they have reached TUI state, at least in part. Many of the operations have become unconscious, and thus much faster and easier as far as the user is concerned.

Command line interfaces are never completely TUI, but they take advantage of the TUI nature of touch-typing. Because touch typing maps words from brain to screen, complex commands can have a fair bit of TUI to them.

It is a rare technology that can earn a TUI. You need to be using it a great deal, and regularly. Video games also develop TUIs because of the devotion of their players. And while it doesn't seem to matter how intuitive the interface is, since many users will never attain the TUI state with a program, that's no excuse for not trying to be more intuitive and easy to handle.

On the other hand, programs that don't provide keyboard shortcuts and other muscle-memory schemes for doing things will never develop a TUI, no matter how heavily used they are. Who changes a font in the Excel spreadsheet without being conscious of all the steps they are taking?

About the Brondell Swash toilet

You'll recall an earlier post about the Silicon Valley 100 and getting stuff for free. I promised I had something to say about toilets anyway, so I will describe my experience with the Swash I was given as well as the Daelim Cleanlet which I bought a few years ago.

If you've gone to Japan, you have probably seen these fancy high-tech toilet seats, which try for a bit of bidet function in a seat. Their prime function is to have a heated water reservoir and a little wand that comes out to squirt water at what the Daelim manual calls the "personal area" and the "feminine area." They also tend to heat the seat, and make it descend slowly so it doesn't make a noise when you put it down. Both of these also have an optional fan to blow heated air to dry your personal or feminine area.

I've got these two units, and I have tried various others in Japan. None of them can really compete with the water flow and cleaning ability of a real bidet, but most people don't have the space in their bathrooms for one of those. I was going to suggest the slogan "Every asshole needs one" but I don't think they are likely to use it.

These bidet-seats are about the only high-tech toilet invention to get a decent market, which is surprising because if you ask the patent office, toilet inventions are among the most common patent applications. I guess people spend a lot of time on toilets with nothing else to think about.
