Technology

I remember IBM

Everybody’s pulling out IBM PC stories on the 25th anniversary so I thought I would relate mine. I had been an active developer as a teen in the 6502 world — Commodore PET, Apple ][, Atari 800 and the like — and sold my first game to Personal Software Inc. back in 1979. PSI was just starting out, but the founders hired me on as their first employee to do more programming. The company became famous shortly thereafter by publishing VisiCalc, which was the first serious PC application, and the program that helped establish Apple as a computer company outside the hobby market.

In 1981, I came back for a summer job from school. Mitch Kapor, who had worked for Personal Software in 1980 (and had been my manager at the time), had written a companion for VisiCalc, called VisiPlot. VisiPlot did graphs and charts, and a module in it (VisiTrend) did statistical analysis. Mitch had since left, and was on his way to founding Lotus. Mitch had written VisiPlot in Apple ][ Basic, and he won’t mind if I say it wasn’t a masterwork of code readability; indeed I never gave it more than a glance. Personal Software, soon to be renamed VisiCorp, asked me to write VisiPlot from scratch, in C, for an unnamed, soon-to-be-released computer.

I didn’t mention this, but I had never coded in C before. I picked up a copy of the Kernighan and Ritchie C manual, and read it as my girlfriend drove us over the plains on my trip from Toronto to California.

I wasn’t told much about the computer I would be coding for. Instead, I defined an API for doing I/O and graphics, and wrote to a generalized machine. Bizarrely (for 1981), I did all this by dialing up by modem to a Unix time-sharing service called CCA on the east coast. I wrote and compiled in C on Unix, and defined a serial protocol to send graphics back to, IIRC, an Apple computer acting as a terminal. And, in 3 months, I made it happen.

(Very important side note: CCA-Unix was on the Arpanet. While I had been given some access to an Arpanet computer in 1979 by Bob Frankston, the author of VisiCalc, this was my first day-to-day access. That access turned out to be the real life-changing event in this story.)

There was a locked room at the back of the office. It contained the computer my code would eventually run on. I was not allowed in the room. Only a very small number of outside companies were allowed to have an IBM PC — Microsoft, UCSD, Digital Research, VisiCorp/Software Arts and a couple of other applications companies.

On this day, 25 years ago, IBM announced their PC. In those days, “PC” meant any kind of personal computer. People look at me strangely when I call an Apple computer a PC. But not long after that, most people took “PC” to mean IBM. Finally I could see what I was coding for. Not that the C compilers were all that good for the 8088 at the time. However, 2 weeks later I would leave to return to school. Somebody else wrote the library for my API so that the program would run on the IBM PC, and they released the product. The contract with Mitch required that they pay royalties to him for any version of VisiPlot, including mine, so they bought out that contract for a total value close to a million — that helped Mitch create Lotus, which would, with assistance from the inside, outcompete and destroy VisiCorp.

(Important side note #2: Mitch would use the money from Lotus to found the E.F.F. — of which I am now chairman.)

The IBM PC was itself less exciting than people had hoped. The 8088 tried to be a 16 bit processor but it was really 8 bit when it came to performance. PC-DOS (later MS-DOS) was pretty minimal. But it had an IBM name on it, so everybody paid attention. Apple bought full page ads in the major papers saying, “Welcome, IBM. Seriously.” Later they would buy ads with lines like Steve Jobs saying, “When I invented the personal computer…” and most of us laughed, but some of the press bought it. And of course there is a lot more to this story.

And I was paid about $7,000 for the just under 4 months of work, building almost all of an entire software package. I wish I could program like that today, though I’m glad I’m not paid that way anymore.

So while most people today will have known the IBM PC for 25 years, I was programming for it before it was released. I just didn’t know it!

Get a giant display screen

Yesterday I received a Dell 3007WFP panel display. The price hurt ($1600 on eBay, $2200 from Dell, though sometimes there are coupons) and you need a new video card (and to top it off, 90% of the capable video cards are PCI-e, which may mean a new motherboard too). But if you are a digital photographer, moving to this 2560 x 1600 (4.1 megapixel) display is quite a jump. This is a very similar panel to Apple's Cinema Display, but a fair bit cheaper.

It's great for ordinary windowing and text of course, which is most of what I do, but for that it's a great deal cheaper just to get multiple displays. In fact, up to now I've been using 21" CRTs, since I have a desk designed to hold them, and they are cheaper and blacker to boot. You can have two 1600x1200 21" CRTs for probably $400 today and get the same screen real estate as this Dell.

But that really doesn't do for photos. If you are serious about photography, you almost surely have a digital camera with more than 4MP, and probably way more. If it's a cheap-ass camera it may not be sharp if viewed at 1:1 zoom, but if it's a good one, with good lenses, it will be.

If you're also like me you probably never see 99% of your digital photos except on screen, which means you never truly see them. I print a few, mostly my panoramics and finally see all their resolution, but not their vibrance. A monitor shows the photos with backlight, which provides a contrast ratio paper can't deliver.

At 4MP, this monitor is only showing half the resolution of my 8MP 20D photos. And when I move to a 12MP camera it will only be a third, but it's still a dramatic step up from a 2MP display. It's a touch more than twice as good, because the widescreen aspect ratio is a little closer to the 3:2 of my photos than the 4:3 of 1600x1200. Of course, if you shoot with a 4:3 camera, you'll be wasting pixels here. In both cases you can crop a little so you are using all the pixels. (In fact, a slideshow mode that zooms and crops to fully use the display would be handy. Most slideshows offer only 1:1, or zoom-to-fit with no cropping.)
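Here's a minimal sketch of the arithmetic such a zoom/crop mode would do. The 3504x2336 frame size is just my assumption for an 8MP 3:2 camera:

```c
#include <stdio.h>

/* Zoom-to-fill: scale the image so it covers the whole screen,
   then crop the overflow on one axis. */
int main(void) {
    double img_w = 3504, img_h = 2336;   /* assumed 8MP, 3:2 frame */
    double scr_w = 2560, scr_h = 1600;   /* the Dell's 16:10 panel */

    double sx = scr_w / img_w, sy = scr_h / img_h;
    double s = (sx > sy) ? sx : sy;      /* the larger factor fills the screen */

    double used_w = scr_w / s, used_h = scr_h / s;  /* image window shown */
    printf("show a %.0fx%.0f window, cropping %.1f%% of the image\n",
           used_w, used_h,
           100.0 * (1.0 - (used_w * used_h) / (img_w * img_h)));
    return 0;
}
```

For a 3:2 image on this 16:10 panel the crop is only about 6% of the area, which is why the mode would be so painless.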

There are many reasons for having lots of pixels aside from printing and cropping. Manipulations are easier and look better. But let's face it, actually seeing those pixels is still the biggest reason for having them. So I came to the conclusion that I just haven't been seeing my photos, and now I am seeing them much better with a screen like this. Truth is, looking at pictures on it is better than any 35mm print, though not quite at the quality of a 35mm slide.

Dell should give me a cut for saying this.

Long ago I told people not to shoot on 1MP and 2MP digital cameras instead of film, because displays would one day get so good that the photos would look obviously old and flawed. That day is now here. Even my 3MP D30 pictures don't fill the screen. I wonder when I'll get a display that makes my 8MP pictures look small.

No more monitor out of scan range

It can be very frustrating when a PC decides to send a signal to a monitor that is outside its scan range. Yes, the systems try hard to avoid it, via things like plug-and-play EDID information on monitor specs, and by reverting changes to monitor settings if you don’t confirm them after a few seconds, but sometimes it still happens. It happens after a monitor swap, if you don’t have the monitor turned on when you boot, or if you have a KVM switch that doesn’t pass along the monitor’s information.

The result is a blank or scrambled screen. If you know how to reboot your PC without seeing it, you can try that, but even that can fail.

So I suggest that monitors be a bit better about signals that are outside of their range. If the dot clock is too fast, for example, consider dividing it by two if the electronics can handle that, showing half the pixels. If there are too many scan lines, just show as many as you can. The bottom of the screen will be missing, but that’s better than no view at all. If the refresh frequency is too high (though usually that’s because the dot clock is too fast) you can skip every other frame, for a very flickery display, but at least not a blank one. Whatever can be done saves people from blindly hitting the reset button.
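In pseudo-firmware terms, the fallback might look like this sketch; the structure and names are hypothetical, not from any real monitor:

```c
/* A sketch of the proposed degrade-to-fit logic, as it might run in a
   monitor's scaler firmware. All names and limits are hypothetical. */
struct vid_mode { double dot_clock_mhz; int lines; double refresh_hz; };

void degrade_to_fit(struct vid_mode *m, const struct vid_mode *max) {
    if (m->dot_clock_mhz > max->dot_clock_mhz)
        m->dot_clock_mhz /= 2;   /* sample every other pixel: half width beats blackness */
    if (m->lines > max->lines)
        m->lines = max->lines;   /* draw the lines that fit; the bottom is lost */
    if (m->refresh_hz > max->refresh_hz)
        m->refresh_hz /= 2;      /* display alternate frames: flicker, not blankness */
}
```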

eBay - let me list unfavourite sellers

Ok, so there's a million things to fix about eBay, and as I noted before my top beef is the now-common practice of immense shipping charges on below-cost prices -- which makes it nearly impossible to search by price, because the listed price is less and less relevant.

Here's one possible fix. Just as you can have a list of favourite sellers, allow me to add a seller to a list of blocked sellers, whose listings I would no longer see. Once I scan a seller's reputation and see that I don't trust them, I don't want to be confused by their listings. Likewise I could block the sellers who pad their shipping, to unclutter my search results -- though that sort of block might better be temporary.

Ideally let sellers know they are getting on these lists, too. They should know that their practices are costing them bidders.

Is there a good electronic calendar workflow?

I’ve been playing with various calendar systems, such as Mozilla calendar, Korganizer, Google Calendar, Chandler and a few others, and I’m finding them wanting. I have not used iCal or Outlook so perhaps they solve all my problems, but I doubt they do.

I see two ways one would want to merge in additional calendars, neither of which is supported very well.

The first type of merger is an intimate one, for calendars in which I will attend most or all events. Effectively they are like extensions of my own calendar, in that I should be considered busy for any event in these calendars, unless I explicitly say otherwise. One example would be a couple’s calendar, for social events attended as a couple — parties, weddings etc. Family calendars and workgroup calendars could also qualify.

The other class of calendar is a suggested calendar. These calendars are imported, but I will be attending relatively few events from them; it’s more that I want to browse them. There are many such calendars now available on the calendar sharing services.

In a few of the tools you can copy an event from an imported calendar into your personal calendar, but after you do, you see the event twice. What you really want is a pointer to the imported event. Minor changes in the imported event should flow through into your personal calendar. Changes in date, or changes that cause a conflict, should also flow through but be flagged as needing attention.
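A sketch of what such a pointer might store; the field names are illustrative, not anything from the iCalendar spec:

```c
/* An event "pointer" rather than a copy, for a simple local model. */
typedef struct {
    char *calendar_id;   /* which imported calendar the event lives in */
    char *event_uid;     /* stable UID of the remote event */
    long  accepted_start; /* the start time the user originally accepted */
    int   needs_review;  /* set when the remote date moved or now conflicts */
} linked_event;

/* On each sync, refresh from the source. Minor edits flow through
   automatically; date changes get flagged instead of silently applied. */
void sync_linked(linked_event *le, long remote_start, int conflicts) {
    if (remote_start != le->accepted_start || conflicts)
        le->needs_review = 1;    /* surface to the user */
}
```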

Tools like Google calendar, which allow you to access your calendar from remote locations (and easily publish public calendars) are good but they have privacy problems. As you may know if you read this blog, information on your own computer is protected by the 4th amendment. Information on somebody else’s computer (like Google’s) is not. As such, you would like to have any export of your personal calendar be encrypted, and accessible only while you are logged on with the password. Distilled, “free/busy” information may remain unencrypted for access even when you’re not online. However, this is a hard engineering problem to get right — in the long run we need the scope of the 4th amendment re-expanded so that “your papers” include not just your records stored at home, but your records stored on external servers.

Have I just not used enough tools? Do some calendars work this way that I haven’t seen?

Don't be Evite: Put date of party into party title

I get a lot of party invites by Evite, and it’s very frustrating. I’ve missed some events because they refuse to improve their interface.

When I get event invites, I save them to a mail folder. Then I can browse the mail folder later to check dates. If I am not in front of my calendar (which alas is not available everywhere), I can go back later and enter the items I saved.

When I am on the road, sometimes my connectivity is bursty. That means I download mail and read it offline. But this is useless with Evites, as they don’t tell you anything about the event except a usually vague title if you are offline. After that it’s easy to forget you needed to go back and re-read the thing while online. Almost all other invites I get put the party date into the subject line, as it should be.

I’ve complained to Evite several times about this. So have many other people. They say they “are taking it under advisement.” One friend pushed Evite (using the threat of a spam complaint, which is not really valid here) to put in a block so she doesn’t get evites. Her friends get told to send her a direct invitation. I’ve concluded that since this change is pretty easy to make, Evite has deliberately decided to be user-unfriendly here, in order to get more people to click on the links and see the ads.

While Google gets a lot of ribbing over the “don’t be evil” mantra, the truth is it started out with a simple principle like this one: don’t do things deliberately against user interest just because they might generate a bit more advertising revenue. Examples of this sort of “evil” include pop-up ads, animated ads and paid placement in search results, which are all things other sites did. I would have hoped more companies would have learned that lesson and tried to emulate Google’s successful strategy. No luck, at least with Evite.

So don’t be Evite. If you use their product, stuff at least the date, and if necessary the place, into what they consider the short title of the party, even if you must shorten the title. Yes, you will be entering the date twice, but your guests will thank you.

IRC Server and other collaboration tools in a wireless AP

Most people use wireless access points to provide access to the internet, of course, but often there are situations where you can’t get access, or at least not access fast enough to be meaningful (e.g. a dialup connection quickly gets overloaded by all but the lightest activity).

I suggest that AP firmware be equipped with local services that can be used even with no internet connection. In particular, collaboration tools such as a simple IRC server, and a web server with a tiny wiki or web chat application. Of course, there are limitations on flash size, so for some APs you might build a firmware that rips out the external connection code to make room for the collaboration tools.

There are a variety of open source firmwares out there, particularly for the Linksys WRT54 line of APs, where these features could be added. A few APs have USB ports where you can add USB or flash drives, giving you a serious amount of storage for even more collaborative features.

Then, at conferences, these collaboration APs could be put up, whether or not there is a connection. Indeed, some conferences might decide to deliberately not have an outside connection but allow collaboration.

A multi-voltage power supply for your desk from a PC power supply

I’ve blogged several times before about my desire for universal DC power — ideally with smart power, but even standardized power supplies would be a start.

However, here’s a way to get partway, cheap. PC power supplies are really cheap, fairly good, and very, very powerful. They put out lots of voltages. Most of the power is at +5v, +12v and now +3.3v. In many of them some power is also available at -5v and -12v. The positive voltages can deliver as much as 30 to 40 amps! The -5 and -12 are typically lower power, 300 to 500 mA, but sometimes more.

So what I want somebody to build is a cheap adapter kit (or a series of them) that plugs into the standard Molex connectors of PC power supplies, and then splits out into banks at various voltages, using the simple dual-pin plugs found in Radio Shack’s universal power supplies with changeable tips. USB jacks at +5 volts, with power but no data, would also be available, because that’s becoming the closest thing we have to a universal power plug.

There would be two forms of this kit. One form would be meant to be plugged into a running PC, and have a thick wire running out a hole or slot to a power console. This would allow powering devices that you don’t mind (or even desire) turning off when the PC is off: network hubs, USB hubs, perhaps even phones and battery chargers. It would not have access to the +3.3v directly, as the hard drive Molex connector normally just gives the +5 and +12, with plenty of power.

A second form of the kit would be intended to get its own power supply, perhaps in its own box. These supplies are cheap, and anybody with an old PC has one lying around free, too. Ideally one with a variable speed fan, since you’re not going to use even a fraction of the capacity of this supply and so won’t get it that hot. You might even be able to kill the fan to keep it quiet with low use. This kit would have a switch to turn the supply on, of course, as modern ones only turn on under simple motherboard control.

Now with the full set of voltages, it should be noted you can also get +7v (12 minus 5), 8.7v (call it 9, from 12 minus 3.3), and 1.7v (5 minus 3.3, probably not that useful), and at lower currents, 10v (-5 to +5), 17v (-5 to +12; too bad that’s low current, as a lot of laptops like it), 24v (-12 to +12), 8.3v (-5 to +3.3), and 15.3v (-12 to +3.3).
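If you want to check that list, here’s a tiny program that prints every voltage you can get as the difference between two of the standard ATX rails described above (ground counts as a rail):

```c
#include <stdio.h>

/* Every available voltage is the difference between two supply rails. */
int main(void) {
    double rails[] = { -12, -5, 0, 3.3, 5, 12 };
    int n = (int)(sizeof rails / sizeof rails[0]);
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            printf("%+.1fv to %+.1fv gives %.1fv\n",
                   rails[i], rails[j], rails[j] - rails[i]);
    return 0;
}
```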

On top of that, you can use voltage regulators to produce the other popular voltages, in particular 6v from 7, and 9v from 12 and so on. Special tips would be sold to do this. This is a little bit wasteful but super-cheap and quite common.
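How wasteful? A linear regulator at a given current is roughly Vout/Vin efficient, so the drops mentioned above are cheap but lossy. A quick check of the two examples from the text:

```c
#include <stdio.h>

/* Rough efficiency of dropping a rail with a linear regulator:
   output power / input power = Vout / Vin at the same current. */
int main(void) {
    double pairs[][2] = { {7.0, 6.0}, {12.0, 9.0} };  /* Vin, Vout */
    for (int i = 0; i < 2; i++)
        printf("%.0fv from %.0fv: %.0f%% efficient\n",
               pairs[i][1], pairs[i][0], 100.0 * pairs[i][1] / pairs[i][0]);
    return 0;
}
```

About 86% for 6v from 7v, and 75% for 9v from 12v: a little bit wasteful, as noted, but fine at these small currents.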

Anyway, the point is, you would get a single box, you could plug almost all your DC devices into it, and it would be cheap-cheap-cheap, because of the low price of PC supplies. About the only popular things you can’t plug in are the 16v and 22v laptops, which require 4 amps or so. 12v laptops of course would do fine. At the main popular voltages you would have more current than you could ever use; in fact fuses might be in order. Ideally you could have splitters, so if you have a small array of boxes close together you can get simple wiring.

Finally, somebody should just sell nice boxes with all this together, since the parts for PC power supplies are dirt cheap, the boxes would be easy to make, and one box would replace almost all your power supplies. Get tips for common cell phone chargers (voltage regulators can do the job here, as the currents are so small) as well as battery chargers available with the kit. (These are already commonly available, in many cases from the USB jack, which should be provided.) And throw in special plugs for external USB hard drives (which want 12v and 5v, just like the internal drives).

There is a downside. If the power supply fails, everything is off. You may want to keep the old supplies in storage. Some day, I envision, devices just won’t come with power supplies; you will be expected to have a box like this unless the power need is very odd. If you start drawing serious amperage the fan will need to go on and you might hear it, but it should be pretty quiet in the better power supplies.

Why isn't my cell phone a bluetooth GPS?

GPS receivers with bluetooth are growing in popularity, and it makes sense. I want my digital camera to have bluetooth as well so it can record where each picture is taken.

But as I was driving from the airport last night, I realized that my cell phone has location awareness in it (for dialing 911 and location-aware apps), my laptop has bluetooth in it, and it has mapping software if connected to a GPS — so why couldn’t my cell phone be talking to my laptop to give it my location for the mapping software? Or indeed, why won’t it tell a digital camera that info as well?

Are people making cell phones that can be told to transmit their position to a local device that wants such data?

Update: My Sprint Mogul, whose GPS is enabled by the latest firmware update, is able to act as a bluetooth GPS using a free GPS2Blue program.

Have the OS give user permissions on "privileged" IP ports.

Very technical post here. Among the children of Unix (Linux/BSDs/MacOS) there is a convention that for a program to open a TCP or UDP port from 0 to 1023, it must have superuser permission. The idea is that these ports are privileged, and you don’t want just any random program taking control of such a port and pretending to be (or blocking out) a system service like Email or DNS or the web.

This makes sense, but the result is that all programs that provide such services have to start their lives as the all-powerful superuser, which is a security threat of its own. Many programs get superuser powers just so they can open their network port, and then discard the powers. This is not good security design.

While capability-based-security (where the dispatcher that runs programs gives them capability handles for all the activities they need to do) would be much better, that’s not an option here yet.

I propose a simple ability to “chown” ports (i.e. give ownership and control, as with a file) to specific Unix users or groups. For example, if there is a “named” user that manages the DNS name daemon, give ownership of the DNS port (53) to that user. Then a program running as that user could open that port, and nobody else except root (superuser) could do so. You could also open some ports to any user, if you wanted.
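To see the status quo for yourself, here’s a small C program any ordinary user can run today. Binding a port below 1024 as a non-root user fails with EACCES, which is exactly why these daemons must start as root:

```c
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to claim the DNS port as an ordinary user. */
int main(void) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a;
    memset(&a, 0, sizeof a);
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_ANY);
    a.sin_port = htons(53);   /* DNS: a "privileged" port */

    if (bind(s, (struct sockaddr *)&a, sizeof a) < 0)
        perror("bind to port 53");   /* "Permission denied" as non-root */
    close(s);
    return 0;
}
```

Under the proposal, a "named" user who had been given ownership of port 53 would see this bind succeed, with no superuser step anywhere in the daemon's life.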

Let's see neighbourhood fiber LANs

The phone companies failed at the fiber to the curb promise in most of the USA and many other places. (I have had fiber to the curb at my house since 1992 but all it provides is Comcast cable.)

But fiber is cheap now, and getting cheaper, and unlike wires it presents no electrical dangers. I propose a market in gear for neighbourhoods setting up a fast neighbourhood LAN (NLAN), by running a small fiber bundle through their backyards (or, in urban row housing, possibly over their roofs). Small fiber conduits could be buried in soil even more easily than garden hoses, or run along fences. Then both ends, meeting the larger street or another NLAN, could join up for super-high connectivity.

I would join both ends because then breaks in this amateur-installed line don’t shut it down. The other end need not be at super-speed, just enough so phones work etc. until a temporary above-ground patch can be run above the break.

Of course, you would need consent of all the people on the block (though at the back property line you only need the consent of one of the two sides at any given point.) Municipal regulations could also give neighbours access to the poles though they would probably have to pay a licenced installer.

An additional product to sell would be a neighbourhood server kit, to provide offsite backup for members and video storage. Depending on legal changes, it could be possible to have a block cable company handling the over-the-air DTV stations, saving the need to put up antennas. Deals could be cut with the satellite companies to place a single dish with fancy digital decoder in one house. The cable companies would hate this but the satellite companies might love it.

Of course there does need to be something to connect to at the end of the street for most of these apps, though not all of them. After all, fiber is not that much better than a bundle of copper wires over the short haul of a neighbourhood. But if there were a market, I bet it would come, either with fiber down main streets, fixed wireless or aggregated copper.

Flat panel monitors that interlock on thin edges

Some flat panel displays being made today have modestly thin edges, and people like using them for multi-monitor systems with a desktop that spans one or more monitors.

I suggest a monitor design where the edge moulding on the monitor can come off and be replaced, with care, by a special interlock unit. The interlock would join two monitors together strongly and protect the LCD panels, while bringing the two panels as close together as possible. Most of the strength would be on the back; on the front, the cover would just be a thin but strong strip, in a choice of colours, covering only the small gap between the monitors.

The result would be a good way to make display walls, and of course big multi-monitor desktops. Dell is now selling a 2560 x 1600 monitor for $2100 that is very tempting, but two 1600 x 1200s, for similar screen real estate, can now be had for under $1000, and they don’t require a new $300 video card to boot. Four 1280x1024 displays, though smaller at 17”, can be had for under $1000 and give even more screen real estate, with two dual-head video cards (which cost under $50). Though with 4 screens people don’t necessarily want them so flat any more. Still, a 2x2 grid of 17” displays at $1000 would attract customers if the lines between them were small.

Of course, in time that lovely 4MP display will get cheaper, and an even better one will come along. I am tempted by the 4MP because that’s half the pixels of my 8MP digital camera, and I could finally see some of my images at least at half-res without having to print them. But other than for that, multi-monitor is just fine.

Of course if you use multi-monitors, be sure to visit my panoramic photography pages for super-wide photos you can use as wallpapers on such setups. Regular blog readers can ask me nice and I’ll get you an image 1024 or 1200 high if available.

Hybrid Languages

There are a lot of popular programming languages out there, each popular for being good at a particular thing. The C family languages are fastest and have a giant legacy. Perl is a favoured choice for text manipulation. Today's darling is Ruby, leader of the agile movement. Python is a cleaner, high-level language. PHP aims at quick web/HTML scripting and has simpler access to SQL databases than most. Java's a common choice for large projects, with lots of class libraries; it's slower than C but faster than the interpreted languages.

However, my goal here is not to debate the merits of these languages, which are only barely summed up above (and no doubt incorrectly in some eyes). My goal is to point out that we all love our different languages for different purposes. And more to the point, one of the reasons we love a particular language is that we *know it*. In many cases we might decide we could more quickly solve a problem in a language we know well, even though another language might be better suited overall.

Sometimes I'm sitting coding in one of the more concrete languages, like C or Java, and I think to myself, "This problem would be 2 lines in Perl." It would probably be slower, and Perl would not be a suitable choice for the whole project, so I spend the time to solve the problem in the language I'm coding in.

Many of the languages have mechanisms to deal with foreign or "native" methods, i.e. to deal with objects or functions from another language. Most of these systems are clunky. You would not use them for 3 lines of code, nor would the result be particularly readable.
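As an illustration of the clunk, this is roughly the minimal boilerplate from Perl's own perlembed documentation, just to run one line of Perl from C (build flags come from ExtUtils::Embed):

```c
#include <EXTERN.h>
#include <perl.h>

/* All of this setup and teardown exists to run the one eval_pv() line. */
static PerlInterpreter *my_perl;

int main(int argc, char **argv, char **env)
{
    char *embedding[] = { "", "-e", "0" };

    PERL_SYS_INIT3(&argc, &argv, &env);
    my_perl = perl_alloc();
    perl_construct(my_perl);
    perl_parse(my_perl, NULL, 3, embedding, NULL);
    perl_run(my_perl);

    /* The "2 lines in Perl" we actually wanted: */
    eval_pv("print join(',', sort split / /, 'one two three'), \"\\n\";", TRUE);

    perl_destruct(my_perl);
    perl_free(my_perl);
    PERL_SYS_TERM();
    return 0;
}
```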

So I propose being able to "switch languages" in the middle of a piece of code. You're programming in C, and suddenly you break out into Perl, to do something you immediately know how to do in Perl. You get access to the core data types of the original language, and as much of the complex ones as can be made simple. If you need real in-depth access to the complex data types of the other language, go back to its foreign methods interface and write a remote function.


Combining traffic light control and wireless mesh networking

Here's an idea I had years ago and tried to promote to some of the earliest wireless companies, such as Metricom, without success. I just posted it on Dave Farber's IP list, so I should write it up again for my own blog...

The idea is a win-win situation for wireless service and municipalities. Combine wireless data service with traffic light control. Offer a wireless mesh company the use of a city's traffic light poles -- which provide a nice high spot at every major intersection in town, with power available -- in exchange for using that network for traffic control. Indeed, I think this space is so valuable to the wireless companies that they should probably buy traffic control software and offer it free to the cities.

The bandwidth for light control is of course trivial. One could also support traffic cams (though hopefully not universal surveillance cams) to help provide dynamic adjustments to the traffic system.

Today, full-bore automatic traffic lights are expensive -- $150,000 in many cases. That's because of the need to bring in safety-equipment grade power, and to dig up the road to lay down vehicle sensors, as well as data of course. That's changing. New lights use LEDs and thus a fair bit less power. (Some cities have realized that the LED switch pays for itself very quickly.) I think car sensor tech is changing too; especially with a large market, either LIDAR or CCD cameras with automatic recognition should be capable of good traffic detection without digging up the road.

So it's a win all around. Cities get better traffic flow (and less gas is burned) and wireless networks sprout everywhere to compete with the monopoly cable/ILEC crew.

For places where a full street light is too expensive, I have also suggested the [wireless brokered 4-way stop](/archives/000118.html) as an alternative.

Rethinking household/office power, beyond 60hz

I’ve written before about the desire for a new universal dc power standard. Now I want to rethink our systems of household and office power.

These systems range from 100v to 240v, typically at 50 or 60hz. But very little that we plug in these days inherently wants that sort of power. Most devices quickly convert it to something else. DC devices use linear and switched-mode power supplies to generate lower voltage DC. Fluorescent lights convert to high voltage AC. Incandescent bulbs and heating elements use the voltage directly, but can be designed for any voltage and care little about the frequency. There are a dwindling number of direct 60hz AC motors in use in the home. In the old days clocks counted the cycles, but that’s very rare now.

On top of that, most of what we plug in uses only modest power. The most commonly plugged-in things in my house are small power supplies using a few watts. Most consumer electronics use something in the 50-200 watt range. A few items, such as power tools, major appliances, cooking appliances, heaters, vacuum cleaners and hairdryers, use the full 1000 to 1800 watts a plug can provide.

So with this in mind, how might we redesign household and office power…

Cool Walls

On the walls near our desks now are plates with power and ethernet (and phone, until VoIP takes over). I’ve been wondering if we shouldn’t add another jack — air, and plumb our walls with pipes to move air for cooling electronic devices.

This idea started when I read about a guy who attached a plastic vent hose from the output of his PC fan to a hole he cut in his wall. This directs much of the heat, and some of the noise, into the wall and up to the attic.

I started wondering, shouldn’t we deliberately plumb our houses to cool our devices? And even more, our office buildings? And can we put the blowers at the other end of the pipes, to move the noise away from our devices? How much would we save on air conditioning?


Banks, let me enumerate the line items in my deposits, or let me deposit at home.

At my bank (Wells Fargo) and some others I have checked, the ATM lets you make a deposit with an envelope. You must key in the total amount being deposited, even if you put several cheques in the envelope. This in turn shows up as just one transaction in my statement, and in my download of my transactions to my computer.

That’s not what I want, of course. I want to see the different deposits split out individually. The bank certainly splits them out in any event, to send each cheque out to the bank that will honour it. Why not have me start the process? It might also assure more accurate addition of the amounts.

Of course, this would take a little more time at the ATM, but a lot less time than what I do now — put each cheque into a different envelope, and deposit them one at a time. Or at least put the cheques of different classes into different envelopes. Of course, if I planned ahead, I could enter them all into the accounting software before I go to the bank, and in that case need not enter the individual tallies. But you don’t always plan like this.

Does any bank’s ATM do this?

Of course even better would be to let me make my deposits at home, with my scanner. No, I’m not kidding. More and more, people are happy to get scans of their cancelled cheques back instead of the physical paper ones. The banks are moving to doing it all inter-bank with scans. So let the customer do it too. Of course, the system would scan the OCR digits with cheque number, account number and routing number and not let the same cheque be deposited twice. A live query could be made after you scan with the payer’s bank. And you would be required to hold on to the cheques you scan, since any one could be challenged, and if challenged you would have to bring the physical one down to the bank. And perhaps you would have to bring them all down eventually for final records.
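A sketch of that double-deposit check, assuming the numbers at the bottom of the cheque have already been read into fields; the table and function are illustrative, not any real banking API:

```c
#include <stdio.h>
#include <string.h>

#define MAX_SEEN 10000
static char seen[MAX_SEEN][64];   /* cheques already deposited */
static int n_seen;

/* The routing number, account number and cheque number together
   identify one physical cheque. Returns 1 if it was seen before,
   otherwise records it and returns 0. */
int already_deposited(const char *routing, const char *account,
                      const char *cheque_no) {
    char key[64];
    snprintf(key, sizeof key, "%s|%s|%s", routing, account, cheque_no);
    for (int i = 0; i < n_seen; i++)
        if (strcmp(seen[i], key) == 0)
            return 1;
    if (n_seen < MAX_SEEN)
        strcpy(seen[n_seen++], key);
    return 0;
}
```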

And eventually of course I could duplicate paypal, by writing you a cheque and sending you a scan of it which you can then cash — in which case we should just go to full electronic money.

Naturally all of this would only be for well trusted regular customers, and the money would probably be on invisible hold in your bank account just like ATM deposits often are until the bank looks at them.

Boot-oriented disk defragmenter

Everybody is annoyed at how long it takes computers to boot. Some use hibernate mode to save a copy of the system in a booted state, which is one approach. Booting procedures have also gotten better about running stuff in parallel.

How about watching a system as it boots, and noting what disk blocks are read to boot it? Then save that map for the disk defragmenter or other disk organizer, and have it try to rearrange the files needed at boot so they are all contiguous and close together. This should reduce the role of I/O as a boot bottleneck. Many disks today can read 50 megabytes in a second, and most OSs only need to access a few hundred megabytes in order to boot, and they have the RAM to only need to read each file once.
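A sketch of the idea: log the blocks in the order the boot requested them, then emit a plan that lays them out contiguously in that same order, so the next boot becomes one long sequential sweep. The names and the trace below are made up:

```c
#include <stdio.h>

/* Emit a relocation plan: block i of the boot trace goes to slot i of
   a contiguous target region. Keeping request order (not block order)
   means the head never seeks backwards on the next boot. */
void plan_boot_layout(const long *blocks_in_request_order, int n,
                      long target_region_start) {
    for (int i = 0; i < n; i++)
        printf("move block %ld -> %ld\n",
               blocks_in_request_order[i], target_region_start + i);
}

int main(void) {
    long trace[] = { 88211, 102, 40960, 40961, 7733 };  /* invented trace */
    plan_boot_layout(trace, 5, 1000000);
    return 0;
}
```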

External laptop batteries, especially on planes

Recently I purchased an external battery for my Thinkpad. The internal batteries were getting weaker, and I also needed something for the 14 hour overseas flights. I picked up a generic one on eBay, a 17 volt battery with about 110 watt-hours, for about $120. It's very small, and only about 1.5 lbs. Very impressive for the money. (When these things first came out they had half the capacity and cost more like $300.)

There are downsides to an external: the laptop doesn't know how much charge is in the battery and doesn't charge it, so you need an external charger. My battery came with its own very tiny charger, which is quite slow (it takes almost a day to recharge from a full discharge). The battery has its own basic gauge built in, however. An external is also not as efficient as an internal: you convert the 17v to the laptop's internal voltage, and you also do wasteful charging of the laptop's internal battery if it is not full, though you can remove the internal battery at the risk of a sudden cutoff should you get to the end of the external's life.

However, the plus is around 9 to 10 hours of life in a small, cheap package, plus the life of your laptop's internal battery. About all you need for any flight or long day's work.
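The arithmetic behind that runtime claim, with the laptop's draw as my assumption:

```c
#include <stdio.h>

/* Back-of-envelope runtime check: 110 Wh external battery divided by
   an assumed light-use laptop draw of about 11 W. */
int main(void) {
    double capacity_wh = 110.0;
    double draw_w = 11.0;   /* assumed average draw */
    printf("runtime: %.1f hours\n", capacity_wh / draw_w);  /* ~10 hours */
    return 0;
}
```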

It's so nice that in fact I think it's a meaningful alternative to the power jacks found on some airlines, usually only in business class. I bought an airline adapter a few years ago for a similar price to this battery, and even when I have flown in business class, half the time the power jack has not been working properly. Some airlines have power in coach, but it's rare. And it costs the airlines a lot of money to fit these 80 watt jacks into lots of seats, especially with all the safety regs in aviation.

I think it might make more sense for airlines to just offer these sorts of batteries, either free or for a cheap rental fee. Cheaper for them and for passengers than the power jacks. (Admittedly the airline adapter I bought has seen much more use as a car and RV adapter.) Of course they do need to offer a few different voltages (most laptops can take a range) but passengers could reserve a battery with their flight reservation to be sure they get the right one.

It would be even cheaper for airlines to offer sealed lead-acid batteries. You can buy (not rent) an SLA with over 200 watt-hours (more than you need for any flight) for under $20! The downside is they are very heavy (17lbs) but if you only have to carry it onto the plane from the gate this may not be a giant barrier.

Of course, what would be great would be a standard power plug on laptops for external batteries, where the laptop could use the power directly, and measure and charge the external. Right now the battery is the first part to fail in a laptop, and as such you want to replace batteries at different times from laptops. This new external should last me into my next laptop if it is a similar voltage.

On the need for self-replicating nanotech assemblers

In recent times, I and my colleagues at the Foresight Nanotech Institute have moved towards discouraging the idea of self-replicating machines as part of molecular nanotech. Eric Drexler, founder of the institute, described these machines in his seminal work “Engines of Creation,” while also warning about the major dangers that could result from that approach.

Recently I dined with Ray Kurzweil on the release of his new book The Singularity Is Near: When Humans Transcend Biology. He expressed the concern that the move away from self-replicating assemblers was largely political, and that they would still be needed as a defence against malevolent self-replicating nanopathogens.

I understand the cynicism here, because the political case is compelling. Self-replicators are frightening, especially to people who get their introduction to them via fiction like Michael Crichton’s “Prey.” But in fact we were frightened of the risks from the start. Self-replication is an obvious model to present, both when first thinking about nanomachines and in showing the parallels between them and living cells, which are of course self-replicating nanomachines.

The movement away from them, however, has solid engineering reasons behind it, as well as safety reasons. Life has not always picked the most efficient path to a result, just one sufficient to outcompete the others. In fact, red blood cells are not self-replicating. Instead, the marrow contains the engines that make red blood cells and sends them out into the body to do their simple job.

