Submitted by brad on Thu, 2007-03-29 22:32.
If you’ve looked around, you probably noticed a high-def DVD player, be it HD-DVD or Blu-Ray, is expensive. Expect to pay $500 or so unless you get one bundled with a game console where they are subsidized.
Now they won’t follow this suggestion, but the reality is they didn’t need to make the move to these new DVD formats. Regular old DVD can actually handle pretty decent HDTV movies. Not as good as the new formats, but a lot better than plain DVD. I’ve seen videos with the latest codecs that pack quite a nice HD picture into 2.5 to 3 gigabytes for an hour. I’ve even seen it in less, down to 1.5 gigabytes (actually less than an SD DVD) at 720p 24 fps, though you do notice some problems. But it’s still way better than a standard DVD. Even so, a dual layer DVD can hold about 9 GB, and a double sided dual layer DVD gives you 18 GB if you are willing to flip the disk over to get at special features or the 2nd half of a very long movie. Or of course just do 2-disk sets.
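The back-of-envelope arithmetic behind these claims can be sketched as follows. The 2.5 GB/hour and 9 GB dual-layer figures are the post’s own estimates, not measurements:

```python
# Rough arithmetic for HD-on-DVD capacity. Figures (2.5 GB/hour encodes,
# 9 GB dual-layer discs) come from the post above, not from a spec sheet.

def mbit_per_sec(gb_per_hour):
    """Average video bitrate implied by a gigabytes-per-hour figure."""
    return gb_per_hour * 8 * 1000 / 3600  # decimal GB -> megabits per second

def hours_on_disc(disc_gb, gb_per_hour):
    """Playing time a disc of the given capacity holds at that rate."""
    return disc_gb / gb_per_hour

print(round(mbit_per_sec(2.5), 1))      # ~5.6 Mbit/s for a 2.5 GB/hour encode
print(round(hours_on_disc(9, 2.5), 1))  # 3.6 hours of HD on a dual-layer DVD
```

So even the fatter 2.5 GB/hour encode averages under 6 Mbit/s, well within what a DVD drive can read.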
Now you might feel that the DVD industry would not want to make a new slew of regular DVD players with the fancier chips in them able to do these mp4 codecs when something clearly better is around the corner. And if they did do this, it would delay adoption of whatever high def DVD format they are backing in the format wars. But in fact, these disks could have been readily playable already, with no change, for the millions who watch DVDs on laptops and media center PCs. More than will have HD DVD or Blu-Ray for some time to come, even with the boost the Playstation 3 gives to Blu-Ray.
Submitted by brad on Sun, 2007-03-25 13:56.
One of my current peeves is just how much time we spend maintaining and upgrading computer operating systems, even as ordinary users. The workload for this is unacceptably high, though it’s not as though people are unaware of the problem.
Right now I’m updating one system to the beta of the new Ubuntu Feisty Fawn. (Ubuntu is the Linux distro I currently recommend.) They have done some work on building a single upgrader, which is good, but I was shocked to see an old problem resurface. In a 2 hour upgrade process, it asked me questions it didn’t need to ask me, and worse, it asked them at different times in the process.
Submitted by brad on Thu, 2007-03-22 01:34.
This year’s theme for Burning Man is “the Green Man.” It represents a lot of things. For many it just is an inspiration for art centered on nature or the environment. Others are taking it as a signal to try to be better environmentally. That’s going to be a very tough road for a festival centered on building a temporary city far from everything and pyrotechnic art.
So I wrote up some thoughts on the challenges involved. The toughest problem is that transporting an entire city to the desert and then taking it back is a great personal and artistic endeavour, but not one that can be considered green. All efforts to reduce the pollution at the event are dwarfed by the fuel burned to get there. So what can be done?
Read about the problems of having a green man.
Submitted by brad on Sat, 2007-03-17 19:33.
When I watch SF TV shows, I often try to imagine a backstory that might make the story even better and more science-fictional. My current favourite show is Battlestar Galactica, which is one of those shows where a deep mystery is slowly revealed to the audience.
So based on my own thoughts, and other ideas inspired from newsgroups, I’ve jotted down a backstory to explain the results you see in the show. Of course, much of it probably won’t end up being true, but there are hints that some of it might.
In my Battlestar Galactica back-story I explain:
- Why everybody — even the so-called humans — is a Cylon
- Who the Final 5 are and what they are doing
- Why all this has happened before and is happening again
- How the Cylons were made, and where they got their biotech
Of course, ignore this if you don’t watch the show. It’s pure fanfic/speculation.
The show remains one of the great SF TV shows, though it has been bogging down of late. This timeline may be a plea to return the show to some good hard SF roots. Posthumanism and strife between humans and AIs are hot themes in modern SF, and BSG is most interesting if it’s set in our future with things to say about the relationship between man, machine and artificial biological intelligence.
Update: I have updated the article based on the season finale, which confirmed a number of my speculations though of course not all of them.
Submitted by brad on Fri, 2007-03-16 14:24.
It’s nice to have a headset on your desk telephone for handsfree conversations. A number of phones have a headset jack, either the submini plug used by cell phones or a phone handset jack. Many companies buy headset units that plug into the handset line to provide a headset; some of them are even wireless.
But bluetooth headsets today are cheap, standardized and have a competitive market. And they are of course wireless. Many people already have them for their cell phone. I have seen a very small number of desk phones that support a bluetooth headset, and that shouldn’t be all that expensive, but it’s rare and only on high-end phones.
Here’s the idea: Put bluetooth headset support into the PBX. Bluetooth headsets can’t dial; they can effectively only go on-hook and off-hook with a single button. You would associate (in the PBX) your bluetooth headset with your desk phone. A bluetooth master would be not too far from your desk, and tied into the PBX, or into a PC that talks to the PBX. When your BT headset was in range of this master, it would be tied to it via Bluetooth. (You would have to do an actual bluetooth pairing in advance. In addition, many people have bluetooth headsets normally linked to their cell phone, and call attempts from the headset go to the cell phone. The system would have to switch that over to the PBX.)
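The PBX-side association logic described above could be sketched roughly like this. All the names here (`PbxBluetoothRouter` and friends) are hypothetical; this models the routing decision, not any real PBX API:

```python
# Minimal sketch of the proposed PBX feature: route an incoming call to a
# paired bluetooth headset when it is in range of the desk master, and fall
# back to ringing the desk phone otherwise. Hypothetical names throughout.

class PbxBluetoothRouter:
    def __init__(self):
        self.pairings = {}    # headset id -> extension (paired in advance)
        self.in_range = set() # headsets currently seen by a desk-side master

    def pair(self, headset_id, extension):
        """Record the advance pairing of a headset with a desk extension."""
        self.pairings[headset_id] = extension

    def headset_seen(self, headset_id):
        """A bluetooth master near the desk reports the headset is in range."""
        self.in_range.add(headset_id)

    def route_call(self, extension):
        """Pick the endpoint for an incoming call to this extension."""
        for headset, ext in self.pairings.items():
            if ext == extension and headset in self.in_range:
                return ("headset", headset)
        return ("desk_phone", extension)
```

When the headset wanders out of range (or is switched back to the cell phone), the same logic naturally falls back to the desk phone.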
Submitted by brad on Tue, 2007-03-13 18:35.
When I watch the boundless energy of young children, and their parents’ frustration over it, I wonder how high-tech will alter how children are raised in the next few decades. Of course already TV, and now computers play a large role, and it seems very few toys don’t talk or move on their own.
But I’ve also realized that children, both from a sense of play and due to youthful simplicity, will tolerate some technologies far before adults will. For example, making an AI to pass the Turing Test for children may be much, much simpler than making one that can fool an adult. As such, we may start to see simple AIs meant for interacting with, occupying the minds of and educating children long before we find them usable as adults.
Another technology that young children might well tolerate sooner is virtual reality. We might hate the cartoonish graphics and un-natural interfaces of today’s VRs but children don’t know the interfaces aren’t natural — they will learn any interface — and they love cartoon worlds.
Submitted by brad on Mon, 2007-03-12 19:08.
I've ranted before about just how hard it has become to configure and administer computers. And there are services where you can hire sysadmins to help you, primarily aimed at novice users.
But we advanced users often need help today, too. Mostly when we run into problems we go to message boards, or do web searches and find advice on what to do. And once we get good on a package we can generally fix problems with it in no time.
I would love a service where I can trade my skill with some packages for help from others on other packages. There are some packages I know well, and could probably install for you or fix for you in a jiffy. Somebody else can do the same favour for me. In both cases we would explain what we did so the other person learned.
All of this would take place remotely, with VNC or ssh. Of course, this opens up a big question about trust. A reputation system would be a big start, but might not be enough. Of course you would want a complete log of all files changed, and how they were changed -- this service might apply more to just editing scripts and not compiling new binaries. Best of all, you could arrange to have a virtualized version of your machine around for the helper to use. After examining the differences you could apply them to your real machine. Though in the end, you still need reputations so that people wanting to hack machines would not get into the system. They might have to be vetted as much as any outside consultant you would hire for money.
There seems a real efficiency to be had if this could be made to work. How often have you pounded for hours on something that a person skilled with the particular software could fix in minutes? How often could you do the same for others? Indeed, in many cases the person helping you might well be one of the developers of a system, who also would be learning about user problems. (Admittedly those developers would quickly earn enough credit to not have to maintain any other part of their system.)
The real tool would be truly secure operating systems where you can trust a stranger to work on one component.
Submitted by brad on Sun, 2007-03-11 19:51.
Lots of people love model airplanes, and I bet they would love to simulate dogfights. They can't fire actual projectiles: that would be dangerous and expensive, unworkable due to the weight, and would actually damage planes.
It should be possible to set up a system for dogfights using light, however. One way would be to have planes mount lasers that send out a coded pulse with a bit of dispersion, and have the other planes mount receivers with diffusers to pick up light from a lot of directions. It might be better to go in reverse, the way many shooting games do -- the planes broadcast a coded pulse from some bright LED in a specific colour and the "gun" is just a narrow sight that tries to pick up these pulses. When the gun gets one, it sends it down to the coordinator on the ground, and that tells the target plane it's been hit (possibly forcing it to leave the airspace after some number of hits, or impair the flying controls, etc.)
Of course you need authenticated equipment. If people provide their own it's too easy to cheat: one could just make a gun sight with no barrel instead of a narrow one, or have one on the ground. So some honour might be required here.
It would of course be hard to do, with no cockpit view. Some larger model planes can carry small video cameras for a more realistic dogfight of that sort, but I suspect people could figure something out. The gun could have sensors for the pulses that are wider than the actual "direct hit" sensor, allowing them to tell you when you're getting close, and even showing a screen on a laptop that is not a camera view from the plane but at least a view of how close you are to the target.
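The ground coordinator's job in this scheme is simple enough to sketch. This is a toy model of the protocol described above, not any real RC-combat product:

```python
# Sketch of the ground-station scoring for the light-pulse dogfight idea:
# gun sights decode a target plane's coded pulse and report it down to the
# coordinator, which credits the shooter and decides when the target is out.

class Coordinator:
    def __init__(self, hits_to_down=3):
        self.hits_to_down = hits_to_down
        self.hits_taken = {}  # target plane code -> hits received
        self.score = {}       # shooter id -> confirmed hits

    def report_sighting(self, shooter_id, target_code):
        """A gun sight decoded target_code's pulse. Credit the shooter and
        return True if the target must now leave the airspace."""
        self.score[shooter_id] = self.score.get(shooter_id, 0) + 1
        self.hits_taken[target_code] = self.hits_taken.get(target_code, 0) + 1
        return self.hits_taken[target_code] >= self.hits_to_down
```

The "impair the flying controls" variant would just replace the leave-the-airspace return value with a command sent back up to the target plane.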
Submitted by brad on Thu, 2007-03-08 20:34.
I wrote earlier about the bluetooth vibrator watch. I pushed this in part to promote the idea that phones should (almost) never ring. That ringing is rude to others and violates your own privacy, too.
Sony, Citizen and some others are now releasing bluetooth watches that go beyond this. Your watch should become a very small control station for your larger PDA/phone. Of course digital watches have a small screen, and there are also some nice analog watches where the background of the watch is secretly a screen. This should become cheaper with time.
As before, when a call comes in, your watch should gently vibrate or even just tingle your skin with a small charge. On the screen should be the caller-ID, and the buttons should be marked with choices, such as rejecting the call or accepting it. (These features will be in some of the upcoming bluetooth watches.) If you accept it, the caller would hear you saying that you are getting out your real headset/handset and will talk to them in a few seconds. If you were in a meeting, they might be told it will be more than a few seconds, as you must excuse yourself from the room.
Your watch of course knows if it is on your wrist in many ways, including temperature, so the phone can know to actually ring if you’ve taken the watch off — for example when going to bed, if you want it to ring when you’re in bed, that is.
As the screens increase in resolution, they could also show things like the subject of emails and pages. No more pulling out the blackberry or cell phone — just a subtle glance at your watch when it tingles. It would be nice if you could set your presence on your watch so that all calls go to voice mail, too.
Most flip phones have a 2nd small screen on them so you can see the time and caller-id when the phone is closed. This would not be needed if you use a watch like this, so the cost of the phone can be reduced to make up for the more expensive watch.
Your watch could also bind to your desk phone at the office. And the phone would also know if you are in the office or not.
Imagine a world of peace where you’re never hearing phones going off, and you aren’t seeing people constantly pulling out phones and blackberries to check calls and messages. Imagine a world where people no longer wear cell phones on their belts, either.
The watch could have a small headset in it too, but that would add bulk, and I think it’s better to pull out a dedicated one.
The only real downside to this — you would probably have to charge your watch once a week. This might not easily fit in with the smaller ladies’ watch designs. It should be possible in any larger design. E-ink technology, which takes no power to maintain a display, could also make a great material for the background of your watch dial, or even display a tolerable virtual watch dial for the many who prefer an analog set of hands. It might be necessary to design a protocol even lower power than bluetooth to give the watches even better battery life, and of course a standard charging interface found in hotels and offices would be great.
I think once this happens it will be hard to imagine how we tolerated it any other way. Yes, people get fun and status from their ringtones, but I think we can handle sacrificing that.
The watch could also be a mini-screen for a few other PDA and phone functions. For example, if you use a bluetooth earpiece, you can keep your phone in your pocket or purse, which is really nice, but sometimes you want a bit of display, for example to assist with voice command mode.
(Of course if you know about Voxable, you know I believe phone calls should simply not happen at all at the wrong times, but that’s a different leap.)
Submitted by brad on Thu, 2007-03-08 14:31.
I have written several times before about Peerflix — Now that I’ve started applying some tags as well as categories to my items you can now see all the Peerflix stories using that link — and the issues behind doing a P2P media trading/loaning system. Unlike my own ideas in this area, Peerflix took a selling approach. You sold and bought DVDs, initially for their own internal currency. It was 3 “Peerbux” for new releases, 2 for older ones, and 1 for bargain bin disks.
That system, however, was failing. You would often be stuck for months or more with an unpopular disk. Getting box sets was difficult. So in December they moved to pricing videos in real dollars. I found that interesting because it makes them, in a way, much closer to a specialty eBay. There are still a lot of differences from eBay — only unboxed disks are traded, they provide insurance for broken disks and most importantly, they set the price on disks.
One can trade DVDs on eBay fairly efficiently but it requires a lot of brain effort because you must put time into figuring good bid and ask prices for items of inconsequential price. Peerflix agreed that this is probably a poor idea, so they decided to set the prices. I don’t know how they set their initial prices, but it may have been by looking at eBay data or similar information.
Submitted by brad on Mon, 2007-03-05 15:18.
Hey photo editing programs — I’m looking at you, Photoshop — a lot of you allow people to place text into graphic images, usually as a text layer. Most graphics with text on the web are made this way. Then we export the image as a jpeg or png/gif, flattening the layers so our artful text is displayed. This is how all the buttons with words are made, as well as the title banner graphics on most web sites.
So photo editors, when you render and flatten the layers, take the visible text (you know what it is) and include it in a tag inside the file, such as the EXIF information, possibly as the caption if there isn’t already one. Let us disable this, even for just a single layer, but providing it should be the default.
Then all the web spiders/search engines would be able to find that text. Web page editors could offer that text as a possible “alt” text for the graphic. And the blind would be able to have their web-page readers read to them the text embedded in graphics.
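As a rough illustration of how cheap this would be to implement, here is a sketch using Pillow's EXIF support (tag 270 is the standard ImageDescription field). This models the idea, not Photoshop's actual export pipeline, and the file names are made up:

```python
# Sketch: embed the text that was rendered into an image as its EXIF
# ImageDescription (tag 270), so spiders and screen readers can find it.
# Illustrative only -- not how any real editor's export step works.
from PIL import Image

IMAGE_DESCRIPTION = 270  # standard EXIF tag for a caption/description

def flatten_with_text(src_path, dst_path, rendered_text):
    """Save a copy of the image with the rendered text stored in EXIF."""
    img = Image.open(src_path)
    exif = img.getexif()
    if IMAGE_DESCRIPTION not in exif:   # don't clobber an existing caption
        exif[IMAGE_DESCRIPTION] = rendered_text
    img.save(dst_path, exif=exif)
```

A crawler or web-page editor could then read the same tag back with `Image.open(path).getexif()` and offer it as "alt" text.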
Submitted by brad on Tue, 2007-02-20 19:38.
Recently I opened up a surprising can of worms with a blog post about CitizenRe wondering if they had finally solved the problem of making solar power compete with the electrical grid. At that post you will see a substantial comment thread, including contributions by executives of the firm, which I welcome. At first, I had known little about CitizenRe and the reputation it was building. I thought I should summarize some of the issues I have been considering and other elements I have learned.
CitizenRe’s offer is very appealing. They claim they will build a plant that can make vastly cheaper solar. Once they do, they will install it on your roof and “rent” it to you. You buy all the power it produces from them at a rate that beats your current grid power cost. Your risks are few — you put down a deposit of $500 to $1500 depending on system size, you must cover any damage to the panels, and they offer removal and replacement for a very modest fee if you need to reroof or even move. You lock in your rate, which is good if grid rates go up and bad if grid rates go down or other solar becomes cheaper, but on the whole it’s a balanced offer.
In fact, it seems too good to be true. It’s way, way cheaper than any offering available today. Because it sounds so good, many people are saying “show me.” I want to see just how they are going to pull that off. Many in the existing solar industry are saying that much louder. They are worried that if CitizenRe fails to deliver, all their customers will have been diverted to a pipedream while they suffer financial ruin. Of course, they are also worried that if CitizenRe does deliver, they will be competed out of business, so they do have a conflict of interest.
Here are some of the things that make me skeptical.
Submitted by brad on Fri, 2007-01-12 15:30.
(Note: I have posted a followup article on CitizenRe as a result of this thread.
Also a solar economics spreadsheet.)
I’ve been writing about the economics of green energy and solar PV, and have been pointed to a very interesting company named CitizenRe. Their offering suggests a major cost reduction to make solar workable.
They’re selling PV solar in a new way. Once they go into operation, they install and own the PV panels on your roof, and you commit to buy their output at a rate below your current utility rate. There are few apparent catches, though there are some risks if you need to move (they try to make that easy, and will move the system once for those who do a long term contract). You are also responsible for damage, so you either take the risk of panel damage or insure against it. Typically they provide an underpowered system and insist you live where you can sell back excess to the utility, which makes sense.
But my main question is, how can they afford to do it? They claim to be making their own panels and electrical equipment. Perhaps they can do this at such a better price they can make this affordable. Of course they take the rebates and tax credits which makes a big difference. Even so, they seem to offer panels even in lower-insolation places like New England, and to beat the prices of cheaper utilities which only charge around 8 cents/kwh.
My math suggests that with typical numbers of 2 kWh/peak watt/year, to deliver 8 cents/kwh for 25 years requires an installed cost of under $2/peak watt — even less in the less sunny places. Nobody is even remotely close to this in cost, so this must require considerable reduction from rebates and tax credits.
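The arithmetic can be checked with a simple present-value calculation. The 2 kWh/peak-watt/year and 8 cents/kWh figures are from the post; the 6% discount rate is my own assumption:

```python
# Present value of the revenue one peak watt of panel earns: 2 kWh per year
# sold at $0.08/kWh for 25 years, discounted. The 6% rate is an assumption;
# a higher cost of capital pushes the break-even installed cost even lower.

def pv_revenue_per_watt(kwh_per_wp_year=2.0, price=0.08, years=25, rate=0.06):
    return sum(kwh_per_wp_year * price / (1 + rate) ** y
               for y in range(1, years + 1))

print(round(pv_revenue_per_watt(), 2))  # ~2.05: about $2 per peak watt
```

Undiscounted, the revenue is $4/watt, but nobody installs panels with free money; at any plausible discount rate the installed cost has to come in around $2/peak watt or less, which is the post's point.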
A few other gotchas — if you need to re-roof, you must pay about $500 to temporarily remove up to 5kw of panels. And there is the risk that energy will get cheaper, leaving you locked in at a higher rate since you commit to buy all the power from the panels. While many people fear the reverse — grid power going up in price, where this is a win — in fact I think that energy getting cheaper is actually a significant risk as more and more money goes into cleantech and innovation in solar and other forms of generation.
It’s interesting that they are offering a price to compete with your own local utility. That makes sense in a “charge what the market will bear” style, but it would make more sense to market only to customers buying expensive grid power in states with high insolation (i.e. the southwest).
Even with the risks this seems like a deal with real potential — if it’s real — and I’ll be giving it more thought. Of course, for many, the big deal is that not only do they pay a competitive price, they are much greener, and even provide back-up power during the daytime. I would be interested if any readers know more about this company and their economics.
Update: There is a really detailed comment thread on this post. However, I must warn CitizenRe affiliates that while they must disclose their financial connection, they must also not provide affiliate URLs. Posts with affiliate URLs will be deleted.
Some salient details: There is internal dissent. I and many others wonder why an offer this good sounding would want to stain itself by being an MLM-pyramid. Much stuff still undisclosed, some doubt on when installs will take place.
Submitted by brad on Thu, 2007-01-04 14:21.
A recent Forbes item pointed to my earlier posts on eBay Feedback so I thought it was time to update them. Note also the eBay tag for all posts on eBay including comments on the new non-feedback rules.
I originally mused about blinding feedback or detecting revenge feedback. It occurs to me there is a far, far simpler solution. If the first party leaves negative feedback, the other party can’t leave feedback at all. Instead, the negative feedback is displayed both in the target’s feedback profile and also in the commenter’s profile as a “negative feedback left.” (I don’t just mean how you can see it in the ‘feedback left for others’ display. I mean it would show up in your own feedback that you left negative feedback on a transaction as a buyer or seller. It would not count in your feedback percentage, but it would display in the list a count of negatives you left, and the text response to the negative made by the other party if any.)
Why? Well, once the first feedbacker leaves a negative, how much information is there, really, in the response feedback? It’s a pretty rare person who, having been given a negative feedback is going to respond with a positive! Far more likely they will not leave any feedback at all if they admit the problem was their fault. Or that they will leave revenge. So if there’s no information, it’s best to leave it out of the equation.
This means you can leave negatives without fear of revenge, but it will be clearly shown to people who look at your profile whether you leave a lot of negatives or not, and they can judge from comments if you are spiteful or really had some problems. This will discourage some negative feedback, since people will not want a more visible reputation of giving lots of negatives. A typical seller will expect to have given a bunch of negatives to deadbeat buyers who didn’t pay, and the comments will show that clearly. If, however, they have an above average number of disputes over little things, that might scare customers off — and perhaps deservedly.
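The proposed rule is mechanical enough to express directly. This is a toy model of the suggestion above, not eBay's actual feedback system:

```python
# Sketch of the proposed rule: once one side of a transaction leaves a
# negative, the other side's feedback is blocked entirely, and the negative
# is recorded against the commenter's own profile as a "negative left."

class Transaction:
    def __init__(self):
        self.feedback = {}  # party -> (rating, text)

    def leave(self, party, other, rating, text=""):
        """Try to leave feedback; returns False if blocked by the rule."""
        if self.feedback.get(other, (None,))[0] == "negative":
            return False    # other side already went negative: no response
        self.feedback[party] = (rating, text)
        return True
```

The key property is visible in the return values: the first negative always lands, and any would-be revenge feedback simply bounces.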
I don’t know if eBay will do this so I’ve been musing that it might be time for somebody to make an independent reputation database for eBay, and tie it in with a plugin like ShortShip. This database could spot revenge feedbacks, note the order of feedbacks, and allow more detailed commentary. Of course if eBay tries to stop it, it has to be a piece of software that does all the eBay fetching from user’s machines rather than a central server.
Submitted by brad on Tue, 2006-12-19 19:49.
This week I participated in this thread on Newcomb’s Paradox which was noted on BoingBoing.
A highly superior being from another part of the galaxy presents you with two boxes, one open and one closed. In the open box there is a thousand-dollar bill. In the closed box there is either one million dollars or there is nothing. You are to choose between taking both boxes or taking the closed box only. But there’s a catch.
The being claims that he is able to predict what any human being will decide to do. If he predicted you would take only the closed box, then he placed a million dollars in it. But if he predicted you would take both boxes, he left the closed box empty. Furthermore, he has run this experiment with 999 people before, and has been right every time.
What do you do?
A short version of my answer: The paradox confuses people because it stipulates you are a highly predictable being to the alien, then asks you to make a choice. But in fact you don’t make a choice, you are a choice. Your choice derives from who you are, not the logic you go through before the alien. The alien’s power dictates you already either are or aren’t the sort of person who picks one box or two, and in fact the alien is the one who made the choice based on that — you just imagine you could do differently than predicted.
Those who argue that since the money is already in the boxes, you should always take both miss the point of the paradox. That view is logically correct, but those who hold that view will not become millionaires, and this was set by the fact they hold the view. It isn’t that there’s no way the contents of the boxes can change because of your choice, it’s that there isn’t a million there if you’re going to think that way.
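The payoff structure behind that argument is worth making explicit. Assuming a perfect predictor, the being fills the closed box based on what kind of chooser you are, so the disposition itself determines the outcome:

```python
# Toy payoff table for Newcomb's problem under a perfect predictor: the
# closed box is filled according to your predicted disposition, so being
# a one-boxer is what puts the million there in the first place.

def payoff(one_boxer):
    closed_box = 1_000_000 if one_boxer else 0  # filled by the prediction
    open_box = 1_000
    return closed_box if one_boxer else closed_box + open_box

print(payoff(True), payoff(False))  # 1000000 1000
```

The two-boxer's "dominance" reasoning is sound only if the closed box's contents are independent of the choice, and the premise of the problem is precisely that they are not.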
Of course people don’t like that premise of predictability and thus, as you will see in the thread, get very involved in the problem.
In thinking about this, it came to me that the alien is not so hypothetical. As you may know from reading this blog, I was once administered Versed, a sedative that also blocks your ability to form long term memories. I remember the injection, but not the things I said and did afterwards.
In my experiment we recruit subjects to test the paradox. They come in and an IV drip is installed, though they are not told about Versed. (Some people are not completely affected by Versed but assume our subjects are.) We ask subjects to give a deliberated answer, not to just try to be random, flip a coin or whatever.
So we administer the drug and present the problem, and see what you do. The boxes are both empty — you won’t remember that we cheated you. We do it a few times if necessary to see how consistent you are. I expect that most people would be highly consistent, but I think it would be a very interesting thing to research! If a few are not consistent, I suspect they may be deliberately being random, but again it would be interesting to find out why.
We videotape the final session, where there is money in the boxes. (Probably not a million, we can’t quite afford that.) Hypothetically, it would be even better to find another drug that has the same sedative effects of Versed so you can’t tell it apart and don’t reason differently under it, but which allows you to remember the final session — the one where, I suspect, we almost invariably get it right.
Each time you do it, you think you’re doing it for the first time. At first, though, you probably (and correctly) won’t want to believe in our amazing predictive powers. There is no such alien, after all. That’s where it becomes important to videotape the last session or even better, have a way to let you remember it. Then we can have auditors you trust completely audit the experimenter’s remarkable accuracy (on the final round.) We don’t really have to lie to the auditors; they can know how we do it. We just need a way for them to swear truthfully that on the final round, we are very, very accurate, without conveying to the subject that there are early, unremembered rounds where we are not accurate. Alas, we can’t do that for the initial subjects — another reason we can’t put a million in.
Still, I suspect that most people would be fairly predictable and that many would find this extremely disturbing. We don’t like determinism in any form. Certainly there are many choices that we imagine as choices but which are very predictable. Unless you are bi, you might imagine you are choosing the sex of your sexual partners — that you could, if it were important, choose differently — but in fact you always choose the same.
What I think is that having your choices be inherent in your makeup is not necessarily a contradiction to the concept of free will. You have a will, and you are free to exercise it, but in many cases that will is more a statement about who you are than what you’re thinking at the time. The will was exercised in the past, in making you the sort of mind you are. It’s still your will, your choices. In the same way I think that entirely deterministic computers can also make choices and have free will. Yes, their choices are entirely the result of their makeup. But if they rate being an “actor” then the choices are theirs, even if the makeup’s initial conditions came from a creator. We are created by our parents and environment (and some think by a deity) but that’s just the initial conditions. Quickly we become something unto ourselves, even if there is only one way we could have done that. We are not un-free, we just are what we are.
Submitted by brad on Fri, 2006-12-08 23:34.
Last week I wrote about linux’s problems with dependencies and upgrades and promised some suggestions this week.
There are a couple of ideas to be stolen from (sacrilege!) Windows which could be a start here, though they aren’t my long term solution.
Microsoft takes a different approach to updates, which consists of little patches and big service packs. The service packs integrate a lot of changes, including major changes, into one upgrade. They are not very frequent, and in some ways akin to the major distribution releases of systems like Ubuntu (but not its parent Debian), Fedora Core and others. Installing a service pack is certainly not without risks, but the very particular combination of new libraries and changed apps in a service pack is extensively tested together, as is also the case for a major revision of a linux distribution. Generally installing one of these packs has been a safe procedure. Most windows programs also do not use hand-edited configuration files for local changes, and so don’t suffer from the upgrade problems associated with this particular technique nearly as much.
Submitted by brad on Sat, 2006-12-02 01:13.
We all spend far too much of our time doing sysadmin. I’m upgrading and it’s as usual far more work than it should be. I have a long term plan for this but right now I want to talk about one of Linux’s greatest flaws — the dependencies in the major distributions.
When Unix/Linux began, installing free software consisted of downloading it, getting it to compile on your machine, and then installing it, hopefully with its install scripts. This always works but much can go wrong. It’s also lots of work and it’s too disconnected a process. Linuxes, starting with Red Hat, moved to the idea of precompiled binary packages and a package manager. That was later developed into an automated system where you can just say, “I want package X” and it downloads and installs that program and everything else it needs to run with a single command. When it works, it “just works” which is great.
When you have a fresh, recent OS, that is. Because when packagers build packages, they usually do so on a recent machine, typically fully updated. And the package tools then decide the new package “depends” on the latest version of all the libraries and other tools it uses. You can’t install it without upgrading all the other tools, if you can do this at all.
This would make sense if the packages really depended on the very latest libraries. Sometimes they do, but more often they don’t. However, nobody wants to test extensively with old libraries, and serious developers don’t want to run old distributions, so this is what you get.
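To make the mechanics concrete, here is a minimal sketch of how a package tool walks versioned dependencies and decides what else must be upgraded. Every package name, version number, and the tiny repository structure are invented purely for illustration; real tools like apt or yum are far more elaborate.

```python
# Toy dependency resolver. A package built against the newest libraries
# drags those libraries (and their own dependencies) into the upgrade.
# All names and versions below are made up for illustration.

# repository: package -> (newest version, dependencies as minimum versions)
REPO = {
    "photo-editor": ("2.0", {"libimage": "5.1", "libgui": "3.4"}),
    "libimage":     ("5.1", {"libc": "2.9"}),
    "libgui":       ("3.4", {"libc": "2.9"}),
    "libc":         ("2.9", {}),
}

def needs_upgrade(installed_version, minimum):
    """Compare dotted version strings numerically, not lexically."""
    parse = lambda v: [int(x) for x in v.split(".")]
    return parse(installed_version) < parse(minimum)

def resolve(package, installed, plan=None):
    """Return every package that must be installed or upgraded
    before `package` itself can be installed."""
    if plan is None:
        plan = []
    _version, deps = REPO[package]
    for dep, minimum in deps.items():
        if dep not in installed or needs_upgrade(installed[dep], minimum):
            resolve(dep, installed, plan)
    if package not in plan:
        plan.append(package)
    return plan

# An aging system: libc and libgui are one release behind.
installed = {"libc": "2.7", "libgui": "3.2"}
print(resolve("photo-editor", installed))
# → ['libc', 'libimage', 'libgui', 'photo-editor']
```

Installing one application pulls in an upgrade of both libraries and of libc itself, which is exactly the cascade described above.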
So as your system ages, if you don’t keep it fully up to date, you run into a serious problem. At first you will find that if you want to install some new software, or upgrade to the latest version to get a fix, you also have to upgrade a lot of other stuff that you don’t know much about. Most of the time, this works. But sometimes the other upgrades are hard, or hit a problem you don’t have time to deal with.
However, as your system ages more, it gets worse. Once you are no longer running the most recent distribution release, nobody is even compiling for your old release any more. If you need the latest release of a program you care about, in order to fix a bug or get a new feature, the package system will no longer help you. Running that new release or program requires a much more serious update of your computer, with major libraries and more — in many ways the entire system. And so you do that, but you need to be careful. This often goes wrong in one way or another, so you must only do it at a time when you would be OK not having your system for a day, and taking a day or more to work on things. No, it doesn’t usually take a day — but it might. And you have to be ready for that rare contingency. Just to get the latest version of a program you care about.
Compare this to Windows. By and large, most binary software packages for Windows will install on very old versions of Windows. Quite often they will still run on Windows 95, long ago abandoned by Microsoft. Win98 is still supported. Of late, it has been more common to get packages that insist on 7-year-old Windows 2000. It’s fairly rare to get something that insists on 5-year-old Windows XP, except from Microsoft itself, which wants everybody to need to buy upgrades.
Getting a new program to run on your 5-year-old Linux is very unlikely. This is tolerated because Linux is free: there is no financial reason not to have the latest version of any package. Windows coders won’t make their program demand Windows XP because they don’t want to force you to buy a whole new OS just to run their program. Linux coders forget that the price of the OS is often a fairly small part of the cost of an upgrade.
Systems have gotten better at automatic upgrades over time, but still most people I know don’t trust them. Actively used systems acquire bit-rot over time; things start going wrong. If they’re really wrong you fix them, but after a while the legacy problems pile up. In many cases a fresh install is the best solution, even though a fresh install means a lot of work recreating your old environment. Windows fresh installs are terrible, and only recently got better.
Linux has been much better at the incremental upgrade, but even there fresh installs are called for from time to time. Debian and its children, in theory, should be able to just upgrade forever, but in practice only a few people are that lucky.
One of the big curses (one I hope to have a fix for) is the configuration file. Programs all have their configuration files. However, most software authors pre-load the configuration file with helpful comments and default configurations. The user, after installing, edits the configuration file to get things as they like, either by hand or with a GUI in the program. When a new version of the program comes along, there is a new version of the “default” configuration file, with new comments and new default configuration. Often it’s wrong to keep running your old version, or doing so will slowly build more bit-rot, so your version doesn’t operate as nicely as a fresh one. You have to go in and manually merge the two files.
Some of the better software packages have realized they must divide the configuration — and even the comments — made by the package author or the OS distribution editor from the local changes made by the user. Better programs have their configuration file “include” a normally empty local file, or even better all files in a local directory. This does not allow comments but it’s a start.
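The include-a-local-directory pattern the better packages use can be sketched like this. It’s a toy illustration with hypothetical file names; the point is only that the package-owned default file and the user’s local overrides live in separate files, so an upgrade can replace the defaults without a manual merge.

```python
# Sketch of the "local include directory" configuration pattern: read the
# package-owned default file first, then every file in a conf.d-style
# local directory, with later reads overriding earlier ones. The file
# names and settings below are invented for the demonstration.
import configparser
import glob
import os
import tempfile

def load_config(default_file, local_dir):
    """Load defaults, then apply local overrides in sorted filename order."""
    cfg = configparser.ConfigParser()
    cfg.read(default_file)                                  # shipped defaults
    for path in sorted(glob.glob(os.path.join(local_dir, "*.conf"))):
        cfg.read(path)                                      # local changes win
    return cfg

# Demonstration with throwaway files.
tmp = tempfile.mkdtemp()
default = os.path.join(tmp, "app.conf")
local = os.path.join(tmp, "conf.d")
os.mkdir(local)
with open(default, "w") as f:
    f.write("[server]\nport = 80\nworkers = 4\n")           # package defaults
with open(os.path.join(local, "10-local.conf"), "w") as f:
    f.write("[server]\nport = 8080\n")                      # the user's one change

cfg = load_config(default, local)
print(cfg["server"]["port"], cfg["server"]["workers"])      # prints: 8080 4
```

An upgrade rewrites `app.conf` freely; the user’s `conf.d` files are never touched, so the merge problem disappears.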
Unfortunately the programs that do this are few, and so any major upgrade can be scary. And unfortunately, the more you hold off on upgrading the scarier it will be. Most individual package upgrades go smoothly, most of the time. But if you leave it so you need to upgrade 200 packages at once, the odds of some problem that diverts you increase, and eventually they become close to 100%.
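A quick back-of-envelope shows why batching upgrades compounds the risk. Assume, purely for illustration, that each individual package upgrade has a 0.5% chance of hitting a problem that diverts you:

```python
# If each package upgrade independently has a 0.5% chance of trouble,
# the chance that a batch of n upgrades all go smoothly is 0.995**n.
# The 0.5% figure is invented solely to illustrate the compounding.
p_ok = 0.995
for n in (1, 20, 200, 500):
    print(f"{n:4d} packages: {p_ok**n:.1%} chance of a trouble-free upgrade")
```

Even with a seemingly tiny per-package risk, 200 deferred upgrades leave only about a one-in-three chance of getting through clean, which matches the experience described above.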
Ubuntu, which is probably my favourite distribution, has announced that their “Dapper Drake” release, from mid-2006, will be supported for 3 years for desktop use and 5 years for server use. I presume that means they will keep compiling new packages to run on the older Dapper base, and test all upgrades. This is great, but it’s thanks to the generosity of Mark Shuttleworth, who uses his internet wealth to be a fabulous sugar daddy to the Linux and Ubuntu movements. Already the next release, “Edgy,” is out; it’s newer and better than Dapper, but with half the support promise. It will be interesting to see what people choose.
When it comes to hardware, Linux is even worse. Each driver works with precisely the one kernel it is compiled for. Woe unto you once you decide to support some non-standard hardware in your Linux box that needs a special driver. Compiling a new driver isn’t hard once, but you must do it all over again any time you would like even a slight kernel upgrade. Most users simply don’t upgrade their kernels unless they face a screaming need, like fixing a major bug or buying some new hardware. Linux kernels come out every couple of weeks for the eager, but few are so eager.
As I get older, I find I don’t have the time to compile everything from source, or to sysadmin every piece of software I want to use. I think there are solutions to some of these problems, and a simple first one, an analog of service packs, will be discussed in the next installment.
Submitted by brad on Sun, 2006-11-19 00:58.
I’m not a gamer. I wrote video games 25 years ago but stopped when game creation became more about sizzle (graphics) than steak (strategy). But the story of the release of the Playstation 3 is a fascinating one. Sony couldn’t make enough, so to get them, people camped out in front of stores, or in some cases camped out just to get a certificate saying they could buy one when they arrived. But word got out that people would pay a lot for them on eBay. The units cost about $600, depending on the model, but people were bidding thousands of dollars, even in advance, for those who had received certificates from stores.
It was amusing to read the coverage of the launch at Sony’s own Sonystyle store in San Francisco. There the press got bored as they asked people in line why they were lining up to get a PS3. The answer most commonly seemed to be not a love of gaming, but to flip the box for a profit.
And flip they did. There were several tens of thousands of eBay auctions for PS3s, and prices were astounding. About 20,000 auctions closed; another 25,000 are still running at this time. Some auctions concluded for ridiculous numbers like $110,000 for 4 of them, or a more “reasonable” $20,000 for 5. Single auctions reached as high as $25,000, though in many of these cases it’s bad news for the seller, because the high bidders are people with zero eBay reputation who obviously won’t complete the transaction. In other cases serious bidders will try to claim their bid was a typo. There are some auctions with serious multiple bidders that got to 3 and 4 thousand dollars, but by mid-day today they were all running about $2,000, and they started dropping very quickly. As I watched, within a few minutes they fell from $1,500 to below a thousand. Still plenty of profit for those willing to brave the lines.
It’s interesting to consider what the best strategy for a seller is. It’s hard to predict what form a frenzy like this will take, and when the best price will come. The problem is eBay has a minimum 1 day for the auction, so you must guess the peak 1 day in advance. Since many buyers were keen to see the auction listing showing that the person had the unit in hand, ready to ship, the possible strategy of listing the item before going to get it bore some risks. Some showed scans of their pre-purchase.
The most successful sellers were probably those who picked a clever “buy it now” price which was taken during the early frenzy by people who did not realize how much the price would drop. All the highest auctions (including those with fake buyers) were buy-it-now results. Of course, it’s mostly luck in guessing what the right price was. I presume the buy-it-now/best-offer feature (new on eBay) might have done well for some sellers.
However, those who got a bogus buyer are punished heavily. They can re-list, but must wait a day to sell by auction, and will have lost a bunch of money in that day. If they can find the buyer they might be able to sue. If they are smart, they would re-list with a near-market buy-it-now to catch the market while it’s hot.
Real losers are those who placed a reserve on their auctions, or a high starting bid price. In many cases their auctions will close with no successful bidder, and they’ll sell for less later. Using a reserve or high starting bid makes no sense when you have such a high-demand item. Those paranoid about losing money should have at most started bidding at their purchase price. I can’t think of any reason for a reserve-price auction in this case — or in most other cases, for that matter. Other than with experimental rare products, they are just annoying.
Particularly sad was one auction where the seller claimed to be a struggling single mom who had kids that lucked out and got spots in line, along with pictures of the kids holding the boxes. She set a too-high starting price, and will have to re-list.
Another bad strategy was to do a long multi-day listing.
It’s possible the rarity of these items will grow, as people discover they just can’t get one for their kids for Christmas, but I doubt it.
The other big question this raises is this: could Sony have released the machine differently? Sony obviously left millions on the table here, about 30 to 40 million I would guess. That’s tolerable for Sony, and they might have decided to give it up for the publicity that surrounds a buying craze. But I have to wonder, would they not have been better served to conduct their own auctions, perhaps a giant Dutch auction, for the units, with some allocated at list price by lottery or for those willing to wait in line, so that it doesn’t seem so elitist? (As if any poor person is going to buy a PS3 and keep it if they can make a fast thousand in any event.)
Some retailers took advantage of demand by requiring customers to buy several games with the box, presumably with Sony’s approval. With no control from Sony, all the retailers would be trying to capture this money themselves, which they could easily have done — selling on eBay directly if need be.
I predict that in the future we will see a hot Christmas item sold through something like a Dutch auction, since being the first to do that would generate a lot of publicity. Dutch auctions are otherwise not nearly so exciting. When Google went public through one, the enemies of Dutch auctions worked to make sure people thought it was boring, causing Google to leave quite a bit of money on the table, but far less than they would have left had they used traditional underwriters.
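For the curious, the clearing logic of a uniform-price auction of the kind Google used is simple to sketch: rank bids from high to low, the lowest accepted bid sets the price, and every winner pays that one price. The bidders and numbers below are made up:

```python
# Toy uniform-price ("Dutch auction" in the Google-IPO sense) clearing.
# Bidders and prices are invented for illustration.
def clear_auction(bids, units):
    """bids: list of (bidder, price) single-unit bids.
    Returns (clearing_price, list of winning bidders)."""
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)
    winners = ranked[:units]
    clearing_price = winners[-1][1]   # lowest accepted bid sets the price
    return clearing_price, [name for name, _ in winners]

bids = [("ann", 900), ("bob", 2500), ("cam", 1200),
        ("dee", 700), ("eli", 1500)]
price, winners = clear_auction(bids, units=3)
print(price, winners)   # 1200 ['bob', 'eli', 'cam']
```

Note that bob’s $2,500 enthusiasm costs him nothing extra: he pays the $1,200 clearing price like everyone else, which is part of why these auctions feel less exciting than an eBay bidding war.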
On a side note: if you shop on eBay, I recommend the Mozilla/Firefox/Iceweasel plugin “Shortship,” which fixes one of eBay’s most annoying bugs. It lets you see the total of price plus shipping, and sort by it, at least within one eBay display page.
Submitted by brad on Sat, 2006-10-28 15:59.
In furtherance of my prior ideas on smart power, I wanted to add another one — the concept of backup power.
As I wrote before, I want power plugs and jacks to be smart, so they can negotiate how much power the device needs and how much the supply can provide, and then deliver it.
However, sometimes what the supply can provide changes. The most obvious example is a grid power failure. It would not be hard, in the event of a grid power failure, to have a smaller, low-capacity backup system in place, possibly just batteries. In the event of failure of the main power, the backup system would send messages to indicate just how much power it can deliver. Heavy-power devices would just shut off, but might ask for a few milliwatts to maintain internal state. (I.e., your microwave oven clock would not need an internal battery to retain the time of day and its memory.) Lower-power devices might be given their full power, or they might even offer a set of power modes they could switch to, and the main supply could decide how much power to give to each device.
Of course, devices not speaking this protocol would just shut off. But things like emergency lights need not be their own system — though there are reasons for still having that in a number of cases, since one emergency might involve the power system being destroyed. However, battery backup units could easily be distributed around a building.
In effect, one could have a master UPS, for example, that keeps your clocks, small DC devices and even computers running in a power failure, but shuts down ovens and incandescent bulbs and the like, or puts devices into power-saving modes.
We could go much further than this, and consider a real-time power availability negotiation, when we have a power supply or a wire with a current limit. For example, a device might normally draw 100mw, but want to burst to 5w on occasion. If it has absolutely zero control over the bursts, we may have to give it a full 5w power supply at all times. However, it might be able to control the burst, and ask the power source if it can please have 5w. The source could then accept that and provide the power, or perhaps indicate the power may be available later. The source might even ask other devices if they could briefly reduce their own power usage to provide capacity to the bursting device.
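As a toy illustration of this burst negotiation (no such protocol exists; all the names and wattages here are invented), a supply with a fixed budget could grant a request outright, or first ask other devices to pause:

```python
# Toy sketch of burst negotiation: a supply with a fixed budget grants or
# refuses requests, optionally shedding other loads to make room.
# The class, its methods, and all numbers are hypothetical.
class PowerSource:
    def __init__(self, capacity_w):
        self.capacity = capacity_w
        self.grants = {}                            # device -> watts granted

    def free(self):
        return self.capacity - sum(self.grants.values())

    def request(self, device, watts, sheddable=()):
        """Try to grant `watts` to `device`, asking listed devices to
        pause if that is what it takes. Returns True on success."""
        released = self.grants.pop(device, None)    # re-negotiating its grant
        shed = {}
        for other in sheddable:
            if watts <= self.free():
                break
            if other in self.grants:
                shed[other] = self.grants.pop(other)    # ask it to pause
        if watts <= self.free():
            self.grants[device] = watts
            return True
        self.grants.update(shed)                    # failed: undo the shedding
        if released is not None:
            self.grants[device] = released          # and restore the old grant
        return False

supply = PowerSource(capacity_w=10)
supply.request("sensor", 0.1)
supply.request("dryer", 9)
# The sensor wants a 5 W burst; the dryer agrees to pause briefly.
ok = supply.request("sensor", 5, sheddable=["dryer"])
print(ok, supply.grants)   # True {'sensor': 5}
```

A real system would also negotiate when the paused device gets its power back, but even this minimal exchange captures the "may I please have 5 W" conversation described above.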
For example, a computer that only uses a lot of power when it’s at heavy CPU utilization might well be convinced to briefly pause a high-intensity non-interactive task to free up power for something else. In return, it could ask for more power when it needs it. A clothes dryer, oven, furnace, or other such item could readily take short pauses in its high-power-drain activities — anything that runs on a duty cycle rather than 100% on can do this.
This is also useful for items with motors. A classic problem in electrical design is that things like motors and incandescent lightbulbs draw a real spike of high current when they first turn on. This requires fuses and circuit breakers to be “slow blow” because the current is often briefly more than the circuit should sustain. Smart devices could arrange to “load balance” their peaks. You would know that the air conditioner compressor would simply never start at the same time as the fridge or a light bulb, resulting in safer circuits even though they have lower ratings. Not that overprovisioning for safety is necessarily a bad thing.
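The load-balancing of start-up spikes could be as simple as a coordinator handing out non-overlapping "inrush windows," so no two motors energize at once. A toy sketch, with invented timings:

```python
# Toy coordinator that staggers motor start-ups: each device books a
# start slot, and slots are spaced by the inrush duration so the current
# spikes never overlap. All timings are invented for illustration.
class InrushScheduler:
    def __init__(self, inrush_seconds=2.0):
        self.inrush = inrush_seconds
        self.next_free = 0.0

    def book_start(self, now):
        """Return the earliest time the device may energize its motor."""
        start = max(now, self.next_free)
        self.next_free = start + self.inrush   # reserve the spike window
        return start

sched = InrushScheduler(inrush_seconds=2.0)
# Fridge compressor and air conditioner both want to start at t=100 s.
print(sched.book_start(100.0))   # 100.0 - the fridge starts immediately
print(sched.book_start(100.0))   # 102.0 - the A/C waits out the fridge's spike
```

A two-second delay is invisible to the user, but it means the circuit never sees two inrush spikes stacked on top of each other, which is what permits the lower breaker ratings mentioned above.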
This also would be useful in alternative energy, where the amount of power available changes during the day.
Of course, this also applies to when the price of power changes during the day, which is one application we already see in the world. Many power buyers have time-based pricing of their power, and have timers to move when they use the power. In many cases whole companies agree their power can be cut off during brown-outs in order to get a cheaper price when it’s on. With smart power and real-time management, this could happen on a device by device basis.
These ideas also make sense for power over Ethernet (which is rapidly dropping in price), one of the first-generation smart power technologies. There the amount of power you can draw over the thin wires is very low, and management like this makes sense.