Radio transmitter to solve selfish merge

I have written before about the selfish merge, which is a tricky problem to solve. One lane vanishes, and the merge brings everybody to a standstill. Selfish drivers zoom up the vanishing lane to the very end and are let in by other drivers there, causing the backup. The selfish strategy is the fastest way through the blockage, yet it causes the blockage.

My thinking on Burning Man Exodus made me wonder if we might have a robot signal drivers not with lights but with radio. At the merge point we would place a computer with a radio transmitter, and detectors to measure the speed of traffic in each lane. If traffic flowed at a good speed, it would do nothing. If traffic slowed, signs would light up saying “Tune to and Obey AM 1610. $500 fine for lane changing without clearance.”

The robot would be at the merge point, and would also have traffic lights marked with lane numbers or names.

The radio robot would then move the lanes through the merge. The key is that the robot can tell an entire lane to start moving slowly simultaneously, and to stop simultaneously, even over a long distance. So it can command the left lane to start moving and the others to remain stopped and not to change lanes. When the left lane has emptied, it can command it to stop, and the red light for that lane would go on (clearly visible at the merge point). A camera could record anybody running the red light or changing lanes into that lane as it is emptying. As it is clearing, the radio voice can tell the next lane to prepare to move, and give it the green light and the verbal command to do so. Lower priority would be given to the lane that is vanishing and those stuck in it — they were supposed to do a nice zipper merge a mile back, and are only stuck in it because they didn’t do so. This means that zooming up in the vanishing lane becomes punished rather than rewarded, and as a result, this jam-clearing approach would be needed far less.
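The lane-cycling logic above amounts to a simple round-robin controller. Here is a minimal sketch in Python, assuming hypothetical sensor and broadcast interfaces (a real system would read loop detectors and drive the AM transmitter and lane lights):

```python
# Hypothetical sketch of the lane-cycling controller described above.
# The is_empty() sensor interface is invented for illustration.

def drain_cycle(lanes, vanishing_lane, is_empty):
    """Return (lane, command) pairs that drain each stopped lane in turn.

    `lanes` lists the lane names; the vanishing lane is pushed to the
    back of the order, so queue-jumping is no longer rewarded.
    `is_empty(lane)` reports whether the lane has cleared the merge.
    """
    order = [l for l in lanes if l != vanishing_lane] + [vanishing_lane]
    commands = []
    for lane in order:
        commands.append((lane, "green: start moving slowly, all together"))
        # A real system would poll detectors here until the lane clears.
        while not is_empty(lane):
            pass  # wait for detectors
        commands.append((lane, "red: stop; no lane changes into this lane"))
    return commands
```

Putting the vanishing lane at the back of the order is exactly the punishment-instead-of-reward effect described above.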

The system would have to be experimented with and tuned for the best results.

There is a problem that there has to be some point where the system starts, after which lane changes are forbidden. There is a risk that a jam could be created there rather than at the physical merge point, by people in the vanishing lane trying to get into the continuing lane. This is the parameter we would tune — how much punishment can we give the people who wait too long in the vanishing lane before they start creating a jam a bit further up the road? Perhaps no punishment is needed, just equal treatment.

Of course there are two types of merges. Some are temporary, due to construction. Others are permanent. I am primarily aiming at the temporary ones here though it’s possible that solutions could be found for permanent merge-jams. However, in permanent merges, drivers get to know the parameters and will try to game them. If we move where the merge is it’s hard not to simply move the jam.

There is also the question of the very few cars without radios, and those who can’t understand basic instructions in the languages given on the radio. (The instructions can be said in up to 3 languages, I would think.) Such drivers would have to just follow the other cars, which is doable, even if their reaction time will not be as quick. Drivers who can’t read the signs already face the risk of violating traffic laws, of course.

I also don’t know how much gain you get from everybody being able to stop and start at once on voice command. Obviously moving cars need wider spacing than stopped cars, so you can’t actually start everybody at once like a train. Still, I think it should be possible to drain a blockage faster with the combination of coordinated starting and nobody else being allowed to merge into the lane during the period.

It’s also possible the voice could tell cars in the vanishing lane to simultaneously enter the continuing lane once it has been cleared, but that requires a way to stop oncoming traffic from entering that lane during that process, and it’s easier if all equipment can be placed at the merge point.

Improving Exodus at Burning Man

I’ve created a new blog category “Burning Man” to track my posts on the event. I was using a simpler tag before.

Today I want to talk about the Burning Man Exodus problem, a problem you might find interesting even if you don’t come to Burning Man. This year, even at 8pm Monday there was a long line and a 2 hour wait to get off the playa. Normally by about 5pm there is no wait. With 45,000 or more people this year, and I presume at least 15,000 to 20,000 vehicles, and various chokepoints limiting traffic to 450 cars/hour, how do you drain the playa when everybody wants to go Sunday and Monday? (In addition, with so many now leaving Sunday, Monday becomes less interesting, driving some who could leave Monday to leave earlier.)

It has now been routine to see waits of 5 hours or more at the peak times. I believe a solution should be possible involving some sort of appointment system, where cars are given a set time to leave, and they leave then. If they want to go at a peak time, instead of waiting 5 hours in line, they spend 5 hours in the city, or doing more cleanup, instead of idling their car in a giant line. Not that the line doesn’t become a little bit of a party, but it’s still not like being in camp. And during my exodus on Monday night there was the worst dust storm ever for Exodus: you could not see the car in front of you, or the fence beside you.
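The basic mechanics of such an appointment system are simple. Here is a sketch of a slot allocator in Python, using the rough 450 cars/hour chokepoint figure from above; the interface is invented for illustration:

```python
# Sketch of a departure-slot allocator. The ~450 cars/hour capacity
# comes from the chokepoint figure above; everything else is invented.

def book_departure(requested_hour, bookings, capacity_per_hour=450):
    """Book the earliest hour at or after `requested_hour` with room left.

    `bookings` maps hour -> cars already booked, and is updated in place.
    Returns the hour actually granted.
    """
    hour = requested_hour
    while bookings.get(hour, 0) >= capacity_per_hour:
        hour += 1  # this slot is full; offer the next hour
    bookings[hour] = bookings.get(hour, 0) + 1
    return hour
```

A peak-time driver who can’t get the requested slot is simply bumped to the next hour, and spends the wait in camp instead of in line.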

However, a good system to hand out appointments is hard to design. First of all, we have a mostly volunteer crew, and they don’t have much law enforcement power to stop violators or ticket them. (More participation by the police in this, when the city truly needs them, instead of having them be there for pot busts that nobody wants would be a great thing.)

Here are some of the constraints:  read more »

News: Burning Man burns on Monday

Update: I now have a whole Burning Man area on the blog!

I’ve not been blogging of late because I’m at Burning Man, and while normally I don’t report breaking news in this blog, we just witnessed a strange event. Through accident or arson, the Man was set alight this evening shortly after totality began in the eclipse of the moon.

The man was not loaded with explosives or fireworks as he is before his planned burn, so it was a more sedate affair, and soon fire crews arrived to “save the man” — something we have been asking for in mock protests for years. They did put him out, and he still stands, a bit worse for wear.

I managed to get some photos of the burn….

Efforts to save the man…

The injured man, missing a hand and burnt, under the eclipsed moon…

Coming up: Burning Man, Singularity Summit, Foresight Vision Weekend

Here are three events coming up that I will be involved with.

Burning Man of course starts next weekend and consumes much of my time. While I’m not doing any bold new art project this year, maintaining my 3 main ones is plenty of work, as is the foolishly taken on job of village organizer and power grid coordinator. I must admit I often look back fondly on my first Burning Man, where we just arrived and were effectively spectators. But you only get to do that once.

Right after Burning Man, the Singularity Institute is hosting a Singularity Summit — a futurist conference with a good rack of speakers. Last year they did it as a free event at Stanford and got a giant crowd (because it was free there were no-shows, however, making it sad that some were turned away.) This year there is a small fee, and it’s at the Palace of Fine Arts in San Francisco.

On the first weekend of November, we at the Foresight Institute will host our 2007 Vision Weekend doing half of it in “unconference” style — much more ad-hoc. It will be at Yahoo HQ in Sunnyvale, thanks to their generous sponsorship. More details on that to come.

Blog has been moved to a new server -- notes on shopping for hosting

As I noted earlier, my web site got hacked. As a result, I decided to leave my old hosting company and find a new host. While another VPS would probably have managed, I know a woman in San Jose who runs a hosting company, and she offered me a good deal on a fast dedicated server. I’ll grow into it, and in the meantime you should see much greater performance from my site.

I will make some final commentary on PowerVPS. I left for a variety of reasons, and they were certainly not 100% bad.

  • They were on the other coast, so my ping times to them were 80ms or so. This was no fun for ssh and would have made running things on them impractical. I was surprised that most of the virtual hosting companies with good reputations and prices were not on the west coast.
  • At first I looked for hosting in Canada. This was not simply because I was a Canadian. I thought it might be good to get hosting (in Vancouver) that was not subject to U.S. law. Not because I intend to break U.S. law, but being at the EFF we’ve been fighting some of these laws and it would be good to be on another level. And I’m Canadian. However, all the hosting offerings in Canada I tried that matched my parameters were much more expensive.
  • VPSs are in general a great idea. However, it’s hard to make them swap. That means each VPS duplicates in RAM a copy of apache and mysql and the rest, which is wasteful. Dedicated servers, which can swap, let the big programs with lots of rarely-used pages push those pages out to disk, while the active programs get use of all of the RAM. You can’t overdo this, but it’s pretty handy. One VPS provider, Iron Mountain, does what I have been advocating — gives users access to a virtualized MySQL server on a fast machine, so you don’t have to run your own. Doing this is rare.
  • They would not support Ubuntu, only CentOS. I am running Ubuntu on almost all my machines. I really like the idea that I can just duplicate efforts onto my hosting server, without learning how to do things in a different distro. And that I can compile stuff at home and just move it to the web host. CentOS is the most popular distro in the hosting world, and people have done a lot of fancy things for it (control panels, automated installs etc.) and I understand why a company will decide to only support one distro. But that just means I go to a company that picked the distro I want.
  • PowerVPS screwed up when most of their customers got hacked. The hack wasn’t their fault, as far as I know, but once they realized so many of their customers were compromised, they should have E-mailed all of us immediately. Because they didn’t, I only noticed the attack when they broke some of my scripts. My site redirected unsuspecting users to a frame which might have infected them, which I regret. I should have been told about this as soon as possible.
  • The kicker: When I told them I wanted to replace my server after the hack, they said I had two options. I could back up the server (many gigs of data) and they would erase it and give me a new one with a fresh CentOS 4. Then I could restore the files and rebuild everything, being down during the period I did this. Or I could buy a new server, transfer, and then move the DNS or the IP as desired. They would not temporarily give me the 2nd server and then delete the old one when I was ready. They said too many people took too long, and freaked out if their old server was deleted. Being forced to buy a new server simply sent me on a shopping trip. Stupid, stupid, stupid. Why send your customers on a shopping trip?
  • Another sin: When I went shopping, I looked at the list of special coupon offers various competitors offered. There I saw PowerVPS selling the same server I was paying $85 for at 30% off, as a lifetime discount. Be very careful when you offer new customers a much better price than existing customers get. I hate it, and I will leave you for it.

Now as I say, it was not all bad. Their support was good, and during the recent episode where I was on the homepage, they temporarily upgraded my VPS capacity — which is one of the prime things a VPS can do that a dedicated server can’t. I liked those things but the above mistakes lost a customer.

Let me know if you encounter any problems with the server move.

Updated note: After you change a server’s IP, all users should switch to the new IP after the “time to live” on their past lookup expires, which in my case was set to about 3 hours. However, it turns out many people have broken (or deliberately broken) software that retains stale records for much longer. The leading culprits right now are web spiders, including googlebot, which continue to hit the old address. Actual users doing so are rare. For E-mail, a previous move found that spammers continued to use the old addresses for months after the fact. They presumably kept DNS lookup data on their CD-ROMs, or didn’t want to be subject to attempts to use DNS to block them, or had some other reason.
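The well-behaved version of this can be modeled as a tiny TTL-respecting cache. A toy sketch in Python (the resolver function is a stand-in for a real DNS lookup):

```python
import time

# Toy model of the "time to live" behavior described above: a
# well-behaved client re-resolves a name once the cached record's TTL
# has expired. `resolve(name)` stands in for a real DNS lookup and
# returns (ip, ttl_in_seconds).

class TtlCache:
    def __init__(self, resolve, now=time.time):
        self.resolve = resolve
        self.now = now
        self.cache = {}  # name -> (ip, expiry time)

    def lookup(self, name):
        entry = self.cache.get(name)
        if entry and self.now() < entry[1]:
            return entry[0]           # record still fresh
        ip, ttl = self.resolve(name)  # stale or missing: re-resolve
        self.cache[name] = (ip, self.now() + ttl)
        return ip
```

The broken spiders are, in effect, clients that never expire entries from this cache.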

New list of document classifications

It was an interesting experience watching our team argue before the U.S. Court of Appeals against the government’s claim that the EFF’s lawsuit against AT&T for helping the NSA spy on conversations without warrants should be dismissed because it impinges on state secrets. While the judges probed both sides, I read some signs from their grilling of the U.S. Government’s lawyer that they really have some concern over the important issues. They appear to realize that we can’t leave such programs completely without judicial oversight just because an NSA official declares them to be state secrets.

As one judge said, “are we supposed to bow down” before such declarations?

Anyway, this inspired me to make up a new list of all the different classifications for secret information:

  1. Unclassified (Ordinary documents)
  2. Sensitive (to delay FOIA)
  3. Double Super Secret (For Time Magazine Only)
  4. Treated as Top Secret (Non-secret document from Vice President’s Office)
  5. Leakable (Identity of covert agents married to those causing political trouble)
  6. Secret
  7. Top Secret
  8. SCI (Sensitive Compartmented Information)
  9. Embarrassing (Highest possible classification)

Wanted -- better tools to fill out, sign forms

I get forms to fill out and sign in electronic form all the time now. Often they come as PDFs or Word documents, every so often by fax, and more and more rarely on paper. My handwriting is terrible and of course I no longer have a working typewriter. But none of the various tools I have seen for the job have had a nice easy workflow.

Now some PDFs are built as forms, and in Acrobat and a few other programs you can fill out the form fairly nicely. However, it’s actually fairly rare for people to build their PDFs as fillable forms. When they do, the basic Acrobat tools generate a form which the free Acrobat Reader will let you fill out — but it bars you from saving the form you filled out. You can only print it! Adobe charges more, on a per-form basis, to make savable forms. However, some other readers, like Foxit Reader, will let you save what you fill into forms, even if the creator didn’t pay Adobe.

You still can’t sign such forms in electronic fashion, however. And as noted, many forms of all types aren’t enabled this way. Forms that come as Microsoft Word documents can be filled out in MS Word or the free Open Office writer or abiword. And you can even insert a graphic of a signature, which gets you closer to the target.

Often however, you are relegated to taking a fax, scanned paper document or PDF converted to bitmap, and editing it in a bitmap editor. Unfortunately the major bitmap editors, like Photoshop or GIMP, tend to be aimed entirely at fancy text, and they are dreadful at entering a lot of text on a form. They don’t even make it as easy as quickly clicking and typing.

I encountered a commercial package named “Form Pilot” which is for Windows only but appears to run on WINE. It’s better than the graphics editors, and it does let you click and type easily. However, it has some distance to go. Here’s what I want:

  • Be smart and identify the white spaces on the form, and notably the lines or boxes. Figure a good type size if the default isn’t right.
  • When I click in one of those boxes, or above a line, automatically put me at a nice position above the line for typing. This is not a hard problem, hardly even OCR, just finding borders and lines. Let me use a different click if I want to do precise manual positioning.
  • When I hit TAB or some similar key, advance to the next such box or line in the form.
  • If I type too much in a box, do an automatic shrinking of the text so that it fits.
  • Of course, let me go back and edit my text, and save the document with the text as a different layer so I can go back and change things.
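As noted in the second bullet, finding the lines and boxes is hardly even OCR. A toy sketch of the line-finding part in Python, on a 1-bit bitmap represented as a list of pixel rows (a real tool would threshold the scan and clean up noise first):

```python
def find_underlines(bitmap, min_len=20):
    """Find horizontal lines in a 1-bit image (list of rows, 1 = dark ink).

    Returns (row, start_col, end_col) for each dark run at least
    `min_len` pixels long -- the "lines to type above" from the text.
    A real tool would also merge nearby runs and locate box borders.
    """
    lines = []
    for y, row in enumerate(bitmap):
        run_start = None
        for x, pixel in enumerate(row + [0]):  # sentinel ends a final run
            if pixel and run_start is None:
                run_start = x
            elif not pixel and run_start is not None:
                if x - run_start >= min_len:
                    lines.append((y, run_start, x - 1))
                run_start = None
    return lines
```

With the lines found, clicking near one can snap the text cursor to a nice position just above it, and TAB can advance through the list in order.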


Now the interesting issue of signing. For this, I would want to scan in a sheet of paper which I have placed many signatures on, and have it isolate and store them as a library of signatures.

When I wish to apply a signature, have it pick a random one. In addition, have it make some minor modifications to the signature. Modifications could include removing or adding a pixel here or there along the lines, or adjusting the aspect ratio of the signature slightly. Change the colour of the ink or thickness. There are many modifications which could generate thousands of unique signature forms. If you run out, scan another sheet.

Then make a log of the document I signed and the parameters of the signature that was added, and record that. All this is to assure the user that people who get the document can’t take the signature and copy it again to use on a different document and claim you signed it. You’ll have a log, if you want it, of just what documents were signed. Even without the log you can have assurance of uniqueness and can refute fake signatures easily.
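The variation-and-log scheme might look like this in sketch form. The tweak parameters and log format are invented for illustration; a real tool would also render the modified signature image onto the document:

```python
import hashlib
import random

# Sketch of the signature-variation log described above. The tweak
# parameters and log format are invented; a real tool would apply the
# tweaks to an actual scanned signature image.

def sign_document(doc_bytes, signature_ids, log, rng=random):
    """Pick a signature variant for a document and record it in `log`."""
    params = {
        "signature": rng.choice(signature_ids),   # one of the scanned sigs
        "x_scale": round(rng.uniform(0.97, 1.03), 3),  # slight aspect tweak
        "ink": rng.choice(["black", "dark-blue"]),
    }
    doc_hash = hashlib.sha256(doc_bytes).hexdigest()
    log.append({"doc": doc_hash, **params})
    return params
```

Because the log ties each unique variant to a document hash, a signature lifted from one document and pasted onto another won’t match any logged entry.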

(Refuting forged signatures is actually pretty easy on electronic documents.)

When done, let me save the document or print it, or hook up with a service so that I can easily fax it. The result should be a process of receiving a document or form, filling it out and signing it and sending it back (by fax or email of course) that’s even easier than the original method on paper.

I was surprised, by the way, at how bad all the free bitmap painters I tried were at typing. Gimp and Krita are poor. xpaint and kolourpaint seemed to have the easiest flow even though they are much older and primitive in UI. If you know of programs that do this well, let me know.

RC Blimp for mine exploration

As workers search for trapped miners in Utah, having drilled a 9” hole down to what is hoped to be their area, they plan to use things like sound and detecting CO2 and O2 in the atmosphere to find the miners.

It occurs to me that it should be possible to fit one of those inflatable radio controlled blimps down such a small tube, inflating it after it gets to the bottom. There are models that support small video cameras (and LED lights would not be too hard) especially in the denser air at the bottom of a mine. You would send down a radio relay station as well, and if things were really fancy, a way for the blimp to be told to dock for recharge or exchange of battery packs. (Small butane motors might also provide better power for weight.)

It’s also possible that power could be provided by paying out a wire, if it could generate enough thrust to drag that wire. There is a high risk the wire could get caught except on smooth floors, though. One might imagine paying out wire as far as one can go, and then disconnecting, fully charged, for a modest time on internal power. These blimps are cheap, you could send down several. They could easily sail over debris a ground based robot could not handle, though they could not crawl through small holes without deflating.

Another option would be an enclosed fan hovering robot. Such a robot would be able to go through smaller holes, though it’s hard to imagine remote pilots good enough to send them through such channels with only a video camera to see by. In the future, we may well have hovering robots able to use sonar to keep themselves stable and away from obstacles. They would go on ground when they could, then use bursts of hover to get over obstacles. But the blimp is something that could certainly work in ordinary mine channels today, though only for a limited battery life.

My world's oldest "blog" is 20 years old tomorrow (Aug 7, 2007)

Twenty years ago Tuesday, I created the newsgroup rec.humor.funny as a moderated place for posting the funniest jokes on the net, as chosen by the editor. In light of that anniversary, I have written up a bit of history of the creation of RHF. From there you can also find links to pieces I wrote earlier about the attempt to ban RHF and how RHF led to my creation of ClariNet.

One reason people may pay a bit more attention to this anniversary is that I think RHF, with its associated web site, has a claim to being the world’s longest still-running “blog.” Of course, there is much debate about the origins of blogging, and there are various contenders based on what definition you put to the word.

I provide more detailed examination of those definitional questions and the other contenders on a page about the world’s oldest blog. In short, I contend that a blog is something that is:

  • Serial (a series of publications over time)
  • Done with a personal editorial voice (rather than being news reporting)
  • On the world wide web

While most agree with that last point (since personal journals, published diaries and columns existed long before computers) many forget that when Tim Berners-Lee defined what the web was, he was very explicit about including the many media and protocols he was tying together with HTML and HTTP, including USENET, Gopher, E-mail and the rest. So the web dates back well before HTML, and so does the weblog.

I personally point to mod.ber, a short-lived moderated newsgroup from 1983, as the first blog. It was clearly the boing-boing of its day. But it no longer exists, so RHF may get to claim the title.

As you will know if you have followed RHF, while I continue to publish it and provide the software and systems, I only edited it for the first 5 or so years. After that Maddi Hausmann took over, and in 1995, Jim Griffith took the reins, which he holds to this day. He, however, is ready to retire shortly and we’re looking for a replacement — a note will be posted in RHF and here with more details after the anniversary.

As you’ll see in the histories, the decision to start RHF changed my life in sweeping ways. It was one of those junctures that Clarence from “It’s a wonderful life” could change if he wanted to show me a different path.

Happy 20th Birthday rec.humor.funny.

Yipes, badwared...

A few weeks ago, my site got hacked. The attacker inserted an iframe pointing to a malware site into most of my html pages. That of course is bad, but the story doesn’t end there. (I should of course have upgraded my OS from the ancient one my hosting company gave me years ago, but they don’t really support that, and feel an upgrade consists of rebuilding from scratch.)

I cleaned out the entire site and searched for any remnants of the bad link. Having done this I thought all was well. However, as it turns out, while this domain and my other domains were clear, one domain I don’t use for anything still had a web server on it, pointing at a different directory far from where I keep my own web sites. (I try to never put my stuff in system directories.)

Unfortunately Google, for unknown reasons, looked at that domain, even though there are no links to it anywhere on the web, and found the placeholder page with the hacked link in it. From there it declared the entire site, including this blog, to be a malware site. I think that’s a bug, since there were never any malware links on the blog’s pages — this is a Drupal site, and while the hacker’s script attempts to modify PHP scripts, it did not do so correctly, and just broke them. Running Linux, I didn’t see the malware hacks on the other sites where they made the changes, but found them soon enough and removed them for now.

Alas, that means for some time people have been directed away from this blog by Google. It shows up in search results, but you can’t actually click on the results, and there are warnings that going to the site may harm your computer (you get these warnings even on non-Windows computers, which is reasonable, I guess, if incorrect). I’ve asked the review site Google teams with to confirm the hacks are gone, and now I have to rush out to rebuild the site from a fresh install. Sigh.

Update: Google reacted to the cleanup very quickly and no longer lists the domain as unsafe. I did file a review request — perhaps they are much faster than they let on.

I’m shopping for hosting. I think I will upgrade to dedicated hosting, even though virtualized hosting has its merits. As I wrote before, it would be great if MySQL could be virtualized independently of the OS. The ideal marriage would be a virtualized linux with access to sharable, non-virtualized services like web serving and database. The trick is memory. A typical virtual host will have 16 copies of MySQL and 16 copies of Apache and 16 copies of PHP or similar running on it. Because virtual machines don’t truly understand how much memory they have, or see the paging of the underlying OS, they can’t manage memory as well. But their ability to burst into unused capacity is a big win.

Two year contract required

I’m a big fan of making money by selling services but a disturbing trend is the requirement that customers sign a one or two (or even three) year contract in order to sign up for a service. Such contracts will have a fat termination fee if you want to end the contract early.

This is almost universal for cell phones, and of course it makes some sense when they are selling/giving you a subsidized phone. They need to be sure you will stay with them long enough to make the subsidy (from $200 to $400 if you include dealer kickbacks) back. That’s not so hard, because with many people getting cell phone plans as high as $100/month, they make it back quickly.

However, cell phone companies notoriously require a new contract for just about any change in your calling plan, including simply switching to a new plan they just started offering that you like better. Usually that’s just a one year contract. This makes much less sense. Switching your plan doesn’t cost them anything much aside from a call to customer service. They just want to put you on that contract.

DSL ISPs (and not just the phone company ones) are also notorious here. Some need it to subsidize installation or equipment, but again it’s also done simply to change price plans. In many cases you will also see major discounts offered if you commit to a contract (or of course even better if you just pay 12 months at once.)

I understand the attraction of the company for contracts. They can predict and book revenue. Quantity discounts have always had their reasons.

But they may not realize a serious negative about the contracts. They are a barrier to getting customers. In particular, a demand for a contract (when there is no major subsidy) says to me, “We think that without a contract, we could lose you as a customer. We fear that, if not for the contract, you would leave us.” And that immediately makes me think the same thing. “What is it that makes them think they can’t keep me just by providing good service at good prices?” They already won my business, which is the hardest part. Now all they have to do is keep me happy and they will be very likely to keep it.

This recently backfired for Verizon. I’ve been off contract with them for years, though I had often debated switching to a different plan. Every time they told me I would need to sign a one year contract, and get no subsidy for doing so. (For a 2 year contract, they would have subsidized a new phone, but I wasn’t ready to do that.) So when phones broke I often picked them up on eBay rather than take their 2 year subsidy.

When it came time to really want to change plans, their demand for a new contract made them the same as all their competitors, who will also demand a new contract. And thus there was no particular reason not to switch. They encouraged me to compare all the various offers, all of which require a new contract, and all of which can offer me a phone subsidy with a 2 year contract. And all of which can keep the number, thanks to hard-won number portability. Had they been willing to let me make changes without a contract, I would have had no incentive to go shopping around at the competition. There I learned about much better deals they had, and thus left Verizon.

Perhaps they think they need a contract to keep me from the competition. But truth is, that might work temporarily but it just delays things. When a contract expires, somebody is going to be ahead, be it the competition or be it them, and they have merely delayed the switch, probably locking me into the competition for their efforts.

The best company in the business shouldn’t need a contract to hold me. If the competition is offering a snazzy new subsidized phone for a contract, then my no-contract company can certainly offer that. Or, ideally, just offer me a lower monthly rate if I bring my own phone, with no need for a contract — my choice.

Over time, the public might wake up to realize that the contract is much more expensive than the phone subsidy. A typical data phone requires a plan of $60 to $80 per month, and many are on plans of $100 or more. That’s a $2400 purchase at $100/month, all to get a $200 phone subsidy. Of course most customers plan to buy from somebody over the period, so it makes sense to take the subsidy if you aren’t likely to be changing all the time, which most of us aren’t. But I am curious why all the firms feel these contracts are really in their interest.

Update: I should point out that there are reasons to get warmer to a contract when getting a new phone. Typically there is a $200 subsidy on the phone, and sometimes much more. And quite commonly, the penalty for getting out of the contract is $200, and it often reduces on a pro-rata basis as you move through the life of the contract. As such, there is little reason not to sign the contract if you want that brand-new phone. In addition, there are contract trading sites (where other people will take over your contract for less than the penalty price because they don’t need a phone) to get out even cheaper.
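With a pro-rata reduction, the arithmetic looks like this (a sketch using the $200 figure from above; the function and the 24-month default are illustrative):

```python
def termination_fee(base_fee, months_elapsed, term_months=24):
    """Pro-rated early-termination fee, as described in the update.

    The fee shrinks linearly over the contract term and never goes
    below zero.
    """
    remaining = max(term_months - months_elapsed, 0)
    return base_fee * remaining / term_months
```

So a year into a two-year contract, escaping a $200 penalty costs only half that, which is why taking the subsidy usually wins if you weren’t going to switch soon anyway.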

However, you don’t want a contract without this level of quid pro quo. A contract just to change plans is ridiculous. Some carriers are getting that message.

Real Estate thoughts

A friend asked for advice on selling real estate. I’m no expert, but I thought I would write up some of my thoughts in a blog post for everybody:

  • The national average commission is 5%, though agents always ask for 6%. Do you want to do worse than average?
  • Of course, home prices have soared far beyond inflation, but the realtor cut remains the same. This is the power of the realtor monopoly, which many have tried to break. Someday somebody will. I think Google could do it.
  • A good realtor will usually get you 6% more than you will get on your own, which is how they justify their price. But that doesn’t mean a realtor couldn’t get you that same bump for far less if the market were more competitive.
  • Except in hot seller’s markets, open houses are not to sell your house. They are so the agent (or one of their associates) can meet new buyers, and try to sell them any house, not just yours. In hot markets, houses really do sell via the open house. (Also see below.)
  • A great story. A broker calls his agents in for a meeting. He asks them, “You’re listing a house and you’ve gotten one of the buyers you represent interested in it. Who are you working for?”

    One agent says, “The seller is the one you have a contract with. Work for him.”

    Another agent says, “The buyer is the one who decides to make the offer. Work for her.”

    A third agent says, “Actually, the law in this state requires that you try as hard as you can to represent the interests of both.”

    The broker listens and then growls at them, “You’re all wrong! You’re working for me!”

  • In other words, the agent is working at making a sale happen. I’ve never met a seller’s agent who would not quickly betray their seller to make a sale happen. By “betray their seller” I mean tell the prospective buyer information the seller would normally never reveal, such that they will take less. Some would argue (validly) in some cases that this is in the seller’s interest too.
  • More often than you think, houses end up selling to friends and neighbours. A friend just listed a house and ended up with competing bids from the neighbour 2 doors down and another a few more doors down. People often love the chance to get a bigger house in the same location — no need to relocate kids, learn a new area, etc. You need a neighbourhood that people love, of course.
  • Because of that, consider doing one week of basic “for sale by owner” marketing to let neighbours and friends know you are selling. You will get swarmed by realtors wanting your listing, which is OK if you want them to compete over you. Otherwise tell them you’ve already picked the broker you will list with if the FSBO doesn’t work.
  • You may still want an agent to handle your FSBO. There are agencies that do all the non-marketing parts of real estate transactions for much lower fees, or you can talk a traditional agent into doing it for far less as well.
  • As an alternative, ask for a clause in your contract that says if the house sells to a neighbour or to somebody in your circle of friends, the commission is much less. In general the commission should be much less if your agent also represents the buyer, which would typically be the case here. Threaten to do FSBO (and give the agent nothing) if they won’t accept this clause.
  • Zillow is really cool and useful.
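To see how much the commission rate in those bullets matters in dollar terms, here is a quick sketch. The sale price and the rates are hypothetical, chosen only to illustrate the comparison:

```python
def commission_split(sale_price, rate):
    """Return (commission paid, net to seller) at a given commission rate."""
    commission = sale_price * rate
    return commission, sale_price - commission

price = 800_000  # hypothetical sale price
for label, rate in [("6% asked", 0.06),
                    ("5% national average", 0.05),
                    ("2.5% dual-agency clause", 0.025)]:
    commission, net = commission_split(price, rate)
    print(f"{label:>24}: commission ${commission:,.0f}, net ${net:,.0f}")
```

Even one percentage point off the rate is worth thousands of dollars on a typical house, which is why it can pay to make agents compete for the listing.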

Barry Bonds, please stop at 754

At this point it seems only people in San Francisco want to see Barry Bonds break Aaron’s all-time home-run record of 755. He has 753 right now. In San Francisco, the crowds get on their feet every time he gets on deck, and that was even before he got on the cusp of the record. Outside SF, fans boo him, and it’s commonly believed that should he tie or break the record in Los Angeles or many other cities, he will get booed for doing it. In SF there is a willing suspension of disbelief. We know about the steroids and got over it, and now just want to see what sort of performance enhanced man can deliver.

Bonds is presumably off the steroids now, and his drop in performance shows it. Since he knows he can’t dare be caught with them, he probably will never take them again, and thus not be caught. There will only be the allegations of others.

My view is that the San Francisco Reality Distortion Field will fade, and nobody will speak of Bonds’ upcoming record with anything but cynicism. Record books will all put an asterisk next to it, and not like the one they sometimes put on Roger Maris’ record.

But Bonds still has a chance to show some class. People say he has none, so this is unlikely, but still possible. He should stop hitting home runs, one shy of the record. Or, if he really insists, after tying it. Nobody would doubt that he could have hit another 1 or 2 and broken the record, if not more. He might indeed play another season and break it by a wider margin, though he won’t have any more 70 HR seasons. The die hards will bitterly come to accept he was a user.

But this final act would get a very different reading in the history books, one of going out with some class.

Of course, there is the issue that the team might be screamingly upset. Normally, they would sue him for not fulfilling his very expensive contract. And he would have to retire this year, forgoing several million dollars, so this is not without cost. But fume as they might, I can’t imagine the team actually trying to sue him for a classy act. The PR cost would be far too high.

Update: Well, I guess he didn’t stop at 754, though he is holding off to get 756 at AT&T Park for the home fans. San Diego fans were nicer than I expected for the actual HR, though they booed most other times.

Forbid exploding to tan under the burning sun

Something light hearted. I purchased, some time ago, a small Li-Ion battery for external power for my laptop and other devices. These batteries are great, getting down near $100, weighing very little and, with 110 watt-hours, able to keep a laptop going all day at a conference or over most of a transoceanic flight.

This particular battery, made in China, carries one of the more amusing bad-English warnings on its label, particularly item #3.

Battery label

Google Mobile Maps with traffic

I’m quite impressed with Google’s mobile maps application for smartphones. It works nicely on the iPhone but is great on other phones too.

Among other things, it will display live traffic on your map. And I recently saw, when asking it for directions, that it told me that there would be “7 minutes of traffic delay” along my route. That’s great.

But they missed the obvious extension from that. Due to the delay, 101 is no longer my fastest route. They should use the traffic delay data to re-plot my route, and in this case, suggest 280. (Now it turns out that 280 is always better anyway, because aside from the fact it has less traffic, people drive at a higher average speed on it than 101, and the software doesn’t know that. Normally it’s a win except when it’s raining in the hills and not down by the shore.)

Now I’ve been wanting mapping and routing software to get a better understanding of real road speeds for a while. It could easily get that by taking GPS tracklogs from cabs, trucks and other vehicles willing to give them. It could know the real average speed of travel on every road, in every direction, at any given hour of the day. And then it could amend that with live traffic data. (Among other things, such data would quickly notice map errors, like one-way streets, missing streets, streets you can’t drive etc.)

Now to get really smart, the software should also have a formula for “aging” traffic congestion based on history and day of the week. For example, while there may be slow traffic on a stretch of highway at 6:30 pm, if I won’t get there until 7:30 it should be expected to speed up. As I get closer it can recalculate, though of course some alternate roads (like 101 vs. 280) must be chosen well in advance.
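The routing behaviour described here, where each road's speed is evaluated at the time you would actually reach it, can be sketched as an ordinary shortest-path search with time-dependent edge weights. Everything below (road names, travel times, the rush-hour profile) is invented purely for illustration:

```python
import heapq

def edge_minutes(edge, hour):
    """Travel time in minutes for an edge, as a function of the hour you enter it.
    The 101 segment slows at rush hour; 280 stays roughly constant (made-up numbers)."""
    congested = {"101": {17: 35, 18: 30, 19: 20}}   # hypothetical delay profile
    base = {"101": 15, "280": 22, "onramp": 5}
    return congested.get(edge, {}).get(hour, base[edge])

def fastest_route(graph, start, goal, depart_hour):
    """Dijkstra with time-dependent weights: each edge's cost is evaluated
    at the predicted time you actually reach it."""
    queue = [(0.0, start, [start])]
    settled = {}
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if settled.get(node, float("inf")) <= minutes:
            continue
        settled[node] = minutes
        for nxt, edge in graph.get(node, []):
            hour = (depart_hour + int(minutes // 60)) % 24
            heapq.heappush(queue, (minutes + edge_minutes(edge, hour), nxt, path + [nxt]))
    return float("inf"), []

# Two ways from home to work: via 101 or via 280 (all hypothetical).
graph = {
    "home": [("hwy101", "onramp"), ("hwy280", "onramp")],
    "hwy101": [("work", "101")],
    "hwy280": [("work", "280")],
}
print(fastest_route(graph, "home", "work", 18))  # rush hour: 280 wins
print(fastest_route(graph, "home", "work", 12))  # midday: 101 wins
```

With this toy profile, a 6 pm departure makes 280 the faster choice while at noon 101 wins. A real system would replace the hand-written table with predicted speeds learned from tracklogs, aged by day of week as suggested above.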

And hey, Google Mobile Maps, while you’re at it, could you add bookmarks? For example, I would like to make a bookmark that generates my standard traffic view, and to remember areas I need maps of frequently. And since traffic data can make them differ, bookmark routes such as one’s standard commute. For this, it might make sense to let people bookmark routes in full Google Maps, where you can drag the route to your taste, and save it for use in the mobile product, even comparing the route times under traffic. One could also have the device learn real data about how fast I drive on various routes, though for privacy reasons this should not be stored unencrypted on servers. (We would not want our devices betraying us and getting us speeding tickets or liability in accidents due to speeding, so only averages rather than specific superlimit speeds should be stored.)

Also — there are other places in a PDA/phone with an address, most notably events in the calendar. It would be nice while looking at an event in the calendar (or to-do list) to be able to click “locate on the map.”

We don't live in a 3D world

Ever since the first science fiction about cyberspace (first seen in Clarke’s 1956 “The City and the Stars” and more fully in 1976’s “Doctor Who: The Deadly Assassin”) people have wanted to build online 3-D virtual worlds. Snow Crash gelled it even further for people. 3D worlds have done well in games, including MMORPGs, and recently Second Life has attracted a lot of attention, first for its interesting world and its even more interesting economy, but lately for some of the ways it has not succeeded, such as its use as a site for corporate-sponsored stores.

Let me present one take on why 3D is not all it’s cracked up to be. Our real world is 3D of course, but we don’t view it that way. We take it in via our 2D eyes and our 1.5D ears, and then build a model of its 3D elements good enough to work in it. I will call this 2.5D, because it’s more than 2D but less than 3D. But because we start in two dimensions, and use 2D screens, 3D interfaces on a flat screen are actually worse than ones designed for 2D. Anybody who tried the original VRML experiments that attempted to build site navigation in 3D, where you had to turn your virtual body around in order to use one thing or another, realized that.

Now it turns out the fact that 3D is harder is a good thing when it comes to games. Games are supposed to be a challenge. It’s good that you can’t see everything and can get confused. It’s good that you can sneak up behind your enemy, unseen, and shoot him. Because it makes the game harder to win, 3D works in games.

But for non-games, including second life, 3D can just plain make it harder. We have a much easier time with interfaces that are logical, not physical, and present all the information we need to use the system in one screen we can always see. The idea that important things can be “behind us” makes little sense in a computer environment. And that’s true for social settings. When you sit in a room of people and talk, it’s a bug that some people are behind you and some are in front of you. You want to see everybody, and have everybody see your face, the way the speaker on a podium would. The real 3D world can’t do that for a group of people, but virtual worlds can.

I am not saying 3D can’t have its place. You want and need it for modeling things from the real world, as in CAD/CAM. 3D can be a place to show off certain things, and of course a place to play games.

In making second life, a better choice might have been a 2D interface that has portals to occasional 3D environments for when those environments make sense. That would let those who want to build 3D objects in the environment get the ability to do so. But this would not have been nearly as sexy or as Snow-Crashy, so they didn’t do it. Indeed, it would look too much like an incremental improvement over the web, and that might not have gotten the same excitement, even if it’s the right thing to do. The web is also 2.5D, a series of 2D web pages with an arbitrary network of connections between them that exists in slightly more than 2 dimensions. And it has its 3D enclaves, though they are rare and mostly hard to use.

Another idea for a VR world might be a 3D world with 360 degree vision. You could walk around it but you could always see everything, laid out as a panorama. You would not have to turn, just point where you wish to go. It might be confusing at first but I think that could be worth experimenting with.

Harry Potter series review

For the fun of it, we joined a line at a local independent bookstore last Friday night to get a copy of Harry Potter and the Deathly Hallows. Here I will first review the series without reference to the final book, and then make some remarks about things that are missing from the series that could be viewed as very minor spoilers, because they refer to things that might have taken place in the final book, but did not — but for which knowing they did not will not spoil the book in any meaningful way. However, if you want absolutely no knowledge of this sort, stop reading.

Then I will link at the bottom to a section of the review that is full of spoilers of the final book.

I want to address two issues that play a major and minor role. The lesser one is slavery. While Hermione regularly complains about it, and Harry arranges to manumit one slave elf, the truth of it is that pretty much all the other “good guys” embrace slavery on a deep level. In a way, Hermione’s protest group only makes it worse. The good guys can’t claim they are ignorant of the situation. Dumbledore may be sympathetic to Hermione, but his school still owns many slaves.

It is not just the elves that are enslaved. It is rarely examined, but most classical magic requires the enslavement of intelligent spirits of various kinds. The creatures that live in the portraits seem to be fragments of intelligent minds. But nobody cares.

The big issue is that of nature and nurture. Voldemort’s agenda demands wizards be purebloods, a classic racist/fascist theme. The “good guys” oppose him, but at times only with lip service, for most of them remain highly prejudiced against Muggles. They are never seen to socialize with them, and there are no redeeming Muggle characters in the book. Hermione’s parents are never seen, and while the senior Weasley is fascinated by Muggles, this is considered a strange quirk, and he doesn’t seem to have them around to tea. Muggle acceptance consists largely of not killing or abusing them, and being tolerant of magical people who are born to them. We see references to Muggle studies, but it seems that most of the students learn nothing but magic at Hogwarts. There is no talk of science, human history, literature or the arts. Wizards seem to never be employed in anything but jobs relating to magic — thanks to the slaves and spells that manage most of the work. One wonders if the wizards and witches, out of the context of magic, would be remarkably dull people.

Voldemort’s own Muggle father never makes a lot of sense. Yes, we are told he hates that father and hates Muggles because of him, but why does his band of racist followers find this acceptable? It is suggested they don’t know it, but if so, why was this never released? Certainly Hitler’s Jewish roots were publicized after the war.

But most disturbing is Harry himself. Harry’s foster family — the ones who truly raised him — are shallow, mean and selfish. Remarkably so. And yet Harry’s strongest trait is being the opposite of these things. Harry is kind, giving, brave and true. Why? Clearly not because of his adoptive parents. And not because of upbringing by his genetic parents. There can be only one reason — blood will out. His genetic parents were good people, so he must be too, just as he inherited magical abilities from them. But this is not how it is for people who grow up raised by and abused by people like the Dursleys. Hermione is the only good present day character with Muggle parents. The rest of the major characters, as far as we can tell, except Voldemort, have magical parents.

So the book says one thing about race but does another. For Harry, breeding is what matters. Non-humans are generally hated, and while Hagrid is tolerated by our good guys, he’s an exception, not a rule.

Now, if you’ve read the book you can read on for the review of Harry Potter with spoilers.

Photo server being dugg

Well, this site is at a crawl now because the panorama I assembled of San Francisco in 1971 is on the front page of Digg. If you haven’t seen it before, it’s on the San Francisco page: the panorama of SF from the top of the Bay Bridge in 1971.

My hosting company, Defender Hosting/PowerVPS, has been kind enough to do a temporary upgrade of my server capacity to their top level, though the site’s response is still poor. This is something that virtual hosting can do that you can’t as easily do with dedicated hosting, though virtual hosting has its own costs, mostly in wasted memory.

I think it would be nice if virtual hosting companies sold this “bump” ability as a feature. When your web site gets a lot of load from a place like digg or slashdot, this could ideally be automatically detected, and more capacity made available, either free for rare use as a bonus, or for a fee. Most site owners would be glad to authorize a bit of extra payment for extra capacity in the event that they’re subject to a big swarm of traffic. (The only risk being that you might pay for capacity when under a DOS or spam attack or when being used by crackers or spammers.)

One place this might happen well is in the Amazon EC2 service, which I have yet to really try out. EC2 offers a cloud of virtual servers on demand. In this case, you would want to have a master controller which tracks load on your server, and fires up another virtual server, and then, once it’s up, starts redirecting traffic to it using DNS or proxy techniques, or both. If a web site is highly based on an SQL server, all the copies would need to use the same SQL server (or perhaps need an interesting replication strategy if not read-only) but making SQL servers scale is a well-attacked problem.

Has anybody done this yet with EC2? If not, I expect somebody will soon. The basic concept is fairly simple, though to do it perfectly you would need to do things like copy logs back after the fact and redirect any pages which want to write data to the local server to a common server if one can. For a site with static pages that don’t change due to user activity, such replication should not present too many problems.
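The controller loop itself is the simple part; the hard parts are the cloud and DNS integrations. A minimal sketch, where `launch`, `terminate` and `update_dns` are hypothetical callbacks standing in for real EC2 and DNS-provider calls (none of this is the actual EC2 API):

```python
# Thresholds are illustrative; a real deployment would tune and smooth them.
SCALE_UP_LOAD = 0.8    # average load above this -> add a server
SCALE_DOWN_LOAD = 0.2  # below this -> retire one
MIN_SERVERS, MAX_SERVERS = 1, 8

def control_step(servers, avg_load, launch, terminate, update_dns):
    """One pass of the scaling loop: launch or retire a server based on
    measured load, then re-point DNS (or the proxy) at the current fleet."""
    if avg_load > SCALE_UP_LOAD and len(servers) < MAX_SERVERS:
        servers = servers + [launch()]
    elif avg_load < SCALE_DOWN_LOAD and len(servers) > MIN_SERVERS:
        terminate(servers[-1])
        servers = servers[:-1]
    update_dns(servers)
    return servers

# A dry run with stub callbacks instead of real cloud calls:
counter = iter(range(100))
servers = ["srv-0"]
servers = control_step(servers, 0.95,
                       lambda: f"srv-{next(counter) + 1}",
                       lambda s: None, lambda s: None)
print(servers)  # a second server was added under heavy load
```

A production version would smooth the load signal over several samples to avoid flapping, which is also the natural place to detect and ignore the DOS-style spikes mentioned above rather than paying to scale into them.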

RIP Jim Butterfield

In 1978, after finally saving up enough money, I got myself a Commodore PET computer. I became immersed in it, and soon was programming all sorts of things, and learning assembler to make things go really fast. I soon discovered the Toronto PET Users Group, which grew over time to be perhaps the most prominent Commodore group in the world.

A big reason for that was the group’s star attraction, a middle-aged man with a great deep speaking voice and a talent for writing and explaining computers to newcomers. That man was Jim Butterfield. His talks at meetings were the highlight for many members, and he did both beginner’s talks and fairly high level ones. Jim had been working on reverse engineering the OS (really BIOS) of the PET, and one of my early cute hacks was a very simple loop that copied the computer’s “zero page” onto the screen at every vertical refresh (i.e., 60 times/second). The PET had characters for all 256 bytes, so this was like a live window into the computer’s guts, even beyond das blinkenlights found on mainframes. You could play with the computer and actually watch everything change before you. For his reverse engineering goals, Jim loved the little program and promoted it and we became friends.

Later, Jim would be hired to write the manuals for some of my software projects, including my set of programming tools known as POWER. I’m sure his name on the manual helped sell the product as much as mine did. He was the Commodore world’s rockstar and father figure at the same time. We were only in occasional touch after I left Toronto and then Canada, but the incredible longevity of Pet and C64 hacking has kept his name in people’s minds. He had a sense of humour, charm and love that is rarely found in a technical guru.

Cancer finally got him on June 29th. There’s a bit more at the TPUG page.

You can see this rather embarrassing advertisement that was published to sell software written by myself, Jim and fellow Mississauga software author Steve Punter with a picture of the 3 of us dressed as football players.

Should we allow relative's DNA matching to prove innocence?

Earlier I wrote about the ability to find you from a DNA sample by noting it’s a near match with one of your relatives. This is a concern because it means that if relatives of yours enter the DNA databases, voluntarily or otherwise, it effectively means you’re in them too.

On a recent 60 Minutes segment on the topic, they told the story of Darryl Hunt, who had been jailed for rape and murder. It wasn’t clear to me why, but this was done even though his blood type did not match the rapist’s DNA. Even after DNA testing improved and the non-match was better confirmed, he was still kept in jail, because he was believed to be the murderer, if not the rapist, i.e., an accomplice.

Later, they did a DNA search on the rapist’s DNA and found his brother in the database, who had been entered due to a minor parole violation. So they interviewed the brothers of the near-match and found Willard Brown, who turned out to be the rapist. Once it was clear that Hunt was not an associate of the actual rapist, he was freed after 19 years of false imprisonment.

The piece also told the story of another rapist, who had raped scores of women and stolen their shoes as souvenirs, but had become a cold case. He was caught because his sister was in a DNA database due to a DUI.

Now much of our privacy law is based on having your own private data not seized and used against you without probable cause. It’s easy to answer the case of the shoe rapist. There are a wide variety of superior surveillance tools we could allow the police to use, and they would help them catch criminals, and in many cases thus prevent those criminals from committing future crimes. But we don’t give the police those tools, deliberately, because we don’t want a world where the government has such immense surveillance power. And a large part of that goal is protecting the innocent. Our rules that allow criminals to walk free when police use improper evidence gathering and surveillance to catch them are there in part to keep the police from using those powers on the innocent.

But the innocent man who was freed presents a more interesting challenge. Can we help him, without enabling 1984? In considering this question, I asked, “What if we allowed DNA near matches to be used only when they would prove innocence?” Of course, in Hunt’s case, and many others, the innocence is proven by finding the real guilty party.

So what if, in such cases, it was ruled that while they might find the guilty party, they could not prosecute him or her? And further, that any other evidence learned as a result was considered fruit of the poisonous tree? That’s a pretty tough rule to follow, since once the police know who the real perpetrator is, this will inspire them to find other sorts of evidence that they would not have thought to look for before, and they will find ways to argue that these were discovered independently. It might be necessary to impose a stronger standard, and simply give immunity to the real perpetrator if sufficient time has passed since the crime to declare the case cold.

Setting out the right doctrine would be difficult. But if it frees innocents, might it be worth it?
