Reminder, get your credit card foreign exchange settlement tonight

Just a reminder: if you purchased things outside the USA with credit cards or used foreign ATMs, the companies gouged you on exchange rates, and lost a class action case over it. You can go to the CCF Settlement page to fill out the form tonight; just a few hours are left. Your options are:

  • Just get a plain $25
  • Report how many days you were outside the USA from 1996 to 2006 (that’s 216 days for me). I’m guessing you might get back $2/day or so, but nobody knows.
  • Actually calculate all your foreign transactions — only practical if you have an accounting system that tracked them. Get 1% to 3% of them back.

Most people are going with #2 because of how much work #3 will be. This is typical in class actions. Clearly the credit card companies know exactly how many transactions they charged you foreign exchange on, and could calculate this for you, but they arranged a settlement that worked the other way. The lawyers get their fees, though.

(I should note that the EFF has done one class action and is doing another, and learned how hard it is to get something that’s really good for the plaintiffs. However, as a civil rights foundation, we really are highly interested in the punitive nature of these cases, and any fees we get are plowed right back into more civil rights work.)

You probably got a settlement form in the mail. If you kept it, it has a number that you can key in to make this very easy. If you didn’t, you may have to disclose some info and might decide not to do so. For me, they already had most of my info from the CC databases; I just entered my refund ID #, my days outside the USA, and answered some questions about the typical purposes of my trips. Much easier than is typical.

Guarantee CPM if you want me to join your ad network

If you run a web site of reasonable popularity, you probably get invitations to sign up for ad networks from time to time. They want you to try them out, and will sometimes talk a great talk about how well they will do.

I always tell them “put your money where your mouth is — guarantee at least some basic minimum during the trial.”

Most of them shut up when I ask for that, indicating they don’t really believe their own message. I get enough of these invitations that I wrote a page outlining what I want, and why I want it — and why everybody should want it.

If you have a web site with ads, and definitely if you have an ad network, consider reading what I want before I’ll try your ad network.

Making RAID easier

Hard disks fail. If you prepared properly, you have a backup, or you swap out disks when they first start reporting problems. If you prepare really well, you have offsite backup (which is getting easier and easier to do over the internet).

One way to protect yourself from disk failures is RAID, especially RAID-5. With RAID, several disks act together as one. The simplest protective RAID, RAID-1, known as mirroring, just has 2 disks which work in parallel. Everything you write is copied to both. If one fails, you still have the other, with all your data. It’s good, but twice as expensive.

RAID-5 is cleverer. It uses 3 or more disks, and uses error correction techniques so that you can store, for example, 2 disks worth of data on 3 disks. So it’s only 50% more expensive. RAID-5 can be done with many more disks — for example with 5 disks you get 4 disks worth of data, and it’s only 25% more expensive. However, having 5 disks is beyond most systems and has its own secret risk — if 2 of the 5 disks fail at once — and this does happen — you lose all 4 disks worth of data, not just 2 disks worth. (RAID-6, for really large arrays of disks, survives 2 failures but not 3.)
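To see why a single lost disk is recoverable, here is a minimal sketch of RAID-5-style parity in Python. This only illustrates the XOR arithmetic; real RAID-5 stripes data and rotates the parity block across all the disks.

```python
# Minimal sketch of RAID-5-style parity: N-1 data blocks plus one XOR
# parity block. Any single lost disk can be rebuilt by XORing the rest.

def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"disk one", b"disk two"]   # 2 disks worth of data...
parity = xor_blocks(*data)          # ...plus parity stored on a 3rd disk

# Disk 0 dies; rebuild its contents from the survivor and the parity.
rebuilt = xor_blocks(data[1], parity)
assert rebuilt == data[0]
```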

Now most people who put in RAID do it for more than data protection. After all, good sysadmins are doing regular backups. They do it because with RAID, the computer doesn’t even stop when a disk fails. You connect up a new disk live to the computer (which you can do with some systems) and its contents are rebuilt from the working disks, and you never miss a beat. This is pretty important with a major server.

But RAID has value to those who are not in the 99.99% uptime community: those who are not good at doing manual backups, but who want to be protected from the inevitable disk failures. Today it is hard to set up, or expensive, or both. There are some external boxes like the ReadyNAS that make it reasonably easy for external disks, but they don’t have the bandwidth to be your full time disks.

RAID-5 on old IDE systems was hard; they usually could talk to only 2 disks at a time. The new SATA bus is much better, as many motherboards have 4 connectors, though soon one will be required by Blu-ray drives.

Advice on what digital camera to buy

I do enough photography that people ask me for advice on cameras. Some time ago I wrote an article about what lenses to buy for a Canon DSLR, which has turned out to be fairly popular. The thrust of that article, by the way, is to convince you that there is only minimal point in buying a DSLR that can change lenses and getting only one lens for it, even if you plan to get another lens later (after your camera has depreciated plenty without using its real abilities).

However, many people come with the higher level question of which digital camera to get. There are many cameras, and lots of right answers, but hopefully I give a few in “What Digital Camera Should I Buy?”

Here, the advice has some specifics and some generalities. Both Canon and Nikon are good, but stick with the major brands so you get accessories and an aftermarket on eBay. And the answer, if you are serious about your pictures, may be to buy more than one. We’ve got three — plus another 2 we don’t use.

Data Deposit Box pros and cons

Recently, I wrote about the data deposit box, an architecture where applications come to the data rather than copying your personal data to all the applications.

Let me examine some more of the pros and cons of this approach:

The biggest con is that it does make things harder for application developers. The great appeal of the Web 2.0 “cloud” approach is that you get to build, code and maintain the system yourself. No software installs, and much less portability testing (browser versions) and local support. You control the performance and how it scales. When there’s a problem, it’s in your system so you can fix it. You design it how you want, in any language you want, for any OS you want. All the data is there, and there are no rules. You can update the software at any time; the only pieces outside your control are the user’s browser and plugins.

The next con is the reliability of users’ data hosts. You don’t control them. If a user’s data host is slow or down, you can’t fix that. If you want the host to serve data to their friends, it may be slow for other people. The host may not be located in the same country as the person getting data from it, making things slower.

The last con is also the primary feature of data hosting. You can’t get at all the data. You have to get permissions, and do special things to get at data. There are things you just aren’t supposed to do. It’s much easier, at least right now, to convince the user to just give you all their data with few or no restrictions, and just trust you. Working in a more secure environment is always harder, even if you’re playing by the rules.

Those are pretty big cons. Especially since the big “pro” — stopping the massive and irrevocable spread of people’s data — is fairly abstract to many users. It is the fundamental theorem of privacy that nobody cares about it until after it’s been violated.

But there’s another big pro — cheap scalability. If users are paying for their own data hosting, developers can make applications with minimal hosting costs. Today, building a large cloud app that will get a lot of users requires a serious investment in providing enough infrastructure for it to work. YouTube grew by spending money like water for bandwidth and servers, and so have many other sites. If you have VCs, it’s relatively inexpensive, but if you’re a small-time garage innovator, it’s another story. In the old days, developers wrote software that ran on users’ PCs. Running the software didn’t cost the developer anything, but trying to support it on a thousand different variations of the platform did.

With a data hosting architecture, we can get the best of both worlds. A more stable platform (or so we hope) that’s easy to develop for, but no duty to host most of its operations. Because there is no UI in the data hosting platform, it’s much simpler to make it portable. People joked that Java became write-once, debug-everywhere for client apps, but for server code it’s much closer to its original vision. The UI remains in the browser.

For applications with money to burn, we could develop a micropayment architecture so that applications could pay for your hosting expenses. Micropayments are notoriously hard to get adopted, but they do work in more restricted markets. Applications could send payment tokens to your host along with the application code, allowing your host to give you bandwidth and resources to run the application. It would all be consolidated in one bill to the application provider.

Alternately, we could develop a system where users allow applications to cache results from their data host for limited times. That way the application providers could pay for reliable, globally distributed resources to cache the results.

For example, say you wanted to build Flickr in a data hosting world. Users might host their photos, comments and resized versions of the photos in their data host, much of it generated by code from the data host. Data that must be aggregated, such as a search index based on tags and comments, would be kept by the photo site. However, when presenting users with a page filled with photo thumbnails, those thumbnails could be served by the owner’s data host, but this could generate unreliable results, or even missing results. To solve this, the photo site might get the right to cache the data where needed. It might cache only for users who have poor hosting. It might grant premium features to those who provide their own premium hosting, since they don’t cost the site anything.
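Here is a sketch of that fallback logic. The URL layout and the caching-permission flag are invented for illustration; nothing here is a real photo-site API.

```python
# Hypothetical sketch: serve a thumbnail from the owner's data host,
# falling back to the photo site's own cache when the host is slow or
# down. The URL layout and may_cache permission are assumptions.
import requests

CACHE = {}  # photo_id -> cached thumbnail bytes

def get_thumbnail(photo_id: str, host_url: str, may_cache: bool) -> bytes:
    try:
        resp = requests.get(f"{host_url}/thumbs/{photo_id}", timeout=0.5)
        resp.raise_for_status()
        if may_cache:                 # the user granted caching rights
            CACHE[photo_id] = resp.content
        return resp.content
    except requests.RequestException:
        # Owner's host is unreliable; use our copy if we may keep one.
        if may_cache and photo_id in CACHE:
            return CACHE[photo_id]
        raise
```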

As such, well funded startups could provide well-funded quality of service, while no-funding innovators could get going relying on their users. If they became popular, funding would no doubt become available. At the same time, if more users buy high quality data hosting, it becomes possible to support applications that don’t have and never will have a “business model.” These would, in effect, be fee-paid apps rather than advertising or data harvesting funded apps, but the fees would be paid because users take on their own hosting costs.

And that’s a pretty good pro.

Gaeta's Transsexual Lament (and Guess What's Coming)

The most recent episode, “Guess What’s Coming to Dinner,” reveals what keeps the fans coming back. While one cliffhanger would be enough for any show, BSG once again gives us several at the same time at the close of the episode. Where has the hybrid jumped to? Is Natalie dead? What happens to Sharon? What happens to the alliance? And what is Gaeta’s song about?

This episode did fall short a bit on plot consistency. There was no reason for the Cylon base ship to jump with the Demetrius. The Demetrius should have jumped in first, cleared the situation and then had the Cylon ship appear a bit away from the fleet. No drama of course, but this, combined with the bad radio (every radio’s bad, even on the raptor?), was too much plot device. The other annoying plot device is the universal resurrection hub. It makes no sense — though it does prove the Cylons have FTL radio — other than as a plot device. For something so valuable, I would certainly have a backup.

But more to the point, attacking this hub has only modest military value, though great revenge value. Why? The fleet never plans to engage the Cylons again. And there’s no sign that the loss of this hub (until they can rebuild it) would mean they can’t make more Cylons, just that they can’t make more Cylons with downloaded minds of killed Cylons. The main military value is that perhaps it would make raiders more timid if they have to fight them. But they don’t plan to. The only reason they fought them recently was because the Cylons, for unexplained reasons, were able to set an ambush at the Ionian nebula, and some unexplained pulse shut down fleet FTL. This still doesn’t make much sense, but otherwise the fleet plan is to get far away from Cylons, and emergency jump if they show up.

Not to make them really, really angry.

Now onto Gaeta’s song. They play it so much in this episode it is hard to imagine it doesn’t mean something. The composer has a lot about the song on his blog, including the lyrics:

Alone she sleeps in the shirt of man
With my three wishes clutched in her hand
The first that she be spared the pain
That comes from a dark and laughing rain
When she finds love may it always stay true
This I beg for the second wish I made too
But wish no more
My life you can take
To have her please just one day wake

Right now I am still in the Baltar camp, but boy, this song sure does sound like it’s talking about a female final cylon. Sleeping “in the shirt of man?” Sounds like a sleeper final 5 member, still in shadow. Being spared the pain? Sounds like the redemption that only comes “in the howl of terrible suffering.” And wishing that she will wake up? That he is willing to die for her to awake?

Now, if the final Cylon is a woman, how could Gaeta be the final Cylon? And if he’s not the final Cylon, who would have put a song about her into his head?

No simple answers here, but one to consider: Gaeta is or was a woman. He’s always been viewed as a somewhat effeminate character, and indeed Jamie Bamber declared Gaeta to be gay in an interview. Could he be a transsexual of some sort? This would explain why her awakening would end his life, in a way.

  • He does have a dark secret, one Baltar whispered to him that made him try to kill Baltar (Podcast notes suggest this was a fragment of a deleted plotline, however.)
  • Surely his military doctors would know this, and thus his commander?
  • Perhaps they do know, and in the fleet this is no big whoop. (But then it can’t be a dark secret.)
  • He really, really didn’t want to be unconscious for the amputation.

Another alternative: He is the final Cylon, who is at heart female. I noted earlier that we don’t know who the Final Five were in previous incarnations, and that perhaps they don’t always have the same body each time; perhaps they are sometimes of different sexes.

This idea has its own problem: D’Anna saw the “opera house” final five, and she recognized them. So those copies of the final 5 (who are aware of their nature) have the same bodies as the current ones. This makes it harder, especially if the final Cylon was the object of her apology, for Gaeta to have a different body.

As I’ve noted, my current rationale for the sleeper agents is that the Final Five were once humans from Earth, from our very century. They transformed (uploaded) into machine form, but don’t want to lose touch with their humanity. To preserve this, they regularly make copies of themselves who live fully as humans, and then merge their minds and memories to keep themselves more human. Under this theory, they might well live as different humans, and different sexes.

Now while some choose Gaeta as the final Cylon because they take the “Last Supper” clue (that the last Cylon is not in the photo, leaving just a few candidates) at face value, I still don’t like it. Mostly because I don’t think Gaeta would make the proper “holy shit” moment that the unmasking of the final Cylon must be by dramatic rules. And because he shows no sign of being “in the shadow, hoping for redemption that will only come in the howl of terrible suffering.” Baltar is still the top candidate for this clue.

There is another clue about a woman. The First Hybrid said:

Soon there will be four, glorious in awakening, struggling with the knowledge of their true selves. The pain of revelation bringing new clarity and in the midst of confusion, he will find her.

This pertains to the awakening of the four, not the one, at least in the context in which it is said. In that case the “her” seems to be Foster, and the “he” could be Tyrol or Baltar. But it’s pretty vague and could mean anything, or it could indeed refer to the final Cylon in some way, as a her.

A near-ZUI encrypted disk, for protection from Customs

Recently we at the EFF have been trying to fight new rulings about the power of U.S. customs. Right now, it’s been ruled that they can search your laptop, taking a complete copy of your drive, even if they don’t have the normally required reasons to suspect you of a crime. The simple fact that you’re crossing the border gives them extraordinary power.

We would like to see that changed, but until then what can be done? You can use various software to encrypt your hard drive — there are free packages like TrueCrypt, and many laptops come with this as an option — but most people find having to enter a password every time they boot to be a pain. And customs can threaten to detain you until you give them the password.

There are some tricks you can pull, like having a special inner drive with a second password that they don’t even know to ask about. You can put your most private data there. But again, people don’t use systems with complex UIs unless they feel really motivated.

What we need is a system that is effectively transparent most of the time. However, you could take special actions when going through customs or otherwise having your laptop be out of your control.

A Skype Webcam Mother's Day Brunch

A brunch was planned for my mother’s house on Sunday, but being 2,500 miles distant, I decided to try to attend by videoconference. Recently Skype has started supporting what it calls a “high quality” videoconference, which is 640x480 at 24 to 30 frames per second. At its base, that’s a very good resolution, slightly better than broadcast TV.

This requires fairly modern hardware, which my mother doesn’t have. It needs a dual-core processor to be able to compress the video in real time, and a decently fast processor to decompress it. It wants 384Kbps of upstream bandwidth, but ideally even more, which in theory she has but not always. It demands Windows XP. And artificially it demands one of three of Logitech’s newest and most expensive webcams: the Orbit AF, the Quickcam Pro for Notebooks, or the Pro 9000 for desktops. These are the same camera in 3 packages — I took the Orbit AF, which also includes a pan/tilt motor.

Skype’s decision to only work with these 3 cameras presumably came from a large kickback from Logitech. Admittedly these are very nice webcams. They are true-HD webcams that can capture natively at 1600x1200. They are sharp and better in low light than most webcams, and they come with a decent built-in microphone that appears as a USB audio device — also good. But they aren’t the only cameras capable of a good 640x480 image, including many of Logitech’s older high-end webcams. They retail for $100 or more, but via eBay sellers I got the Orbit AF for about $75 shipped, and the Pro for Notebooks shipped quickly within Canada for $63. Some versions of Skype allow you to hack its config file to tell it to do 640x480 with other quality cameras. That is easy enough for me, but I felt it was not something to push on the relatives quite yet. On the Mac it’s your only choice.

Testing on my own LAN, the image is indeed impressive when bandwidth is no object. It is indeed comparable to broadcast TV. That’s 4 times the pixels and twice the framerate of former high-end video calls, and 16 times the pixels of what most people are used to. And the framerate is important for making the call look much more natural than older 10fps level calls.

The Final Five from Earth and the Watchtower

The big confirmation in Faith was the line from the Hybrid: “The missing 3 will give you the 5 from the home of the 13th.” While there is still some potential in the minds of some viewers that the 13th will turn out to be something other than the “13th tribe,” this seems to confirm what other clues have been saying for quite some time in the show: the Final Five are from Earth.

While this was not news to readers of this blog, I did find it a bit interesting that she referred to the “home of the 13th” because it remains my contention that there never was a 13th tribe. That “the 13th tribe” is really a mythologized name for the people of the homeworld, who never were a tribe, per se. But since the scrolls wish to hide the true story and the origin of the Kobolians, the authors gave them the name of a tribe and a story. And indeed, since the 12 tribes all have names from the Earth zodiac, and the supposed 13th tribe “left” for Earth 2,000 years before the exodus from Kobol, this makes a lot of sense. If the 13th tribe existed and had a name, it is not from the zodiac (No, Ophiuchus doesn’t count) and it’s really the first tribe. But you can’t call them that without leaking the truth.

However, the Hybrid’s use of “the 13th” suggests perhaps more reality for this tribe. It’s possible there was a tribe of Kobolians who did a return expedition to Earth, though I am not quite sure what that explains in the plot. We could have the Final Five being from Earth in several ways. They could have originated in a repopulated Earth, for example.

However, the plot that makes the most sense has the Final Five originating on the real Earth, some time in the not too distant future, and playing a part in the 3 cycles of human/AI war, exodus and resettlement.

In fact, while I did not suggest it seriously at first, I considered it a cute plot point to suggest the Final Five were in fact once ordinary humans who came of age in the late 20th century and then uploaded into machine form some time in the 21st. And as 20th century humans, they could have found that “All Along the Watchtower” was a favourite song of the group, and thus programmed it to be the “wakeup song” used when it is time for sleeper copies of themselves, planted among the regular humans, to become aware of what they are.

Are botnets run by spy agencies?

A recent story today about discussions for an official defense botnet in the USA prompted me to post a question I’ve been asking for the last year: are some of the world’s botnets secretly run by intelligence agencies, and if not, why not?

Some estimates suggest that up to 1/3 of PCs are secretly part of a botnet. The main use of botnets is sending spam, but they are also used for DDOS extortion attacks and presumably other nasty things like identity theft.

But consider this — having remote control of millions of PCs, and a large percentage of the world’s PCs, seems like a very tempting target for the world’s various intelligence agencies. Most zombies are used for external purposes, but it would be easy to have them search their own disk drives for interesting documents, sniff their own LANs for interesting unencrypted traffic, or use their inside position to get past firewalls.

Considering the billions that spy agencies like the NSA, MI6, CSEC and others spend on getting a chance to sniff signals as they go over the wires, being able to look at the data all the time, any time, as it sits on machines must be incredibly tempting.

And if the botnet lore is to be accepted, all this was done using the resources of a small group of young intrusion experts. If a group of near kids can control hundreds of millions of machines, should not security experts with billions of dollars be tempted to do it?

Of course there are legal/treaty issues. Most “free nation” spy agencies are prohibited from breaking into computers in their own countries without a warrant. (However, as we’ve seen, the NSA has recently been relieved of this restriction, and we’re suing over that.) However, they are not restricted in what they do to foreign computers, other than by the burdens of keeping up good relations with our allies.

However, in some cases the ECHELON loophole may be used, where the NSA spies on British computers and MI6 spies on American computers in exchange.

More simply, these spy agencies would not want to get caught at this, so they would want to use young hackers building spam networks as a front. They would be very careful to ensure that the botnet could not be traced back to them. To keep it legal, they might even just not take information from computers whose IP addresses or other clues suggest they are domestic. The criminal botnet operators could infect everywhere, but the spies would be more careful about where they got information and what they paid for.

Of course, spy agencies of many countries would suffer no such restrictions on domestic spying.

Of all the spy agencies in the world, can it be that none of them have thought of this? That none of them are tempted by being able to comb through a large fraction of the world’s disk drives, looking for bad guys and doing plain old espionage?

That’s hard to fathom. The question is, how would we detect it? And if it’s true, could it mean that spies funded (as a cover story) the world’s spamming infrastructure?

Panorama of Marienplatz, München, Germany

Here’s my latest assembled panorama, of the main square of Munich, known as Marienplatz, taken from the St. Peter’s Church bell tower just to the south.

This is a 360 degree shot, taken just after sunset. It’s a very technically difficult panorama, and as such not perfect. First of all, a tripod is not practical at the top of this tower, where the walkway is so narrow that it’s hard for two people to pass. It also has a metal grille with holes large enough for the camera but not much bigger. So we’re talking handheld long exposures.

And you must walk around the tower, which means parallax, so perfect joins are not possible. This effort has some distortions to get around that but does cover the entire city.

That’s the Rathaus (town hall) prominent in the center of the picture, and the Frauenkirche to the left of it, and the moon in the upper right.

Windows needs a master daemon

It seems that half the programs I try to install under Windows want to have a “daemon” process with them, which is to say a portion of the program that is always running and which gets a little task-tray icon from which it can be controlled. Usually they also want to be run at boot time. In Windows parlance this is called a service.

There are too many of them, and they don’t all need to be there. Microsoft noticed this, and started having Windows detect if task tray icons were too static. If they are, it hides them. This doesn’t work very well — they even hide their own icon for removing hardware, which of course is going to be static most of the time. And of course some programs now play games to make their icons appear non-static so they will stay visible. A pointless arms race.

All these daemons eat up memory, and some of them eat up CPU. They tend to slow the boot of the machine too. And usually they don’t do very much — mostly they wait for some event, like being clicked, or hardware being plugged in, or an OS/internet event. And the worst of them don’t even have a menu item to shut them down.

I would like to see the creation of a master daemon/service program. This program would be running all the time, and it would provide a basic scripting language to perform daemon functions. Programs that just need a simple daemon, with a menu or waiting for events, would be strongly encouraged to write it in this scripting language, and install it through the master daemon. That way they take up a few kilobytes, not megabytes, and don’t take long to load. The scripting language should be able to react at least in a basic way to all the OS hooks, events and callbacks. It need not do much with them — mainly it would run a real module of the program that would have had a daemon. If the events are fast and furious and don’t pause, this program could stay resident and become a real daemon.
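As a rough sketch of the idea, with Python standing in for the scripting language (the event names and handler API here are invented for illustration):

```python
# Sketch of a master daemon: one resident process runs many tiny
# per-application handlers, so each app needn't ship its own
# always-on service. Event names and this API are invented.
import queue

class MasterDaemon:
    def __init__(self):
        self.handlers = {}           # event name -> list of callbacks
        self.events = queue.Queue()  # fed by OS hooks in a real system

    def register(self, event, callback):
        """An app's few-line script registers interest in an event."""
        self.handlers.setdefault(event, []).append(callback)

    def post(self, event, payload=None):
        self.events.put((event, payload))

    def run_pending(self):
        """Dispatch queued events to the registered mini-scripts."""
        while not self.events.empty():
            event, payload = self.events.get()
            for callback in self.handlers.get(event, []):
                callback(payload)

daemon = MasterDaemon()
# An entire "daemon" for a photo app: two lines, not two megabytes.
daemon.register("usb_plugged_in", lambda dev: print("sync camera on", dev))
daemon.post("usb_plugged_in", "E:")
daemon.run_pending()
```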

But having a stand alone program would be discouraged, certainly for boring purposes like checking for updates, overseeing other programs and waiting for events. The master program itself could get regular updates, as features are added to it as needed by would-be daemons.

Unix started with this philosophy. Most internet servers are started up by inetd, which listens on all the server ports you tell it, and fires up a server if somebody tries to connect. Only programs with very frequent requests, like E-mail and web serving, are supposed to keep something constantly running.

The problem is, every software package is convinced it’s the most important program on the system, and that the user mostly runs nothing but that program. So they act like they own the place. We need a way to only let them do that if they truly need it.

Charles Templeton gets own mini-room in Creation Museum

I learned today that there is an exhibit about my father in the famous creation museum near Cincinnati. This museum is a multi-million dollar project set up by creationists as a pro-bible “natural history” museum that shows dinosaurs on Noah’s Ark, and how the flood carved the Grand Canyon and much more. It’s all complete bollocks, and a number of satirical articles about it have been written, including the account by SF writer John Scalzi.

While almost all of this museum is about desperate attempts to make the creation story sound like natural history, it also has the “Biblical Authority Room.” This room features my father, Charles Templeton, in two sections. It begins with this display on bible enemies, which tells the story of how he went to Princeton seminary and lost his faith. (Warning: Too much education will kill your religion.)

However, around the corner is an amazing giant alcove. It shows a large mural of photos and news stories about my father as a preacher and later. On the next wall is an image of a man (clearly meant to be him though the museum denied it) digging a grave with the tombstone “God is Dead.” There are various other tombstones around for “Truth,” “God’s Word” and “Genesis.” There is also another image of the mural showing it a bit more fully.

Next to the painting is a small brick alcove which for the life of me looks like a shrine.

In it is a copy of his book Farewell to God along with a metal plaque with a quote from the book about how reality is inconsistent with the creation story. (You can click on the photo, courtesy Andrew Arensburger, to see a larger size and read the inscription.)

I had heard about this museum for some time, and even contemplated visiting it the next time I was in the area, though part of me doesn’t want to give them $20. However, now I have to go. But I remain perplexed that he gets such a large exhibit, along with the likes of Darwin, Scopes and Luther. Today, after all, only older people know of his religious career, though at his peak he was one of the best-known figures in the field. He and his best friend, Billy Graham, were taking the evangelism world by storm, and until he pulled out, many people would have bet that he, rather than Graham, would become the great star. You can read his memoir here online.

But again, this is all long ago, and a career long left behind. But there may be an explanation, based on what he told me when he was alive.

Among many fundamentalists, there is a doctrine of “Once Saved, Always Saved.” What this means is that once Jesus has entered you and become your personal saviour, he would never, ever desert you. It is impossible for somebody who was saved to fall. This makes apostasy a dreadful sin, for it creates a giant contradiction. For many, the only way to reconcile this is to decide that he never was truly saved after all. That it was all fake. Only somebody who never really believed could fall.

Except that’s not the case here. He had the classic “religious experience” conversion, as detailed in his memoir. He was fully taken up with it. And more to the point, unlike most, when much later he truly came to have doubts, he debated them openly with his friends, like Graham. And finally decided that he couldn’t preach any more after decades of doing so, giving up fame and a successful career with no new prospects. He couldn’t do it because he could not feel honest preaching to people when he had become less sure himself. Not the act of somebody who was faking it all along.

However, this exhibit in the museum doesn’t try to paint it that way. Rather, it seems to be a warning that too much education by godless scientists can hurt your faith.

So there may be a second explanation. As a big-time preacher, with revival meetings filling sporting arenas, my father converted a lot of people to Christianity. He was one of the founders of Youth for Christ International, which is today still a major religious organization. I meet these converts from time to time. I can see how, if you came to your conversion through him, my father’s renunciation of it must be very hurtful — especially when combined with the once-saved-always-saved doctrine. So I have to wonder if somebody at the Creation Museum isn’t one of his converts, and thus wanted to tell the story of a man that many of the visitors to the museum will have forgotten.

Here are some other Charles Templeton links on my site:

Right now I’m in the process of scanning some of his books and will post when I have done this.

OCR Page numbers and detect double feed

I’m scanning my documents on an ADF document scanner now, and it’s largely pretty impressive, but I’m surprised at some things the system won’t do.

Double page feeding is the bane of document scanning. To prevent it, many scanners offer methods of double feed detection, including ultrasonic detection of double thickness and detection when one page is suddenly longer than all the others (because it’s really two).

There are a number of other tricks they could do, I think. A paper feeder that used air suction or gecko-foot van der Waals force pluckers on both sides of a page to try to pull the sides in two different directions could help not just detect, but eliminate such feeds.

However, the most the double feed detectors do is signal an exception to stop the scan, which means work re-feeding and a need to stand by.

However, many documents have page numbers. We’re going to OCR the pages anyway, and the OCR engine is pretty good at detecting page numbers (mostly out of a desire to remove them). So it seems to me a good approach would be to look for gaps in the page numbers, especially combined with the other signs of a double feed. Then don’t stop the scan, just keep going, and report to the operator which pages need to be scanned again. Those would be scanned, their numbers extracted, and they would be inserted in the right place in the final document.
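A sketch of the gap detection, assuming a simple one-increasing-number-per-page scheme (real documents need fuzzier handling, as noted below):

```python
# Sketch: flag a likely double feed by finding gaps in the page
# numbers OCR extracted from each sheet. Assumes one increasing
# number per page; real documents need fuzzier logic.

def find_missing_pages(page_numbers):
    """Return the page numbers absent from an increasing run."""
    missing = []
    for prev, cur in zip(page_numbers, page_numbers[1:]):
        if cur > prev + 1:                 # a jump means skipped pages
            missing.extend(range(prev + 1, cur))
    return missing

# Two sheets went through the feeder stuck together:
print(find_missing_pages([1, 2, 3, 4, 7, 8]))   # -> [5, 6]
```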

Of course, it’s not perfect. Sometimes page numbers are not put on blank pages, and some documents number only within chapters. So you might not catch everything, but you could catch a lot of stuff. Operators could quickly discern the page numbering scheme (though I think the OCR could do this too) to guide the effort.

I’m seeking a maximum convenience workflow. I think the best plan is to have several scanners going, with the OCR done after the fact in the background. That way there’s always something for the operator to do — fixing bad feeds, loading new documents, naming them — for maximum throughput. Though I also would hope the OCR software could do better at naming the documents for you, or at least suggesting names. Perhaps it can; the manual for OmniPage is pretty sparse.

While some higher-end scanners do have the scanner figure out the size of the page (at least the length), I am not sure why it isn’t a trivial feature for all ADF scanners. My $100 Strobe sheetfed scanner does it. That my $6,000 (retail) FI-5650 needs extra software seems odd to me.

How about standby & hibernate together

PCs can go into standby mode (just enough power to preserve the RAM and do wake-on-LAN) and into hibernate mode (where they write out the RAM to disk, shut down entirely and restore from disk later) as well as fully shut down.

Standby mode comes back up very fast, and should be routinely used on desktops. In fact, non-server PCs should consider doing it as a sort of screen saver since the restart can be so quick. It’s also popular on laptops but does drain the battery in a few days keeping the RAM alive. Many laptops will wake up briefly to hibernate if left in standby so long that the battery gets low, which is good.

How about this option: write the RAM contents out to disk, but also keep the RAM alive. When the user wants to restart, they can restart instantly, unless something happened to the RAM. If there was a power flicker or other trouble, notice the RAM is bad and restart from disk. Usually you don’t care too much about the extra time needed to write out to disk when suspending, other than for psychological reasons where you want to be really sure the computer is off before leaving it. It’s when you come back to the computer that you want instant-on.

In fact, since RAM doesn’t actually fail all that quickly, you might even find you can restore from RAM after a brief power flicker. In that case, you would want to store a checksum for each block of RAM, and restore from disk any blocks that don’t match their checksum.
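A sketch of that checksum scheme (the block size and repair loop are assumptions for illustration; a real implementation would live in the kernel’s resume path):

```python
# Sketch: hash each RAM block at suspend time; on resume, reload from
# the hibernation image only the blocks whose hashes no longer match.
import hashlib

BLOCK = 4096  # bytes per checked block (an assumption)

def checksums(image):
    return [hashlib.sha256(image[i:i + BLOCK]).digest()
            for i in range(0, len(image), BLOCK)]

def repair_ram(ram, disk_copy, saved_sums):
    """Overwrite corrupted RAM blocks from the disk image."""
    repaired = 0
    for n, digest in enumerate(checksums(bytes(ram))):
        if digest != saved_sums[n]:
            start = n * BLOCK
            ram[start:start + BLOCK] = disk_copy[start:start + BLOCK]
            repaired += 1
    return repaired
```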

To go further, one could also hibernate to newer generations of fast flash memory. Flash memory is getting quite cheap, and while older generations aren’t that quick, they seek instantaneously. This allows you to reboot a machine with its memory “paged out” to flash, and swap in pages at random as they are needed. This would allow a special sort of hybrid restore:

  1. Predict in advance which pages are highly used, and which are enough to get the most basic functions of the OS up. Write them out to a special contiguous block of hibernation disk. Then write out the rest, to disk and flash.
  2. When turning on again, read this block of contiguous disk and go “live.” Any pages needed can then be paged in from the flash memory as needed, or if the flash wasn’t big enough, unlikely pages can come from disk.
  3. In the background, restore the rest of the pages from the disk, whose sequential reads are faster. Eventually you are fully back to RAM.

This would allow users to get a fairly fast restore, even from full-off hibernation. If they click on a rarely used program that was in ram, it might be slow as stuff pages in, but still not as bad as waiting for the whole restore.

Starbuck's destiny again

I must admit I’ve been somewhat disappointed with how sparse the clues have been this season on the show’s central mysteries. Several episodes in, and we don’t know a great deal more than the little we learned in the first episode. However, something shown in the “scenes from next week” bodes for more interesting times.

If you don’t watch that preview, you might want to hold off on this post until Friday.

Data Deposit Box instead of data portability

I’ve been ranting of late about the dangers inherent in “Data Portability,” which I would like to rename BEPSI to avoid the motherhood word “portability” for something that really has a strong dark side as well as its light side.

But it’s also important to come up with an alternative. I think the best alternative may lie in what I would call a “data deposit box” (formerly “data hosting.”) It’s a layered system, with a data layer and an application layer on top. Instead of copying the data to the applications, bring the applications to the data.

A data deposit box approach has your personal data stored on a server chosen by you. That server’s duty is not to exploit your data, but rather to protect it. That’s what you’re paying for. Legally, you “own” it, either directly, or in the same sense as you have legal rights when renting an apartment — or a safety deposit box.

Your data box’s job is to perform actions on your data. Rather than giving copies of your data out to a thousand companies (the Facebook and Data Portability approach) you host the data and perform actions on it, programmed by those companies who are developing useful social applications.

As such, you don’t join a site like Facebook or LinkedIn. Rather, companies like those build applications and application containers which can run on your data. They don’t get the data, rather they write code that works with the data and runs in a protected sandbox on your data host — and then displays the results directly to you.

To take a simple example, imagine a social application wishes to send a message to all your friends who live within 100 miles of you. Using permission tokens provided by you, it is able to connect to your data host and ask it to create that subset of your friend network, and then e-mail a message to that subset. It never sees the friend network at all.
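Here is a hedged sketch of that flow, with invented names for the token check and mailer. The point is just that the host runs the query and mails the matches itself, so the application only learns a count.

```python
# Sketch of a data-host operation: the app presents a permission token
# and a query; the host e-mails the matching friends itself, so the
# friend list never leaves the box. All names here are invented.
from dataclasses import dataclass

@dataclass
class Friend:
    email: str
    distance_miles: float

FRIENDS = [Friend("a@example.com", 40.0), Friend("b@example.com", 900.0)]
TOKENS = {"tok-123": "message_nearby_friends"}  # granted by the user

def send_mail(to, body):
    print(f"mail to {to}: {body}")     # stand-in for a real mailer

def message_nearby_friends(token, radius, body):
    if TOKENS.get(token) != "message_nearby_friends":
        raise PermissionError("token does not allow this operation")
    nearby = [f for f in FRIENDS if f.distance_miles <= radius]
    for f in nearby:
        send_mail(f.email, body)
    return len(nearby)                 # the app sees a count, not the list

print(message_nearby_friends("tok-123", 100.0, "Party on Saturday!"))
```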

Who were the previous incarnations of the Final Five?

As I’ve often discussed, many clues show the Final Five to be over 4,000 years old. We see them in hooded robes in the 2,000 year old Kobol Opera House in visions, and they are almost surely the 5 priests who built the “Temple of Five” 4,000 years ago on the way from Earth.

But we also see that copies of 4 of the Final Five have been present in the Colonies for a long time. Tigh is 60 years old; the others look to be in their 30s. And they have aged and gotten sick and been totally human. Indeed, until recently they had no idea they weren’t. As yet there are still few clues as to why the Final Five might be living with the colonials as sleepers.

But here’s the interesting question: what have they been doing for the last 4,000 years? They had some role in the creation of the new generation of Cylons, who got programmed to know of the Final Five but to avoid thinking about them. Perhaps they’ve been living out in space until this set of 5 sleepers was introduced into colonial life, destined to be among the fleet that flees the coming human/Cylon war (the third such war, at least, if I read things correctly).

But more likely they have been living in the colonies for the past 2,000 years. If so, have they been living in the same bodies, perhaps growing old and then downloading into a new young body when done with the old? This could be pulled off — “Boy, you sure look a lot like your dad!” — especially if you moved around from colony to colony, though I can certainly see some risks in a society with thousands of years of photography and computers. Of course the Final Five would have no problem manipulating colonial computers.

Another option might include taking different bodies with each incarnation. And quite possibly starting each new incarnation as a baby, as a sleeper, then growing up and learning of your nature when some trigger happens.

This has led to speculation in my thread about Joseph Adama that he could have been another member of the Final Five (presumably the missing one) and also be a current character. A different current character. Perhaps his grandson, Lee. Or perhaps Romo Lampkin, the lawyer who says he knew Joseph well. For those in the “it’s somebody not in the Last Supper photo” camp, this makes some sense and provides the needed “oh shit” moment when all is revealed.

This also allows other members of the Final Five to have had earlier incarnations with roles in colonial history. In particular, one wonders if members of the Final Five, in other bodies, may be some of the characters in Caprica, the new prequel series being made. That includes Joseph Adama, but also the mysterious monotheist priest, Sister Clarice, who we are told plays such a pivotal role in the creation of the Cylons.

And of course, who were the Final Five at the fall of Kobol? Were they Lords of Kobol, or their enemies? And did they look like Tigh, Foster, Anders and Tyrol?

Update: Now that it’s clear that #3 recognized the current bodies of the Final Five in the Temple, this either means they have had the same bodies since they built that temple, or (less likely) they reprogram the temple every time they change bodies. Mostly I would have to say this discounts the idea they change form, though. This does mean they had better not get too famous, as in history book famous, over the generations.