Brad Templeton is Chairman Emeritus of the EFF, Singularity U computing chair, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, photographer and Burning Man artist.
This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see menu on right) and some are sub blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.
Submitted by brad on Fri, 2006-09-22 11:46.
As most people in the VoIP world know, the FCC mandated that “interconnected” VoIP providers must provide E911 (which means 911 calling with transmission of your location) service to their customers. It is not optional, they can’t allow the customer to opt out to save money.
It sounds good on the surface: if there’s a phone there, you want to be able to reach emergency services with it.
The meaning of interconnected is still being debated. It was mostly aimed at the Vonages of the world. The current definition applies to service that has a phone-like device that can make and receive calls from the PSTN. Most people don’t think it applies to PBX phones in homes and offices, though that’s not explicit. It doesn’t apply to the Skype client on your PC, one hopes, but it could very well apply if you have a more phone-like device connecting to Skype, which offers Skype-in and Skype-out services on a pay-per-use basis and thus is interconnected with the PSTN.
Here’s the kicker. There are a variety of companies which will provide E911 connectivity services for VoIP companies. This means you pay them and they will provide a means for you to route your user’s calls to the right emergency public service access point, and pass along the address the user registered with the service. Seems like a fine business, but as far as I can tell, all these companies are charging by the customer per month, with fees between $1 and $2 per month.
This puts a lot of constraints on the pricing models of VoIP services. There’s a lot of room for innovative business models that include offering limited or trial PSTN connection for free, or per-usage billing with no monthly fees. (All services I know of do the non-PSTN calling for free.) Or services that appear free but are supported by advertising or other means. You’ve seen that Skype decided to offer free PSTN services for all of 2006. AIM Phoneline offers a free number for incoming calls, as do many others.
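To put numbers on the squeeze, here is a rough back-of-the-envelope calculation; the fee and margin figures are illustrative assumptions, not quotes from any provider.

```python
# Rough numbers for why a flat per-user E911 fee squeezes "free" or
# pay-per-use VoIP models.  All figures below are illustrative
# assumptions, not any provider's actual rates.

e911_fee = 1.50            # $/user/month charged by an E911 connectivity firm
per_minute_margin = 0.01   # $ the service clears per billed PSTN minute

# A pay-per-use customer must generate this many billed minutes each
# month before the service even covers its mandated E911 cost.
breakeven_minutes = e911_fee / per_minute_margin
print(round(breakeven_minutes))   # 150
```

A customer who makes a handful of calls a month is underwater from day one, which is exactly why a mandatory flat fee is hostile to free and per-use models.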
Submitted by brad on Sun, 2006-09-17 10:34.
It’s common in the blogosphere for bloggers to comment on the posts of other bloggers. Sometimes blogs show trackbacks to let you see those comments with a posting. (I turned this off due to trackback spam.) In some cases we effectively get a thread, as might appear in a message board/email/USENET, but the individual components of the thread are all on the individual blogs.
So now we need an RSS aggregator to rebuild these posts into a thread one can see and navigate. It’s a little more complex than threading in USENET, because messages can have more than one parent (ie. link to more than one post) and may not link directly at all. In addition, timestamps only give partial clues as to position in a thread since many people read from aggregators and may not have read a message that was posted an hour ago in their “thread.”
At a minimum, existing aggregators (like bloglines) could spot sub-threads existing entirely among your subscribed feeds, and present those postings to you. You could also define feeds which are unsubscribed but which you wish to see or be informed of postings from in the event of a thread. (Or you might have a block-list of feeds you don’t want to see contributions from.) They could just have a little link saying, “There’s a thread including posts from other blogs on this message” which you could expand, and that would mark those items as read when you came to the other blog.
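A minimal sketch of how such thread detection might work, assuming each feed item carries its permalink and its outbound links; the `Post` fields and feed names here are invented for illustration. Because a post can link to several parents, the threads form a graph rather than a USENET-style tree, so grouping by connected component is one simple approach.

```python
from collections import defaultdict

class Post:
    """Illustrative stand-in for an item pulled from an RSS feed."""
    def __init__(self, url, feed, links):
        self.url = url      # permalink of this post
        self.feed = feed    # which blog it came from
        self.links = links  # URLs of other posts it comments on

def build_threads(posts, subscribed):
    """Group posts into cross-blog threads via union-find.

    A post may link to more than one parent, so a thread is just a
    connected component of the link graph, not a tree.
    """
    by_url = {p.url: p for p in posts}
    parent = {p.url: p.url for p in posts}

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    def union(a, b):
        parent[find(a)] = find(b)

    for p in posts:
        for target in p.links:
            if target in by_url:            # link points at a known post
                union(p.url, target)

    threads = defaultdict(list)
    for p in posts:
        threads[find(p.url)].append(p)

    # surface only threads spanning more than one subscribed feed
    return [t for t in threads.values()
            if len({p.feed for p in t if p.feed in subscribed}) > 1]
```

An aggregator could run something like this over your subscriptions and attach the "there's a thread on this message" link wherever a component spans multiple blogs.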
Blog search tools, like Technorati, could also spot these threads, and present a typical thread interface for perusing them. Both readers and bloggers would be interested in knowing how deep the threads go.
Submitted by brad on Sat, 2006-09-16 15:33.
At the blogger panel at Fall VON (repurposed to cover both video on the net and voice), vlogger and blip.tv advocate Dina Kaplan asked bloggers to start vlogging. It’s started a minor debate.
My take? Please don’t.
I’ve written before on what I call the reader-friendly vs. writer-friendly dichotomy. My thesis is that media make choices about where to be on that spectrum, though ideal technology reduces the compromises. If you want to encourage participation, as in Wikis, you go for writer friendly. If you have one writer and a million readers, like the New York Times, you pay the writer to work hard to make it as reader friendly as possible.
When video is professionally produced and tightly edited, it can be reader (viewer) friendly. In particular if the video is indeed visual. Footage of tanks rolling into a town can convey powerful thoughts quickly.
But talking head audio and video has an immediate disadvantage. I can read material ten times faster than I can listen to it. At least with podcasts you can listen to them while jogging or moving where you can’t do anything else, but video has to be watched. If you’re just going to say your message, you’re putting quite a burden on me to force me to take 10 times as long to consume it — and usually not be able to search it, or quickly move around within it or scan it as I can with text.
So you must overcome that burden. And most videologs don’t. It’s not impossible to do, but it’s hard. Yes, video allows better expression of emotion. Yes, it lets me learn more about the person as well as the message. (Though that is often mostly for the ego of the presenter, not for me.)
Recording audio is easier than writing well. It’s writer friendly. Video has the same attribute if done at a basic level, though good video requires some serious work. Good audio requires real work too — there’s quite a difference between “This American Life” and a typical podcast.
Indeed, there is already so much pro quality audio out there like This American Life that I don’t have time to listen to the worthwhile stuff, which makes it harder to get my attention with ordinary podcasts. Ditto for video.
There is one potential technological answer to some of these questions. Anybody doing an audio or video cast should provide a transcript. That’s writer-unfriendly but very reader friendly. Let me decide how I want to consume it. Let me mix and match by clicking on the transcript and going right to the video snippet.
With the right tools, this could be easy for the vlogger to do. Vlogger/podcaster tools should all come with trained speech recognition software which can reliably transcribe the host, and with a little bit of work, even the guest. Then a little writer-work to clean up the transcript and add notes about things shown but not spoken. Now we have something truly friendly for the reader.
In fact, speaker-independent speech recognition is getting close to good enough for this, but it is still clearly best for the producer to make the transcript. Even if the transcript is full of recognition errors, at least I can search it and quickly click to the good parts, or hear the mis-transcribed words.
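A tiny sketch of the clickable-transcript idea: if each recognized segment keeps its timestamp, a plain text search can seek the player to the right spot, recognition errors and all. The segment data below is invented for illustration; a real tool would take it from the recognizer.

```python
# Each transcript segment pairs a start time (seconds into the
# recording) with the recognized text.  These segments are invented
# sample data, standing in for real recognizer output.
segments = [
    (0.0,  "welcome to the show"),
    (12.5, "today we talk about transcripts"),
    (47.2, "speech recognition is almost good enough"),
]

def jump_to(query, segments):
    """Return the start time of the first segment matching the query,
    so a player can seek there.  Case-insensitive substring match is
    enough to be useful even on an error-filled transcript."""
    q = query.lower()
    for start, text in segments:
        if q in text.lower():
            return start
    return None
```

Clicking a line of the transcript in a player UI would just call something like `jump_to` and seek the video to the returned offset.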
If you’re making podcaster/vlogger tools, this is the direction to go. In addition, it’s absolutely the right thing for the hearing or vision impaired.
Submitted by brad on Fri, 2006-09-15 22:59.
In an earlier blog post I attempted to distinguish TVoIP (TV over internet) from IPTV, a buzzword for cable/telco live video offerings. My goal was to explain that we can be very happy with TV, movies and video that come to us over the internet after some delay.
The two terms aren’t really very explanatory, so now I suggest VaD, for video-after-demand. Tivo and Netflix have taught us that people are quite satisfied if they pick their viewing choices in advance, and then later, sometimes weeks or months later, get the chance to view them. The key is that when they sit down to watch something, they have a nice selection of choices they actually want to see.
The video on demand dream is to give you complete live access to all the video in the world that’s available. Click it and watch it now. It’s a great dream, but it’s an expensive one. It needs fast links with dedicated bandwidth. If your movie viewing is using 4 of your 6 megabits, somebody else in the house can’t use those megabits for web surfing or other interactive needs.
With VaD you don’t need much in your link. In fact, you can download shows that you don’t have the ability to watch live at all, or get them at higher quality. You just have to wait. Not staring at a download bar, of course, nobody likes that, but wait until a later watching session, just as you do when you pick programs to record on a PVR like the Tivo.
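Rough arithmetic makes the point; the link speeds and file size below are illustrative assumptions, not figures from any real service.

```python
# Back-of-the-envelope numbers for the video-after-demand trade-off.
# All figures are illustrative assumptions.

def hours_to_trickle(show_gb, spare_mbps):
    """How long a background download takes using only spare bandwidth."""
    bits = show_gb * 8 * 1000**3           # decimal gigabytes to bits
    return bits / (spare_mbps * 1e6) / 3600

# Streaming a DVD-quality movie live might demand a dedicated 4 Mbps
# of a 6 Mbps link for its whole running time.  Trickling the same
# 2 GB file over 1 Mbps of spare capacity finishes in a few hours,
# which is fine if you picked the show yesterday.
print(round(hours_to_trickle(2.0, 1.0), 1))   # 4.4
```

The live stream monopolizes the link while you watch; the trickle download barely registers, which is the whole economic argument for VaD.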
I said these things before, but the VaD vision is remarkably satisfying and costs vastly less, both to the consumer, and those building out the networks. It can be combined with IP multicasting (someday) to even be tremendously efficient. (Multicasting can be used for streaming but if packets are lost you have only a limited time to recover them based on how big your buffer is.)
Submitted by brad on Wed, 2006-09-13 07:00.
Trade show booths are always searching for branded items to hand out to prospects. Until they fix the airport bans, how about putting your brand on a tube of toothpaste and/or other travel liquids now banned from carry-on bags?
(Yeah, most hotels will now give you these, but it’s the thought that counts and this one would be remembered longer than most T-shirts.)
Submitted by brad on Sun, 2006-09-10 18:18.
As a hirsute individual, I beg the world’s makers of medical tapes and band-aids to work on an adhesive that is decent at sticking to skin, but does not stick well to hair.
Not being versed in the adhesive chemistries of these things, I don’t know how difficult this is, but if one can be found, many people would thank you.
Failing that, an adhesive with a simple non-toxic solvent that unbinds it would do; the solvent could be swabbed on while slowly undoing the tape.
Submitted by brad on Fri, 2006-09-08 12:24.
While it will be a while before I get the time to build all my panoramas of this year’s Burning Man, I did do some quick versions of some of those I shot of the burn itself. This year, I arranged to be on a cherry picker above the burn. I wish I had spent more time actually looking at the spectacle, but I wanted to capture panoramas of Burning Man’s climactic moment. The entire city gathers, along with all the art cars for one shared experience. A large chunk of the experience is the mood and the sound which I can’t capture in a photo, but I can try to capture the scope.
This thumbnail shows the man going up, shooting fireworks and most of the crowd around him. I will later rebuild it from the raw files for the best quality.
Shooting panoramas at night is always hard. You want time exposures, but if any exposure goes wrong (such as vibration) the whole panorama can be ruined by a blurry frame in the middle. On a boomlift, if anybody moves — and the other photographer was always adjusting his body for different angles — a time exposure won’t be possible. It’s also cramped and if you drop something (as I did my clamp knob near the end) you won’t get it back for a while. In addition, you can’t have everybody else duck every time you do a sweep without really annoying them, and if you do you have to wait a while for things to stabilize.
It was also an interesting experience riding to the burn with DPW, the group of staff and volunteers who do city infrastructure. They do work hard, in rough conditions, but it gives them an attitude that crosses the line some of the time regarding the other participants. When we came to each parked cherry picker, people had leaned bikes against them, and in one case locked a bike on one. Though we would not actually move the bases, the crew quickly grabbed all the bikes and tossed them on top of one another, tangling pedal in spoke, probably damaging some and certainly making some hard to find. The locked bike had its lock smashed quickly with a mallet. Now the people who put their bikes on the pickers weren’t thinking very well, I agree, and the DPW crew did have to get us around quickly, but I couldn’t help but cringe with guilt at being part of the cause of this, especially when we didn’t move the pickers. (Though I understand safety concerns of needing to be able to.)
Anyway, things “picked up” quickly and the view was indeed spectacular. Tune in later for more and better pictures, and in the meantime you can see the first set of trial burn panoramas for a view of the burn you haven’t seen.
Submitted by brad on Wed, 2006-09-06 11:54.
I’m back from Burning Man (and Worldcon), and though we had a decently successful internet connection there this time, you don’t want to spend time at Burning Man reading the web. This presents an instance of one of the oldest problems in the “serial” part of the online world: how do you deal with the huge backlog of stuff to read from tools that expect you to read regularly?
You get backlogs of your E-mail of course, and your mailing lists. You get them for mainstream news, and for blogs. For your newsgroups and other things. I’ve faced this problem for almost 25 years as the net gave me more and more things I read on a very regular basis.
When I was running ClariNet, my long-term goal list always included a system that would attempt to judge the importance of a story as well as its topic areas. I had two goals in mind for this. First, you could tune how much news you wanted about a particular topic in ordinary reading. By setting how important each topic was to you, a dot-product of your own priorities and the importance ratings of the stories would bring to the top the news most important to you. Secondly, the system would know how long it had been since you last read news, and could dial down the volume to show you only the most important items from the time you were away. News could also simply be presented in an importance order and you could read until you got bored.
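A sketch of that dot-product ranking; the topic names, weights and the away-time cutoff below are all invented for illustration, not ClariNet's actual design.

```python
# Sketch of importance-weighted news ranking: each story carries
# editor-assigned per-topic importance ratings, the reader carries
# per-topic priorities, and the personal score is their dot product.
# All topic names and numbers are illustrative.

def personal_score(story_topics, my_priorities):
    """Dot product of the story's per-topic importance ratings with
    the reader's own per-topic priorities."""
    return sum(weight * my_priorities.get(topic, 0.0)
               for topic, weight in story_topics.items())

def reading_list(stories, my_priorities, hours_away):
    """Sort by personal importance; the longer the reader was away,
    the higher the bar a story must clear (an invented dial-down rule)."""
    cutoff = min(0.9, hours_away / 1000.0)
    ranked = sorted(stories,
                    key=lambda s: personal_score(s["topics"], my_priorities),
                    reverse=True)
    return [s for s in ranked
            if personal_score(s["topics"], my_priorities) >= cutoff]
```

After two weeks away the cutoff rises, so only the items most important to you survive; read from the top until you get bored.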
There are options to do this for non-news, where professional editors would rank stories. One advantage you get when items (be they blog posts or news) get old is you have the chance to gather data on reading habits. You can tell which stories are most clicked on (though not as easily with full RSS feeds) and also which items get the most comments. Asking users to rate items is usually not very productive. Some of these techniques (like using web bugs to track readership) could be privacy invading, but they could be done through random sampling.
I propose, however, that one way or another popular, high-volume sites will need to find some way to prioritize their items for people who have been away a long time and regularly update these figures in their RSS feed or other database, so that readers can have something to do when they notice there are hundreds or even thousands of stories to read. This can include sorting using such data, or in the absence of it, just switching to headlines.
It’s also possible for an independent service to help here. Already several toolbars, such as Alexa’s and Google’s, track net ratings and measure net traffic to help identify the most popular sites and pages on the web. They could adapt this information to give you a way to get a handle on the most important items you missed while away for a long period.
For E-mail, there is less hope. There have been efforts to prioritize non-list e-mail, mostly around spam, but people are afraid any real mail actually sent to them has to be read, even if there are 1,000 of them as there can be after two weeks away.
Submitted by brad on Mon, 2006-08-21 11:44.
One of the few positive things over the recent giant AOL data spill (which we have asked the FTC to look into) is it has hopefully taught a few lessons about just how hard it is to truly anonymize data. With luck, the lesson will be “don’t be fooled into thinking you can do it” and not “Just avoid what AOL did.”
There is some irony here, because in general AOL is one of the better performers. They don’t keep a permanent log of searches tied to userid, though reports say it is tied to a virtual ID. (I have seen other reports suggesting even this is erased after a while.) AOL also lets you turn off short-term logging of the association with your real ID. Google, MSN, Yahoo and others keep the data effectively forever.
Everybody has pointed out that for many people, just the search queries themselves can be enough to identify a person, because people search for things that relate to them. But many people’s searches will not be trackable back to them.
However, the AOL records maintain the exact time of the search, to the second or perhaps more accurately. They also maintain the site the user clicked on after doing the search. AOL may have wiped logs, but most sites don’t. Let’s say you go through the AOL logs and discover an AOL user searched and clicked on your site. You can go into your own logs and find that search, both from the timestamp, and the fact the “referer” field will identify that the user came via an AOL search for those specific terms.
Now you can learn the IP address of the user, and their cookies or even account with your site, if your site has accounts.
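A minimal sketch of that correlation, with both log formats simplified to invented dictionaries; real search logs and web server logs differ in detail, but the matching logic is this straightforward.

```python
# Sketch of re-identifying an "anonymized" search-log record by
# correlating it with a site's own access log: same query string in
# the Referer header, arrival within a few seconds of the recorded
# search.  Field names and log shapes are simplified assumptions.

def reidentify(search_record, access_log, window_seconds=5):
    """Return the IPs of access-log entries consistent with the
    search record.  One tight match is usually enough to tie the
    anonymous ID back to a real visitor."""
    matches = []
    for entry in access_log:
        same_query = search_record["query"] in entry["referer"]
        close_in_time = abs(entry["time"] - search_record["time"]) <= window_seconds
        if same_query and close_in_time:
            matches.append(entry["ip"])
    return matches
```

With the IP in hand, the site's own cookies or accounts finish the job, which is why publishing timestamps alongside clicked sites defeats the anonymization.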
If, however, you’re a lawyer on a case where you can subpoena information, you could use that tool to identify almost any user in the AOL database who did a modest volume of searches. And the big sites with accounts could probably identify all their users who are in the database, getting their account id (and thus often name, email and the works).
So even if AOL can’t uncover who many of these users are due to an erasure policy, the truth is that’s not enough. Even removing the site does not stop the big sites from tracking their own users, because their own logs have the timestamped searches. And an investigator could look for a query, do the query, see what sites you would likely click on, and search the logs of those sites. They would still find you. Even without the timestamp this is possible for an uncommon query. And uncommon queries are surprisingly common. :-)
I have a static IP address, so my IP address links directly to me. Broadband users with dynamic IP addresses may be fooled into thinking they are safer: if you have a network gateway box or leave your sole computer on, your address may stay stable for months at a time, almost as close a tie as a static IP.
The point here is that once the data are collected, making them anonymous is very, very hard. Harder than you think, even when you take into account this rule about how hard it is.
Submitted by brad on Fri, 2006-08-18 22:56.
You probably heard yesterday’s good news that the ACLU prevailed in their petition for an injunction against the NSA warrantless wiretapping. (Our case against AT&T to hold them accountable for allegedly participating in this now-ruled-unlawful program continues in the courts.)
However, the ruling was appealed (no surprise) and the government also asked for, and was granted a stay of the injunction. So the wiretaps won’t stop unless the appeal is won.
But this raises the question, “Why do you need a stay?”
The line from the White House has been that the government engaged in this warrantless wiretapping because the President had the authority to do that, both inherently and under the famous AUMF. And they wanted to use that authority because they complained the official system mandated by law, requiring process before the FISA court, was just too cumbersome. Even though the FISA law allows immediate emergency wiretaps without a warrant as long as a retroactive application is made soon.
We’ve all wondered just why that’s too cumbersome. But they seemed to be saying that since the President had the authority to bypass the FISA court, why should they impede the program with all that pesky judicial oversight?
But now we have a ruling that the President does not have that authority. Perhaps that will change on appeal, but for now it is the ruling. So surely this should mean that they just go back to doing it the way the FISA regulations require it? What’s the urgent need for a stay? Could they not have been ready with the papers to get the warrants they need if they lost?
Well, I think I know the answer. Many people suspect that the reason they don’t go to FISA is not because it’s too much paperwork. It’s because they are trying to do things FISA would not let them do. So of course they don’t want to ask. (The FISA court, btw, has only told them no once, and even that was overturned. That’s about all the public knows about all its rulings.) I believe there is a more invasive program in place, and we’ve seen hints of that in press reports, with data mining of call records and more.
By needing this stay, the message has come through loud and clear. They are not willing to get the court’s oversight of this program, no way, no how. And who knows how long it will be until we learn what’s really going on?
Submitted by brad on Mon, 2006-08-14 23:39.
Last week at ZeroOne in San Jose, one of the art pieces reminded me of a sneaky idea I had a while ago. As you may know, many camcorders, camera phones and cheaper digital cameras respond to infrared light. You can check this out pretty easily by holding down a button on your remote control while using the preview screen on your camera. If you see a bright light, your camera shoots in infrared.
Anyway, the idea is to find techniques, be they arrays of bright infrared LEDs, or paints that shine well in infrared but are not obvious in visible light, and create invisible graffiti that only shows up in tourist photos and videos. Imagine the tourists get home from their trip to Fisherman’s Wharf, and the side of the building says something funny or rude that they are sure wasn’t there when they filmed it.
The art piece at ZeroOne used this concept to put up what was a black monolith to the naked eye. If you pulled out your camera phone or digital camera, you could see words scrolling down the front. Amusing to watch people watch it. Another piece by our friends at .etoy also had people pulling out cameraphones to watch it. They displayed graphics made of giant pixels on a wall just a few feet from you. Up close, it looked like random noise. Widening your field of view (which the screen on a camera can do) let you see the big picture: the images of talking faces. (My SLR camera’s 10mm lens through the optical viewfinder worked even better.)
That piece only really worked at night, though with superbright LEDs I think it could be done in the day. I don’t know if there are any paints or coatings to make this work well. It would be amusing to tag the world with tags that can only be seen when you pull out your camera.
Submitted by brad on Fri, 2006-08-11 17:06.
Everybody’s pulling out IBM PC stories on the 25th anniversary so I thought I would relate mine. I had been an active developer as a teen for the 6502 world — Commodore Pet, Apple ][, Atari 800 and the like, and sold my first game to Personal Software Inc. back in 1979. PSI was just starting out, but the founders hired me on as their first employee to do more programming. The company became famous shortly thereafter by publishing VisiCalc, which was the first serious PC application, and the program that helped establish Apple as a computer company outside the hobby market.
In 1981, I came back for a summer job from school. Mitch Kapor, who had worked for Personal Software in 1980 (and had been my manager at the time) had written a companion for VisiCalc, called VisiPlot. VisiPlot did graphs and charts, and a module in it (VisiTrend) did statistical analysis. Mitch had since left, and was on his way to founding Lotus. Mitch had written VisiPlot in Apple ][ Basic, and he won’t mind if I say it wasn’t a masterwork of code readability, and indeed I never gave it more than a glance. Personal Software, soon to be renamed VisiCorp, asked me to write VisiPlot from scratch, in C, for an un-named soon to be released computer.
I didn’t mention this, but I had never coded in C before. I picked up a copy of the Kernighan and Ritchie C manual, and read it as my girlfriend drove us over the plains on my trip from Toronto to California.
I wasn’t told much about the computer I would be coding for. Instead, I defined an API for doing I/O and graphics, and wrote to a generalized machine. Bizarrely (for 1981) I did all this by dialing up by modem to a unix computer time sharing service called CCA on the east coast. I wrote and compiled in C on unix, and defined a serial protocol to send graphics back to, IIRC an Apple computer acting as a terminal. And, in 3 months, I made it happen.
(Very important side note: CCA-Unix was on the Arpanet. While I had been given some access to an Arpanet computer in 1979 by Bob Frankston, the author of VisiCalc, this was my first day-to-day access. That access turned out to be the real life-changing event in this story.)
There was a locked room at the back of the office. It contained the computer my code would eventually run on. I was not allowed in the room. Only a very small number of outside companies were allowed to have an IBM PC — Microsoft, UCSD, Digital Research, VisiCorp/Software Arts and a couple of other applications companies.
On this day, 25 years ago, IBM announced their PC. In those days, “PC” meant any kind of personal computer. People look at me strangely when I call an Apple computer a PC. But not long after that, most people took “PC” to mean IBM. Finally I could see what I was coding for. Not that the C compilers were all that good for the 8088 at the time. However, 2 weeks later I would leave to return to school. Somebody else would write the library for my API so that the program would run on the IBM PC, and they released the product. The contract with Mitch required they pay royalties to him for any version of VisiPlot, including mine, so they bought out that contract for a total value close to a million — that helped Mitch create Lotus, which would, with assistance from the inside, outcompete and destroy VisiCorp.
(Important side note #2: Mitch would use the money from Lotus to found the E.F.F. — of which I am now chairman.)
The IBM PC was itself less exciting than people had hoped. The 8088 tried to be a 16 bit processor but it was really 8 bit when it came to performance. PC-DOS (later MS-DOS) was pretty minimal. But it had an IBM name on it, so everybody paid attention. Apple bought full page ads in the major papers saying, “Welcome, IBM. Seriously.” Later they would buy ads with lines like Steve Jobs saying, “When I invented the personal computer…” and most of us laughed but some of the press bought it. And of course there is a lot more to this story.
And I was paid about $7,000 for the just under 4 months of work, building almost all of an entire software package. I wish I could program like that today, though I’m glad I’m not paid that way today.
So while most people today will have known the IBM PC for 25 years, I was programming for it before it was released. I just didn’t know it!
Submitted by brad on Thu, 2006-08-10 23:25.
Quite frequently in non-HTML documents, such as E-mails, people will enclose their URLs in angle brackets, such as <http://foo.com>. What is the origin of this? For me, it just makes cutting and pasting the URLs much harder (it’s easier if they have whitespace around them and easiest if they are on a line by themselves). It’s not any kind of valid XML or HTML; in fact, it would cause a problem in any document of that sort.
There’s lots of software out there that parses URLs out of text documents, of course, but it all seems to do fine with whitespace and other punctuation. Such tools handle the angle bracket notation, but don’t need it. Is there any software out there that needs it? If not, why do so many people use this form?
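For what it's worth, a simple extractor handles both forms without needing the brackets. The pattern below is a deliberately crude sketch, not a full RFC-grade URL grammar.

```python
import re

# Pull URLs out of plain text whether or not they are wrapped in
# angle brackets.  Angle brackets are excluded from the URL character
# class, so <http://foo.com> yields the bare URL; trailing sentence
# punctuation is trimmed afterward.
URL_RE = re.compile(r'https?://[^\s<>"]+')

def extract_urls(text):
    urls = URL_RE.findall(text)
    # trim trailing punctuation that is almost never part of the URL
    return [u.rstrip('.,;:!?)') for u in urls]
```

Since whitespace and punctuation already delimit URLs well enough for a pattern this simple, the brackets seem to buy the writer nothing.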
Submitted by brad on Thu, 2006-08-10 01:39.
Many universities are now setting up to broadcast lectures over their LANs, often in video. Many students simply watch from their rooms, or even watch later. There are many downsides to this (fewer show up in class) but the movement is growing.
Here’s a simple addition that would be a bonanza for the cell companies. Arrange to offer broadcast of lectures to student cell phones. In this case, I mean live, and primarily for those who are running late to class. They could call into the number, put on their bluetooth headset and hear the start of the lecture on the way in. All the lecture hall has to do is put the audio into a phone that calls a conference bridge (standard stuff all the companies have already) and then students can call into the bridge to hear the lecture. In fact, the cell company should probably pay the school for all the minutes they would bill.
This need not apply only to lectures at universities. All sorts of talks and large meetings could do the same, including sessions at conferences.
Perhaps it would encourage tardiness, but you could also make the latecomers wait outside (listening) for an appropriate pause at which to enter.
Submitted by brad on Mon, 2006-08-07 13:51.
The blogosphere is justifiably abuzz with the release by AOL of “anonymized” search query histories for over 500,000 AOL users, trying to be nice to the research community. After the fury, they pulled it and issued a decently strong apology, but the damage is done.
Many people have pointed out obvious risks, such as the fact that searches often contain text that reveal who you are. Who hasn’t searched on their own name? (Alas, I’m now the #7 “brad” on Google, a shadow of my long stint at #1.)
But some people browsing the data have discovered something far darker. There are searches in there for things like “how to kill your wife” and child porn. Once that’s discovered, isn’t that now going to be sufficient grounds for a court order to reveal who that person was? It seems there is probable cause to believe user 17556639 is thinking about killing his wife. And knowing this very specific bit of information, who would impede efforts to investigate and protect her?
But we can’t have this happening in general. How long before sites are forced to look for evidence of crimes in “anonymized” data, and warrants then used to de-nymize it? (Did I just invent a word?)
After all, I recall a year ago, I wanted to see if Google would sell adwords on various nasty searches, and what adwords they would be. So I searched for “kiddie porn” and other nasty things. (To save you the stigma, Google clearly has a system designed to spot such searches and not show ads, since people who bought the word “kiddie” may not want to advertise on those results.)
So had my Google results been in such a leak, I might have faced one of those very scary kiddie porn raids, which in the end would find nothing after tearing apart my life and confiscating my computers. (I might hope they would have a sanity check on doing this to somebody from the EFF, but who knows. And you don’t have that protection even if somebody would accord it to me.)
I expect we’ll be seeing the repercussions from this data spill for some time to come. In the end, if we want privacy from being data mined, deletion of such records is the only way to go.
Submitted by brad on Sun, 2006-08-06 20:15.
Those who know about my phone startup Voxable will know I have far more ambitious goals regarding presence and telephony, but during my recent hospital stay, I thought of a simple subset idea that could make hospital phone systems much better for the patient, namely a way to easily specify whether it’s a good time to call the patient or not. Something as simple as a toggle switch on the phone, or with standard phones, a couple of magic extensions they can dial to set whether it’s good or not.
When you’re in the hospital, your sleep schedule is highly unusual. You sleep during the day frequently, you typically sleep much more than usual, and you’re also being woken up regularly by medical staff at any time of the day for visits, medications, blood pressure etc.
At Stanford Hospital, outsiders could not dial patient phones after 10pm, even if you might be up. On the other hand, even when the calls can come through, people worry about whether it’s a good time. So a simple switch on the phone would cause the call to be redirected to voice mail, or just to a recording saying it’s not a good time. Throw it to take a nap or do something else where you want peace and quiet. If you throw it at night, it stays in sleep mode for 8 or 9 hours, then beeps and reverts to available mode. If you throw it in the day, it reverts in a shorter amount of time (because you might forget), though a fancier interface would let you specify the time on an IVR menu. Nurses would make you available when they wake you in the morning, or you could put up a note saying you don’t want this. (Since it seems to be the law that you can’t get the same nurse two days in a row.)
In particular, when doctors and nurses come in to do something with you, they would throw the switch, and un-throw it when they leave, so you don’t get a call while in the middle of an examination. The nurse’s RFID badge, which they are all getting, could also trigger this.
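The toggle logic above can be sketched in a few lines. This is just an illustration of the idea, not any real hospital phone system; the class name, the hour thresholds and the revert timeouts are all my assumptions.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the availability switch described above.
# Timeouts and the "night" window are illustrative assumptions.

NIGHT_REVERT_HOURS = 8   # a night-time throw lasts a full night's sleep
DAY_REVERT_HOURS = 1     # a daytime nap reverts sooner, in case you forget

class PatientPhone:
    def __init__(self):
        self.available = True
        self.revert_at = None

    def set_busy(self, now):
        """Throw the switch: calls go to voice mail until the revert time."""
        self.available = False
        is_night = now.hour >= 22 or now.hour < 6
        hours = NIGHT_REVERT_HOURS if is_night else DAY_REVERT_HOURS
        self.revert_at = now + timedelta(hours=hours)

    def incoming_call(self, now):
        """Return 'ring' or 'voicemail' for a call arriving at `now`."""
        if not self.available and self.revert_at and now >= self.revert_at:
            self.available = True   # timed revert (with a beep, in the real thing)
        return "ring" if self.available else "voicemail"

phone = PatientPhone()
phone.set_busy(datetime(2006, 8, 6, 14, 0))               # afternoon nap
print(phone.incoming_call(datetime(2006, 8, 6, 14, 30)))  # voicemail
print(phone.incoming_call(datetime(2006, 8, 6, 15, 30)))  # ring
```

The nurse’s badge or the magic dial-in extensions would just call `set_busy` and an equivalent `set_available` on the patient’s line.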
Now people who call would know they got you at a good time, when you’re ready to chat. Next step — design a good way for the phone to be readily reachable by people in pain, such as hanging from the ceiling on a retractable cord, or retractable into the rail on the side of the bed. Very annoying when in pain to begin the slow process of getting to the phone, just to have them give up when you get to it.
Submitted by brad on Wed, 2006-08-02 18:28.
There are many proposals out there for tools to stop phishing. Web sites that display a custom photo you provide. “Pet names” given to web sites so you can confirm you’re where you were before.
I think we have a good chunk of one anti-phishing technique already in place with the browser password vaults.
Now I don’t store my most important passwords (bank, etc.) in my password vault, but I do store most medium-importance ones there (accounts at various billing entities etc.). I just use a simple common password for web boards, blogs and other places where the damage from compromise is nil to minimal.
So when I go to such a site, I expect the password vault to fill in the password. If it doesn’t, that’s a big warning flag for me. And so I can’t easily be phished for those sites. Even skilled people can be fooled by clever phishes. For example, a test phish to bankofthevvest.com (two “v”s instead of a w, looks identical in many fonts) fooled even skilled users who check the SSL lock icon, etc.
The browser should store passwords in the vault, and even the “don’t store this” passwords should have a hash stored in the vault unless I really want to turn that off. Then, the browser should detect if I ever type a string into any box which matches the hash of one of my passwords. If my password for bankofthewest is “secretword” and I use it on bankofthewest.com, no problem. “secretword” isn’t stored in my password vault, but the hash of it is. If I ever type in “secretword” to any other site at all, I should get an alert. If it really is another site of the bank, I will examine that and confirm to send the password. Hopefully I’ll do a good job of examining — it’s still possible I’ll be fooled by bankofthevvest.com, but other tricks won’t fool me.
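A minimal sketch of that hash-vault check might look like the following. The class and method names are my own illustration, not any real browser API, and a real vault would salt the hashes per user so the stored values can’t be attacked with a dictionary.

```python
import hashlib

# Sketch of the idea above: the vault keeps only hashes of passwords,
# keyed by the site each belongs to, and warns when a password is typed
# into any other site. Unsalted SHA-256 is used purely for illustration.

class PasswordVault:
    def __init__(self):
        self.hashes = {}      # password hash -> site it belongs to
        self.ignored = set()  # hashes the user said not to warn about

    @staticmethod
    def _hash(password):
        return hashlib.sha256(password.encode()).hexdigest()

    def remember(self, site, password):
        """Store only the hash, even for 'don't store this' passwords."""
        self.hashes[self._hash(password)] = site

    def allow_reuse(self, password):
        """For the throwaway password used on 'who-cares' sites."""
        self.ignored.add(self._hash(password))

    def check_typed(self, site, password):
        """Return None if fine, or the original site if this looks phishy."""
        h = self._hash(password)
        if h in self.ignored:
            return None
        original = self.hashes.get(h)
        if original and original != site:
            return original   # alert: this password belongs elsewhere
        return None

vault = PasswordVault()
vault.remember("bankofthewest.com", "secretword")
print(vault.check_typed("bankofthewest.com", "secretword"))   # None: fine
print(vault.check_typed("bankofthevvest.com", "secretword"))  # warns
```

On a warning, the browser would show the site the password really belongs to and let the user confirm or abort, which is the examination step described above.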
The key needs in any system like this are that it warns you of a phish, and that it rarely gives you a false warning. The latter is hard to do, but this comes decently close. However, since I suspect most people are like me and have a common password we use again and again at “who-cares” sites, we don’t want to be warned all the time. The second time we use that password, we’ll get a warning, and we need a box to say, “Don’t warn me about re-use of this password.”
Read on for subtleties…
Submitted by brad on Mon, 2006-07-31 15:08.
Right now this blog is hosted by powerVPS, which provides virtual private servers. This is to say they have a large powerful box, and they run virtualization software (Virtuozzo) which allows several users to have the illusion of a private machine, on which they are the root user. In theory users get an equal share of the machine, but since most of the users do not run at full capacity, any user can "burst" to temporarily use more resources.
Unfortunately I have found that this approach does fine with CPU, but not with RAM. The virtual server I first used had 256MB of RAM (burst to 1GB) available to it. But it was not able to perform at the level a dedicated server with 256MB of RAM -- swapping the rest to disk -- would. It also doesn't perform anywhere near the level of a non-virtualized shared server, which is what you will commonly see in very cheap web hosting. An ordinary shared server looks like normal multi-user timesharing, though they tend to virtualize Apache so it looks like everybody gets their own Apache.
I eventually had to double my virtual machine's capacity -- and double the monthly fee. You probably saw an increase in the speed of this blog a couple of weeks ago.
Now the virtual machines out there are pretty good, and impose only a modest performance hit when you run one. But when you run many, you lose out on the OS's ability to run many copies of the same program but keep only one copy in memory.
I propose a more efficient design that mixes shared machine and virtual machine concepts. One step to that would be to not have every user run their own MySQL database. MySQL takes about 50MB of RAM, which is not much today but a lot if multiplied out 16 times. Instead have one special virtual server (or just a different dedicated machine) with a copy of MySQL. This would be a special version, which virtualizes the connection, so that as far as each IP address connecting to it is concerned, it thinks it has a private version of MySQL. This means that everybody can create a database called "drupal" (as far as they think) if they want to. The virtualizer would add some prefix to the names based on which customer is connecting. This would also apply to permissions, so each root user would be different, and really only have global permissions on the right databases.
You would not be able to modify MySQL's parameters or start and stop it -- unless you went back to running a private copy in your own virtual server. But if you didn't need that, you would get a more efficient database server.
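The name-rewriting core of such a virtualizer is simple. Here is a toy sketch of the mapping, with made-up customer IDs and a made-up prefix scheme; a real implementation would sit in the MySQL wire protocol and rewrite names in queries and in the privilege tables.

```python
# Toy illustration of the virtualization idea: the shared MySQL front end
# rewrites database names with a per-customer prefix, so every customer
# can have a database called "drupal". IPs and prefixes are assumptions.

CUSTOMER_BY_IP = {
    "10.0.0.17": "cust17",
    "10.0.0.42": "cust42",
}

def real_db_name(client_ip, requested_db):
    """Map the name the customer asked for onto the real, prefixed name."""
    customer = CUSTOMER_BY_IP[client_ip]
    return f"{customer}__{requested_db}"

def may_access(client_ip, real_name):
    """Permissions: a customer's 'root' only sees its own prefix."""
    return real_name.startswith(CUSTOMER_BY_IP[client_ip] + "__")

print(real_db_name("10.0.0.17", "drupal"))        # cust17__drupal
print(real_db_name("10.0.0.42", "drupal"))        # cust42__drupal
print(may_access("10.0.0.17", "cust42__drupal"))  # False
```

Both customers see a database named "drupal", but they land on different real databases, and neither can touch the other's.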
The bad news -- it's up to the hosting companies to do this. MySQL AB doesn't get paid by those hosting companies, so it's not particularly motivated to put in changes for them. But it's an open source system so others could write such changes.
The other big users on web hosts are apache and php. There are many virtualized versions of apache, but this is often where people do want to virtualize, to run custom scripts, java programs and special CGIs. Providing a mixed shared/virtual environment here would be more difficult. One easy approach would be to have it be two web sites, with some pages on the shared site and links going to the virtual site. More cleverly, the virtual apache could have internal rewrite rules that are not shown to outsiders that cause it to fetch and forward from the virtualized web server.
Submitted by brad on Fri, 2006-07-28 13:47.
Yesterday I received a Dell 3007WFP panel display. The price hurt ($1600 on eBay, $2200 from Dell but sometimes there are coupons) and you need a new video card (and to top it off, 90% of the capable video cards are PCI-e and may mean a new motherboard) but there is quite a jump by moving to this 2560 x 1600 (4.1 megapixel) display if you are a digital photographer. This is a very similar panel to Apple's Cinema, but a fair bit cheaper.
It's great for ordinary windowing and text of course, which is most of what I do, but it's a great deal cheaper just to get multiple displays. In fact, up to now I've been using CRTs since I have a desk designed to hold 21" CRTs and they are cheaper and blacker to boot. You can have two 1600x1200 21" CRTs for probably $400 today and get the same screen real estate as this Dell.
But that really doesn't do for photos. If you are serious about photography, you almost surely have a digital camera with more than 4MP, and probably way more. If it's a cheap-ass camera it may not be sharp if viewed at 1:1 zoom, but if it's a good one, with good lenses, it will be.
If you're also like me you probably never see 99% of your digital photos except on screen, which means you never truly see them. I print a few, mostly my panoramics and finally see all their resolution, but not their vibrance. A monitor shows the photos with backlight, which provides a contrast ratio paper can't deliver.
At 4MP, this monitor is only showing half the resolution of my 8MP 20D photos. And when I move to a 12MP camera it will only be a third, but it's still a dramatic step up from a 2MP display. It's a touch more than twice as good because the widescreen aspect ratio is a little closer to the 3:2 of my photos than the 4:3 of 1600x1200. Of course if you shoot with a 4:3 camera, here you'll be wasting pixels. In both cases, of course, you can crop a little so you are using all the pixels. (In fact, a slideshow mode that zoom/crops to fully use the display would be a handy mode. Most slideshows offer 1:1 and zoom to fit based on no cropping.)
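The "touch more than twice as good" claim above checks out with quick arithmetic: fit a 3:2 photo onto each screen and count the pixels actually used. The helper function is just my back-of-envelope sketch.

```python
# Back-of-envelope check of the aspect-ratio point above: how many screen
# pixels a 3:2 photo uses when zoomed to fit on each display.

def visible_pixels(screen_w, screen_h, img_aspect):
    """Pixels used when an image of the given aspect ratio is fit to screen."""
    if screen_w / screen_h > img_aspect:
        # screen is wider than the image, so height limits the fit
        return int(screen_h * img_aspect) * screen_h
    return screen_w * int(screen_w / img_aspect)

dell = visible_pixels(2560, 1600, 3 / 2)  # 2400 x 1600 = 3,840,000 pixels
crt = visible_pixels(1600, 1200, 3 / 2)   # 1600 x 1066 = 1,705,600 pixels
print(dell / crt)                         # about 2.25
```

So a 3:2 frame fills about 3.8 of the Dell's 4.1 megapixels, versus about 1.7 megapixels on a 1600x1200 display: a factor of roughly 2.25, a touch more than twice.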
There are many reasons for having lots of pixels aside from printing and cropping. Manipulations are easier and look better. But let's face it, actually seeing those pixels is still the biggest reason for having them. So I came to the conclusion that I just haven't been seeing my photos, and now I am seeing them much better with a screen like this. Truth is, looking at pictures on it is better than any 35mm print, though not quite at the quality of a 35mm slide.
Dell should give me a cut for saying this.
Long ago I told people not to shoot on 1MP and 2MP digital cameras instead of film, because in the future, displays would get so good the photos will look obviously old and flawed. That day is now well here. Even my 3MP D30 pictures don't fill the screen. I wonder when I'll get a display that makes my 8MP pictures small.
Submitted by brad on Thu, 2006-07-27 14:46.
Today, Congress passed 410-15 the Delete Telephony Online Predators act, or DTOPA. This act requires all schools and libraries to by default block access to the social networking system called the “telephone.” All libraries receiving federal funding, and schools receiving E-rate funding must immediately bar access to this network. Blocks can be turned off, on request, for adults, and when students are under the supervision of an adult.
“This is not the end-all bill,” Rep. Fred Upton (R-Mich.) said. “But, we know sexual predators should not have the avenue of our schools and libraries to pursue their evil deeds.” The “telephone” social network allows voice conversation between a student and virtually any sexual predator in the world. Once a predator gets a child’s “number” or gives his number to the child, they can speak at any time, no matter where the predator is in the world.

Many children have taken to carrying small pocket “telephones” which can be signalled by predators at any time. Use of these will be prohibited.