Brad Templeton is Chairman Emeritus of the EFF, Singularity U computing chair, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, photographer and Burning Man artist.
This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see menu on right) and some are sub blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.
Submitted by brad on Sun, 2006-09-10 18:18.
As a hirsute individual, I beg the world’s makers of medical tapes and band-aids to work on an adhesive that is decent at sticking to skin, but does not stick well to hair.
Not being versed in the adhesive chemistries of these things, I don’t know how difficult this is, but if one can be found, many people would thank you.
Failing that, an adhesive with a simple non-toxic solvent that unbinds it would do; the solvent could be swabbed on while slowly peeling back the tape.
Submitted by brad on Fri, 2006-09-08 12:24.
While it will be a while before I get the time to build all my panoramas of this year’s Burning Man, I did do some quick versions of some of those I shot of the burn itself. This year, I arranged to be on a cherry picker above the burn. I wish I had spent more time actually looking at the spectacle, but I wanted to capture panoramas of Burning Man’s climactic moment. The entire city gathers, along with all the art cars for one shared experience. A large chunk of the experience is the mood and the sound which I can’t capture in a photo, but I can try to capture the scope.
This thumbnail shows the man going up, shooting fireworks and most of the crowd around him. I will later rebuild it from the raw files for the best quality.
Shooting panoramas at night is always hard. You want time exposures, but if any exposure goes wrong (such as vibration) the whole panorama can be ruined by a blurry frame in the middle. On a boomlift, if anybody moves — and the other photographer was always adjusting his body for different angles — a time exposure won’t be possible. It’s also cramped and if you drop something (as I did my clamp knob near the end) you won’t get it back for a while. In addition, you can’t have everybody else duck every time you do a sweep without really annoying them, and if you do you have to wait a while for things to stabilize.
It was also an interesting experience riding to the burn with DPW, the group of staff and volunteers who do city infrastructure. They work hard, in rough conditions, but it gives them an attitude that crosses the line some of the time regarding the other participants. When we came to each parked cherry picker, people had leaned bikes against them, and in one case locked a bike to one. Though we would not actually move the bases, the crew quickly grabbed all the bikes and tossed them on top of one another, tangling pedal in spoke, probably damaging some and certainly making some hard to find. The locked bike had its lock smashed quickly with a mallet. Now the people who put their bikes on the pickers weren’t thinking very well, I agree, and the DPW crew did have to get us around quickly, but I couldn’t help but cringe with guilt at being part of the cause of this, especially when we didn’t move the pickers. (Though I understand the safety concern of needing to be able to.)
Anyway, things “picked up” quickly and the view was indeed spectacular. Tune in later for more and better pictures, and in the meantime you can see the first set of trial burn panoramas for a view of the burn you haven’t seen.
Submitted by brad on Wed, 2006-09-06 11:54.
I’m back from Burning Man (and Worldcon), and though we had a decently successful internet connection there this time, you don’t want to spend time at Burning Man reading the web. This presents an instance of one of the oldest problems in the “serial” part of the online world: how do you deal with the huge backlog of stuff to read from tools that expect you to read regularly?
You get backlogs of your E-mail of course, and your mailing lists. You get them for mainstream news, and for blogs. For your newsgroups and other things. I’ve faced this problem for almost 25 years as the net gave me more and more things I read on a very regular basis.
When I was running ClariNet, my long-term goal list always included a system that would attempt to judge the importance of a story as well as its topic areas. I had two goals in mind for this. First, you could tune how much news you wanted about a particular topic in ordinary reading. By setting how important each topic was to you, a dot-product of your own priorities and the importance ratings of the stories would bring to the top the news most important to you. Secondly, the system would know how long it had been since you last read news, and could dial down the volume to show you only the most important items from the time you were away. News could also simply be presented in importance order and you could read until you got bored.
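The dot-product ranking sketched above might look like this in code; the topic names, weights, and importance scores here are hypothetical illustrations, not anything from ClariNet:

```python
# Rank stories by the dot product of a user's topic priorities and
# the editor-assigned importance of each story in those topics.

def rank_stories(stories, user_weights):
    def score(story):
        return sum(user_weights.get(topic, 0) * importance
                   for topic, importance in story["topics"].items())
    return sorted(stories, key=score, reverse=True)

stories = [
    {"title": "Local election results", "topics": {"politics": 0.9}},
    {"title": "New camera released",    "topics": {"photography": 0.7}},
    {"title": "Minor sports recap",     "topics": {"sports": 0.3}},
]
user_weights = {"photography": 1.0, "politics": 0.5, "sports": 0.1}

for story in rank_stories(stories, user_weights):
    print(story["title"])
```

Catching up after time away would then just mean reading from the top of this ordering until you got bored, or cutting it off at some score threshold.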
There are options to do this for non-news, where professional editors would rank stories. One advantage you get when items (be they blog posts or news) get old is you have the chance to gather data on reading habits. You can tell which stories are most clicked on (though not as easily with full RSS feeds) and also which items get the most comments. Asking users to rate items is usually not very productive. Some of these techniques (like using web bugs to track readership) could be privacy invading, but they could be done through random sampling.
I propose, however, that one way or another popular, high-volume sites will need to find some way to prioritize their items for people who have been away a long time and regularly update these figures in their RSS feed or other database, so that readers can have something to do when they notice there are hundreds or even thousands of stories to read. This can include sorting using such data, or in the absence of it, just switching to headlines.
It’s also possible for an independent service to help here. Already several toolbars like Alexa and Google’s track net ratings, and get measurements of net traffic to help identify the most popular sites and pages on the web. They could adapt this information to give you a way to get a handle on the most important items you missed while away for a long period.
For E-mail, there is less hope. There have been efforts to prioritize non-list e-mail, mostly around spam, but people are afraid any real mail actually sent to them has to be read, even if there are 1,000 of them as there can be after two weeks away.
Submitted by brad on Mon, 2006-08-21 11:44.
One of the few positive things about the recent giant AOL data spill (which we have asked the FTC to look into) is that it has hopefully taught a few lessons about just how hard it is to truly anonymize data. With luck, the lesson will be “don’t be fooled into thinking you can do it” and not “just avoid what AOL did.”
There is some irony in the fact that, in general, AOL is one of the better performers. They don’t keep a permanent log of searches tied to userid, though it is tied, reports say, to a virtual ID. (I have seen other reports suggesting even this is erased after a while.) AOL also lets you turn off short-term logging of the association with your real ID. Google, MSN, Yahoo and others keep the data effectively forever.
Everybody has pointed out that for many people, just the search queries themselves can be enough to identify a person, because people search for things that relate to them. But many people’s searches will not be trackable back to them.
However, the AOL records maintain the exact time of the search, to the second or perhaps more accurately. They also maintain the site the user clicked on after doing the search. AOL may have wiped logs, but most sites don’t. Let’s say you go through the AOL logs and discover an AOL user searched and clicked on your site. You can go into your own logs and find that search, both from the timestamp, and the fact the “referer” field will identify that the user came via an AOL search for those specific terms.
Now you can learn the IP address of the user, and their cookies or even account with your site, if your site has accounts.
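A rough sketch of that log-matching step follows, assuming a common combined log format with a referer field. The log line, field layout, and AOL query parameter name here are all assumptions for illustration, not AOL's actual formats:

```python
# Scan one's own access log for visits that arrived via an AOL search
# for a given set of terms, recovering the visitor's IP and timestamp.
import re
from urllib.parse import urlparse, parse_qs

def find_aol_referrals(log_lines, search_terms):
    """Return (ip, timestamp) for hits whose referer was an AOL
    search on exactly the given query terms."""
    hits = []
    for line in log_lines:
        # Combined log format: ip ident user [time] "request" status bytes "referer"
        m = re.match(r'(\S+) \S+ \S+ \[([^\]]+)\] "[^"]*" \d+ \d+ "([^"]*)"', line)
        if not m:
            continue
        ip, ts, referer = m.groups()
        url = urlparse(referer)
        if "aol" not in url.netloc:
            continue
        # parse_qs decodes '+' to spaces in the query terms
        query = parse_qs(url.query).get("query", [""])[0]
        if query == search_terms:
            hits.append((ip, ts))
    return hits

log = ['10.0.0.7 - - [10/Aug/2006:14:03:22 -0700] "GET /page HTTP/1.1" 200 512 '
       '"http://search.aol.com/aol/search?query=uncommon+phrase"']
print(find_aol_referrals(log, "uncommon phrase"))
```

With the AOL records giving the exact time of each search, matching this against a site's own logs pins down the visitor even when the site has many hits for the same terms.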
If you’re a lawyer, however, doing a case where you can subpoena information, you could use that tool to identify almost any user in the AOL database who did a modest volume of searches. And the big sites with accounts could probably identify all their users who are in the database, getting their account id (and thus often name and email and the works.)
So even if AOL can’t uncover who many of these users are due to an erasure policy, the truth is that’s not enough. Even removing the site does not stop the big sites from tracking their own users, because their own logs have the timestamped searches. And an investigator could look for a query, do the query, see what sites you would likely click on, and search the logs of those sites. They would still find you. Even without the timestamp this is possible for an uncommon query. And uncommon queries are surprisingly common. :-)
I have a static IP address, so my IP address links directly to me. Broadband users who have dynamic IP addresses may be fooled — if you have a network gateway box or leave your sole computer on, your address may stay stable for months at a time — it’s almost as close a tie as a static IP.
The point here is that once the data are collected, making them anonymous is very, very hard. Harder than you think, even when you take into account this rule about how hard it is.
Submitted by brad on Fri, 2006-08-18 22:56.
You probably heard yesterday’s good news that the ACLU prevailed in their petition for an injunction against the NSA warrantless wiretapping. (Our case against AT&T to hold them accountable for allegedly participating in this now-ruled-unlawful program continues in the courts.)
However, the ruling was appealed (no surprise) and the government also asked for, and was granted a stay of the injunction. So the wiretaps won’t stop unless the appeal is won.
But this raises the question: why do you need a stay?
The line from the White House has been that the government engaged in this warrantless wiretapping because the President had the authority to do it, both inherently and under the famous AUMF. And they wanted to use that authority because they complained the official system mandated by law, requiring process before the FISA court, was just too cumbersome. This even though the FISA law allows immediate emergency wiretaps without a warrant, as long as a retroactive application is made soon after.
We’ve all wondered just why that’s too cumbersome. But they seemed to be saying that since the President had the authority to bypass the FISA court, why should they impede the program with all that pesky judicial oversight?
But now we have a ruling that the President does not have that authority. Perhaps that will change on appeal, but for now it is the ruling. So surely this should mean that they just go back to doing it the way the FISA regulations require it? What’s the urgent need for a stay? Could they not have been ready with the papers to get the warrants they need if they lost?
Well, I think I know the answer. Many people suspect that the reason they don’t go to FISA is not because it’s too much paperwork. It’s because they are trying to do things FISA would not let them do. So of course they don’t want to ask. (The FISA court, btw, has only told them no once, and even that was overturned. That’s about all the public knows about all its rulings.) I believe there is a more invasive program in place, and we’ve seen hints of that in press reports, with data mining of call records and more.
By needing this stay, the message has come through loud and clear. They are not willing to get the court’s oversight of this program, no way, no how. And who knows how long it will be until we learn what’s really going on?
Submitted by brad on Mon, 2006-08-14 23:39.
Last week at ZeroOne in San Jose, one of the art pieces reminded me of a sneaky idea I had a while ago. As you may know, many camcorders, camera phones and cheaper digital cameras respond to infrared light. You can check this out pretty easily by holding down a button on your remote control while using the preview screen on your camera. If you see a bright light, your camera sees infrared.
Anyway, the idea is to find techniques, be they arrays of bright infrared LEDs, or paints that shine well in infrared but are not obvious in visible light, and create invisible graffiti that only shows up in tourist photos and videos. Imagine the tourists getting home from their trip to Fisherman’s Wharf, and the side of the building says something funny or rude that they are sure wasn’t there when they filmed it.
The art piece at ZeroOne used this concept: to the naked eye it was a black monolith, but if you pulled out your camera phone or digital camera, you could see words scrolling down the front. It was amusing to watch people watch it. Another piece by our friends at .etoy also had people pulling out cameraphones to watch it. They displayed graphics made of giant pixels on a wall just a few feet from you. Up close, it looked like random noise, but widening your field of view (which the screen on a camera lets you do) allowed you to see the big picture: the images of talking faces. (My SLR camera’s 10mm lens through the optical viewfinder worked even better.)
That piece only really worked at night, though with superbright LEDs I think it could be done in the day. I don’t know if there are any paints or coatings to make this work well. It would be amusing to tag the world with tags that can only be seen when you pull out your camera.
Submitted by brad on Fri, 2006-08-11 17:06.
Everybody’s pulling out IBM PC stories on the 25th anniversary, so I thought I would relate mine. I had been an active developer as a teen in the 6502 world (Commodore Pet, Apple ][, Atari 800 and the like), and sold my first game to Personal Software Inc. back in 1979. PSI was just starting out, but the founders hired me on as their first employee to do more programming. The company became famous shortly thereafter by publishing VisiCalc, which was the first serious PC application, and the program that helped make Apple a computer company outside the hobby market.
In 1981, I came back for a summer job from school. Mitch Kapor, who had worked for Personal Software in 1980 (and had been my manager at the time), had written a companion for VisiCalc, called VisiPlot. VisiPlot did graphs and charts, and a module in it (VisiTrend) did statistical analysis. Mitch had since left, and was on his way to founding Lotus. Mitch had written VisiPlot in Apple ][ Basic, and he won’t mind if I say it wasn’t a masterwork of code readability; indeed I never gave it more than a glance. Personal Software, soon to be renamed VisiCorp, asked me to write VisiPlot from scratch, in C, for an unnamed soon-to-be-released computer.
I didn’t mention this, but I had never coded in C before. I picked up a copy of the Kernighan and Ritchie C manual, and read it as my girlfriend drove us over the plains on my trip from Toronto to California.
I wasn’t told much about the computer I would be coding for. Instead, I defined an API for doing I/O and graphics, and wrote to a generalized machine. Bizarrely (for 1981), I did all this by dialing up by modem to a Unix time-sharing service called CCA on the east coast. I wrote and compiled in C on Unix, and defined a serial protocol to send graphics back to (IIRC) an Apple computer acting as a terminal. And, in 3 months, I made it happen.
(Very important side note: CCA-Unix was on the Arpanet. While I had been given some access to an Arpanet computer in 1979 by Bob Frankston, the author of VisiCalc, this was my first day-to-day access. That access turned out to be the real life-changing event in this story.)
There was a locked room at the back of the office. It contained the computer my code would eventually run on. I was not allowed in the room. Only a very small number of outside companies were allowed to have an IBM PC — Microsoft, UCSD, Digital Research, VisiCorp/Software Arts and a couple of other applications companies.
On this day, 25 years ago, IBM announced their PC. In those days, “PC” meant any kind of personal computer. People look at me strangely when I call an Apple computer a PC. But not long after that, most people took “PC” to mean IBM. Finally I could see what I was coding for. Not that the C compilers were all that good for the 8088 at the time. However, 2 weeks later I would leave to return to school. Somebody else would write the library for my API so that the program would run on the IBM PC, and they released the product. The contract with Mitch required they pay royalties to him for any version of VisiPlot, including mine, so they bought out that contract for a total value close to a million — that helped Mitch create Lotus, which would, with assistance from the inside, outcompete and destroy VisiCorp.
(Important side note #2: Mitch would use the money from Lotus to found the E.F.F. — of which I am now chairman.)
The IBM PC was itself less exciting than people had hoped. The 8088 tried to be a 16-bit processor but it was really 8-bit when it came to performance. PC-DOS (later MS-DOS) was pretty minimal. But it had an IBM name on it, so everybody paid attention. Apple bought full-page ads in the major papers saying, “Welcome, IBM. Seriously.” Later they would buy ads with lines like Steve Jobs saying, “When I invented the personal computer…” and most of us laughed, but some of the press bought it. And of course there is a lot more to this story.
And I was paid about $7,000 for the just under 4 months of work, building almost all of an entire software package. I wish I could program like that today, though I’m glad I’m not paid that way today.
So while most people today will have known the IBM PC for 25 years, I was programming for it before it was released. I just didn’t know it!
Submitted by brad on Thu, 2006-08-10 23:25.
Quite frequently in non-HTML documents, such as E-mails, people will enclose their URLs in angle brackets, such as <http://foo.com>. What is the origin of this? For me, it just makes cutting and pasting the URLs much harder (it’s easier if they have whitespace around them, and easiest if they are on a line by themselves). It’s not any kind of valid XML or HTML; in fact, it would cause a problem in any document of that sort.
There’s a lot of software out there that parses URLs out of text documents, of course, but it all seems to do fine with whitespace and other punctuation. It handles the angle-bracket notation, but doesn’t need it. Is there any software out there that needs it? If not, why do so many people use this form?
Submitted by brad on Thu, 2006-08-10 01:39.
Many universities are now setting up to broadcast lectures over their LANs, often in video. Many students simply watch from their rooms, or even watch later. There are many downsides to this (fewer show up in class) but the movement is growing.
Here’s a simple addition that would be a bonanza for the cell companies. Arrange to offer broadcast of lectures to student cell phones. In this case, I mean live, and primarily for those who are running late to class. They could call into the number, put on their bluetooth headset and hear the start of the lecture on the way in. All the lecture hall has to do is put the audio into a phone that calls a conference bridge (standard stuff all the companies have already) and then students can call into the bridge to hear the lecture. In fact, the cell company should probably pay the school for all the minutes they would bill.
This need not apply only to lectures at universities. All sorts of talks and large meetings could do the same, including sessions at conferences.
Perhaps it would encourage tardiness, but you could also make the latecomers wait outside (listening) for an appropriate pause at which to enter.
Submitted by brad on Mon, 2006-08-07 13:51.
The blogosphere is justifiably abuzz with the release by AOL of “anonymized” search query histories for over 500,000 AOL users, trying to be nice to the research community. After the fury, they pulled it and issued a decently strong apology, but the damage is done.
Many people have pointed out obvious risks, such as the fact that searches often contain text that reveal who you are. Who hasn’t searched on their own name? (Alas, I’m now the #7 “brad” on Google, a shadow of my long stint at #1.)
But some others browsing the data have discovered something far darker. There are searches in there for things like “how to kill your wife” and child porn. Once that’s discovered, isn’t that now going to be sufficient grounds for a court order to reveal who that person was? It seems there is probable cause to believe user 17556639 is thinking about killing his wife. And knowing this very specific bit of information, who would impede efforts to investigate and protect her?
But we can’t have this happening in general. How long before sites are forced to look for evidence of crimes in “anonymized” data, with warrants then issued to de-nymize it? (Did I just invent a word?)
After all, I recall a year ago, I wanted to see if Google would sell adwords on various nasty searches, and what adwords they would be. So I searched for “kiddie porn” and other nasty things. (To save you the stigma, Google clearly has a system designed to spot such searches and not show ads, since people who bought the word “kiddie” may not want to advertise on those results.)
So had my Google results been in such a leak, I might have faced one of those very scary kiddie porn raids, which in the end would find nothing after tearing apart my life and confiscating my computers. (I might hope they would have a sanity check on doing this to somebody from the EFF, but who knows. And you don’t have that protection even if somebody would accord it to me.)
I expect we’ll be seeing the repercussions from this data spill for some time to come. In the end, if we want privacy from being data mined, deletion of such records is the only way to go.
Submitted by brad on Sun, 2006-08-06 20:15.
Those who know about my phone startup Voxable will know I have far more ambitious goals regarding presence and telephony, but during my recent hospital stay, I thought of a simple subset idea that could make hospital phone systems much better for the patient: namely, a way to easily specify whether it’s a good time to call the patient or not. Something as simple as a toggle switch on the phone, or, with standard phones, a couple of magic extensions patients can dial to set whether it’s a good time or not.
When you’re in the hospital, your sleep schedule is highly unusual. You sleep during the day frequently, you typically sleep much more than usual, and you’re also being woken up regularly by medical staff at any time of the day for visits, medications, blood pressure etc.
At Stanford Hospital, outsiders could not dial patient phones after 10pm, even if you might be up. On the other hand, even when calls can come through, callers worry about whether it’s a good time. So a simple switch on the phone would cause the call to be redirected to voice mail, or just to a recording saying it’s not a good time. Throw it to take a nap or do something else where you want peace and quiet. If you throw it at night, it stays in sleep mode for 8 or 9 hours; then it beeps and reverts to available mode. If you throw it in the day, it will revert in a shorter amount of time (because you might forget), though a fancier interface would let you specify the time on an IVR menu. Nurses would make you available when they wake you in the morning, or you could put up a note saying you don’t want this. (Since it seems to be the law that you can’t get the same nurse two days in a row.)
In particular, when doctors and nurses come in to do something with you, they would throw the switch, and un-throw it when they leave, so you don’t get a call while in the middle of an examination. The nurse’s RFID badge, which they are all getting, could also trigger this.
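The timed-revert behavior described above could be sketched as follows; the hour thresholds and durations are my own illustrative assumptions, not part of any real phone system:

```python
# How long the do-not-disturb switch stays thrown, depending on when
# it was thrown: a long sleep window at night, a short nap window by day.
NIGHT_REVERT_HOURS = 8   # thrown at night: assume a full night's sleep
DAY_REVERT_HOURS = 2     # thrown by day: revert sooner, the patient may forget

def revert_delay(hour_thrown):
    """Return hours until the phone beeps and reverts to available mode."""
    is_night = hour_thrown >= 22 or hour_thrown < 6
    return NIGHT_REVERT_HOURS if is_night else DAY_REVERT_HOURS

print(revert_delay(23))  # thrown at 11pm
print(revert_delay(14))  # thrown at 2pm
```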
Now people who call would know they got you at a good time, when you’re ready to chat. Next step: design a good way for the phone to be readily reachable by people in pain, such as hanging from the ceiling on a retractable cord, or retracting into the rail on the side of the bed. It’s very annoying, when in pain, to begin the slow process of getting to the phone, only to have the caller give up just as you reach it.
Submitted by brad on Wed, 2006-08-02 18:28.
There are many proposals out there for tools to stop Phishing. Web sites that display a custom photo you provide. “Pet names” given to web sites so you can confirm you’re where you were before.
I think we have a good chunk of one anti-phishing technique already in place with the browser password vaults.
Now I don’t store my most important passwords (bank, etc.) in my password vault, but I do store most medium-importance ones there (accounts at various billing entities, etc.). I just use a simple common password for web boards, blogs and other places where the damage from compromise is nil to minimal.
So when I go to such a site, I expect the password vault to fill in the password. If it doesn’t, that’s a big warning flag for me. And so I can’t easily be phished for those sites. Even skilled people can be fooled by clever phishes. For example, a test phish to bankofthevvest.com (two “v”s instead of a “w”, which looks identical in many fonts) fooled even skilled users who check the SSL lock icon, etc.
The browser should store passwords in the vault, and even the “don’t store this” passwords should have a hash stored in the vault unless I really want to turn that off. Then, the browser should detect if I ever type a string into any box which matches the hash of one of my passwords. If my password for bankofthewest is “secretword” and I use it on bankofthewest.com, no problem. “secretword” isn’t stored in my password vault, but the hash of it is. If I ever type in “secretword” to any other site at all, I should get an alert. If it really is another site of the bank, I will examine that and confirm to send the password. Hopefully I’ll do a good job of examining — it’s still possible I’ll be fooled by bankofthevvest.com, but other tricks won’t fool me.
The key needs in any system like this are that it warns you of a phish, and that it rarely gives you a false warning. The latter is hard to do, but this comes decently close. However, since I suspect most people are like me and have a common password they use again and again at “who-cares” sites, we don’t want to be warned all the time. The second time we use that password, we’ll get a warning, and we need a box to say, “Don’t warn me about re-use of this password.”
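A minimal sketch of the hash-in-vault check: store only a salted hash of each password alongside the site it belongs to, and warn when a password typed at one site matches the hash stored for another. The site names and passwords here are made up for illustration:

```python
# Vault that stores salted hashes of passwords per site, and flags a
# typed password that matches another site's stored hash (possible phish).
import hashlib
import os

class PasswordVault:
    def __init__(self):
        self.salt = os.urandom(16)
        self.hashes = {}  # site -> salted hash of the password used there

    def _hash(self, password):
        return hashlib.pbkdf2_hmac("sha256", password.encode(), self.salt, 100_000)

    def remember(self, site, password):
        self.hashes[site] = self._hash(password)

    def check_typed(self, site, typed):
        """Return the list of OTHER sites whose stored password matches
        what was just typed here -- an empty list means no phish alert."""
        h = self._hash(typed)
        return [s for s, stored in self.hashes.items()
                if stored == h and s != site]

vault = PasswordVault()
vault.remember("bankofthewest.com", "secretword")
print(vault.check_typed("bankofthewest.com", "secretword"))   # same site: no alert
print(vault.check_typed("bankofthevvest.com", "secretword"))  # alert: reused elsewhere
```

Because only hashes are kept, even the “don’t store this” passwords can participate in the check without the vault holding the password itself.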
Read on for subtleties…
Submitted by brad on Mon, 2006-07-31 15:08.
Right now this blog is hosted by PowerVPS, which provides virtual private servers. That is to say, they have a large, powerful box, and they run virtualization software (Virtuozzo) which allows several users to have the illusion of a private machine, on which they are the root user. In theory users get an equal share of the machine, but since most of the users do not run at full capacity, any user can "burst" to temporarily use more resources.
Unfortunately I have found that this approach does fine with CPU, but not with RAM. The virtual server I first used had 256MB of RAM (burstable to 1GB) available to it. But it was not able to perform at the level a dedicated server with 256MB of RAM -- swapping the rest to disk -- would. It also doesn't perform anywhere near the level of a non-virtualized shared server, which is what you will commonly see in very cheap web hosting. An ordinary shared server looks like normal multi-user timesharing, though such hosts tend to virtualize Apache so it looks like everybody gets their own Apache.
I eventually had to double my virtual machine's capacity -- and double the monthly fee. You probably saw an increase in the speed of this blog a couple of weeks ago.
Now the virtual machines out there are pretty good, and impose only a modest performance hit when you run one. But when you run many, you lose the OS's ability to run many copies of the same program while keeping only one copy in memory.
I propose a more efficient design that mixes shared-machine and virtual-machine concepts. One step would be to not have every user run their own MySQL database. MySQL takes about 50MB of RAM, which is not much today, but a lot if multiplied out 16 times. Instead, have one special virtual server (or just a different dedicated machine) with a copy of MySQL. This would be a special version, which virtualizes the connection, so that as far as each IP address connecting to it is concerned, it thinks it has a private version of MySQL. This means that everybody can create a database called "drupal" (as far as they can tell) if they want to. The virtualizer would add a prefix to the names based on which customer is connecting. This would also apply to permissions, so each root user would be different, and really only have global permissions on the right databases.
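The name-prefixing could be sketched like this. A real implementation would rewrite names inside the MySQL wire protocol; this simplified text-rewriting illustration just shows the idea, and the statement patterns and prefix scheme are my own assumptions:

```python
# Rewrite bare database names in a few common SQL statements with a
# per-customer prefix, so each customer can "own" a database named drupal.
import re

def rewrite_db_names(sql, customer):
    """Prefix database names in CREATE DATABASE, USE, and GRANT statements."""
    prefix = f"cust{customer}_"
    patterns = [
        (r"(\bCREATE DATABASE\s+)(\w+)", rf"\1{prefix}\2"),
        (r"(\bUSE\s+)(\w+)",             rf"\1{prefix}\2"),
        (r"(\bGRANT\b.* ON )(\w+)(\.\*)", rf"\1{prefix}\2\3"),
    ]
    for pat, repl in patterns:
        sql = re.sub(pat, repl, sql, flags=re.IGNORECASE)
    return sql

print(rewrite_db_names("CREATE DATABASE drupal", 42))
# CREATE DATABASE cust42_drupal
print(rewrite_db_names("USE drupal", 7))
# USE cust7_drupal
```

The same mapping applied to the permission tables would keep each customer's "root" confined to databases carrying that customer's prefix.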
You would not be able to modify MySQL's parameters or start and stop it -- unless you went back to running a private copy in your own virtual server. But if you didn't need that, you would get a more efficient database server.
The bad news -- it's up to the hosting companies to do this. MySQL AB doesn't get paid by those hosting companies, so it's not particularly motivated to put in changes for them. But it's an open source system so others could write such changes.
The other big users on web hosts are Apache and PHP. There are many virtualized versions of Apache, but this is often where people do want virtualization, to run custom scripts, Java programs and special CGIs. Providing a mixed shared/virtual environment here would be more difficult. One easy approach would be to run two web sites, with some pages on the shared site and links going to the virtual site. More cleverly, the shared Apache could have internal rewrite rules, not shown to outsiders, that cause it to fetch and forward pages from the user's virtualized web server.
Submitted by brad on Fri, 2006-07-28 13:47.
Yesterday I received a Dell 3007WFP panel display. The price hurt ($1600 on eBay, $2200 from Dell, though sometimes there are coupons) and you need a new video card (and to top it off, 90% of the capable video cards are PCI-e, which may mean a new motherboard), but there is quite a jump in moving to this 2560 x 1600 (4.1 megapixel) display if you are a digital photographer. This is a very similar panel to Apple's Cinema Display, but a fair bit cheaper.
It's great for ordinary windowing and text of course, which is most of what I do, but for that it's a great deal cheaper just to get multiple displays. In fact, up to now I've been using CRTs, since I have a desk designed to hold 21" CRTs and they are cheaper and blacker to boot. You can have two 1600x1200 21" CRTs for probably $400 today and get the same screen real estate as this Dell.
But that really doesn't do for photos. If you are serious about photography, you almost surely have a digital camera with more than 4MP, and probably way more. If it's a cheap-ass camera it may not be sharp if viewed at 1:1 zoom, but if it's a good one, with good lenses, it will be.
If you're also like me you probably never see 99% of your digital photos except on screen, which means you never truly see them. I print a few, mostly my panoramics and finally see all their resolution, but not their vibrance. A monitor shows the photos with backlight, which provides a contrast ratio paper can't deliver.
At 4MP, this monitor is only showing half the resolution of my 8MP 20D photos. And when I move to a 12MP camera it will only be a third, but it's still a dramatic step up from a 2MP display. It's a touch more than twice as good because the widescreen aspect ratio is a little closer to the 3:2 of my photos than the 4:3 of 1600x1200. Of course if you shoot with a 4:3 camera, here you'll be wasting pixels. In both cases, of course, you can crop a little so you are using all the pixels. (In fact, a slideshow mode that zoom/crops to fully use the display would be a handy mode. Most slideshows offer 1:1 and zoom to fit based on no cropping.)
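The aspect-ratio arithmetic is easy to check. A quick sketch (the helper function is mine, for illustration; the display sizes are the ones above):

```python
def pixels_shown(disp_w, disp_h, photo_aspect):
    """Pixels a photo of the given aspect ratio lights up when
    zoomed to fit the display with no cropping."""
    if photo_aspect >= disp_w / disp_h:
        # photo is wider than the display: it fills the width
        w = disp_w
        h = disp_w / photo_aspect
    else:
        # photo is taller than the display: it fills the height
        h = disp_h
        w = disp_h * photo_aspect
    return w * h

# A 3:2 photo on the 2560x1600 Dell vs. a 1600x1200 CRT
on_dell = pixels_shown(2560, 1600, 3 / 2)  # ~3.84 megapixels
on_crt = pixels_shown(1600, 1200, 3 / 2)   # ~1.71 megapixels
print(on_dell / on_crt)  # ~2.25 -- "a touch more than twice as good"
```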
There are many reasons for having lots of pixels aside from printing and cropping. Manipulations are easier and look better. But let's face it, actually seeing those pixels is still the biggest reason for having them. So I came to the conclusion that I just haven't been seeing my photos, and now I am seeing them much better with a screen like this. Truth is, looking at pictures on it is better than any 35mm print, though not quite at 35mm slide quality.
Dell should give me a cut for saying this.
Long ago I told people not to shoot on 1MP and 2MP digital cameras instead of film, because in the future, displays would get so good the photos will look obviously old and flawed. That day is now well here. Even my 3MP D30 pictures don't fill the screen. I wonder when I'll get a display that makes my 8MP pictures small.
Submitted by brad on Thu, 2006-07-27 14:46.
Today, Congress passed 410-15 the Delete Telephony Online Predators Act, or DTOPA. This act requires all schools and libraries to block access, by default, to the social networking system called the “telephone.” All libraries receiving federal funding, and schools receiving E-rate funding, must immediately bar access to this network. Blocks can be turned off, on request, for adults, and when students are under the supervision of an adult.
“This is not the end-all bill,” Rep. Fred Upton (R-Mich.) said. “But, we know sexual predators should not have the avenue of our schools and libraries to pursue their evil deeds.” The “telephone” social network allows voice conversation between a student and virtually any sexual predator in the world. Once a predator gets a child’s “number” or gives his number to the child, they can speak at any time, no matter where the predator is in the world.

Many children have taken to carrying small pocket “telephones” which can be signalled by predators at any time. Use of these will be prohibited.
Submitted by brad on Tue, 2006-07-25 18:03.
Transit is of course more efficient than private cars, carrying many people in one vehicle. But because a round trip for a couple or family involves buying 4 to 8 single tickets, couples and families who have cars will often take their cars unless parking is going to be a problem.
For example, for us to go downtown it’s $6 within SF. For people taking BART from Berkeley or Oakland it’s $13.40 for 2 people. Makes it very tempting to take a car, even if it costs a similar amount (at 35 cents/mile, 15 of those for gasoline in a city) for the convenience and, outside of rush-hour, speed.
So even if transit is the winning choice for one, it often isn’t for 2. And while 2 in a car is better than 1, an extra 2 on transit during non-peak hours is even better for traffic and the environment.
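A rough sketch of the arithmetic (the 25-mile round trip is an assumed example; the fares and 35 cents/mile are the figures above):

```python
def transit_cost(fare_per_person_roundtrip, riders):
    """Round-trip transit cost for a group paying single fares."""
    return fare_per_person_roundtrip * riders

def car_cost(miles_roundtrip, cost_per_mile=0.35):
    """Round-trip car cost at a flat per-mile rate."""
    return miles_roundtrip * cost_per_mile

# BART from Berkeley for two: $6.70 each way-pair, so $13.40 total
bart = transit_cost(6.70, 2)
# A hypothetical 25-mile round trip by car
car = car_cost(25)
print(bart, car)  # 13.40 vs 8.75: the car wins on price for two
```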
Many transit agencies offer a one-day family pass, mostly aimed at tourists. There may be some that also offer what I am going to propose, which is a more ordinary one-way or return ticket for groups of people living at the same address, that is sufficiently discounted to make them do the transition from car to transit.
This isn’t trivial: we don’t want drivers to have to check addresses on IDs as people get on the bus. They can check a simple card, though.
For example, people could get a simple, non-logged card with their photo and some simple word, symbol or colour combination, so that the driver can tell right away that all the cards were picked up together. (For example, they could all have the same randomly chosen word on them in large print, or 3 colour stripes.)
The household/family fare would be only available outside of hours where the transit cars get loaded to standing room. Past that point each rider should pay, and driving is usually rough then anyway. Passengers could board, show their matching cards, and get reduced, or even free, fares for the additional people. The driver could look at the photos but probably only needs to do that from time to time. (Mainly, we would be trying to stop somebody from getting a set of household cards and selling cheap rides to random people at the stop. Not that likely an event anyway, but random photo checks could stop it.)
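Issuing the matched card sets could be sketched in a few lines (the word and colour lists here are invented for illustration):

```python
import random

WORDS = ["tuba", "maple", "comet", "lantern", "pebble"]      # illustrative
COLOURS = ["red", "blue", "green", "yellow", "purple", "black"]

def issue_household_cards(n_members, rng=random):
    """Make n cards sharing one random word and one set of three
    colour stripes, so a driver can see at a glance that they
    were issued together."""
    word = rng.choice(WORDS)
    stripes = tuple(rng.choice(COLOURS) for _ in range(3))
    return [{"word": word, "stripes": stripes} for _ in range(n_members)]

cards = issue_household_cards(4)
assert all(c == cards[0] for c in cards)  # every card in the set matches
```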
It’s harder to manage at automatic fare stations as found on subways. There you could get more abuse, but it might not be so much as to present a problem. The main issue would be homeless people “renting” card sets to groups of people who arrive at a turnstile. (At fancy pay-to-pee public toilets in SF, the homeless were given unlimited use tokens. Better that than have them urinate on the streets for lack of a quarter. They promptly got to renting these tokens to tourists wanting to use the toilets.)
If you’re not too worried about abuse, family tickets could simply be purchased in advance from a desk where they can check that everybody is in the same household. The adults would have to show (easiest for couples) but they need not bring the kids, who already get reduced fares as it is, though in the household ticket they would probably be free.
I presume some transit agencies already do this since the one-day passes are common enough. How do they work it out? Is it aimed at locals rather than tourists? Do they assume locals close to the transit line get monthly passes?
Submitted by brad on Mon, 2006-07-24 22:40.
Everybody in the blogosphere has heard something about Alaska’s Ted Stevens calling the internet a series of tubes.
They just heard him wrong. His porn filters got turned off and he discovered the internet was a series of pubes.
(And, BTW, I think we’ve been unfair to Stevens. While it wasn’t high traffic that delayed his E-mail — “an internet” — a few days, his description wasn’t really that bad… for a senator.)
Submitted by brad on Mon, 2006-07-24 12:57.
A proposal by a Stanford CS Prof for a means to switch the U.S. Presidential race from electoral college to popular vote is gaining some momentum. In short, the proposal calls for some group of states representing a majority of the electoral college to agree to an inter-state compact that they will vote their electoral votes according to the result of the popular vote.
State compacts are like treaties but are enforceable by both state courts and federal law, so this has some merit. In addition, you actually don’t even need to get 270 electoral votes in the compact. All you really need is a much smaller number of “balanced” states. For example perhaps 60 typically republican electoral votes and 60 typically democratic electoral votes. Maybe even less.
For example I think a compact with MA, IL, MN (42 Dem) and IN, AL, OK, UT, ID, KS (42 Rep) might well be enough, certainly to start.
Not that it hurts if CA, NY or TX join.
That’s because normally the electoral college already follows the popular vote. If it’s not going to, the race is very close, and a fairly small number of states in the compact would be assured to swing the electoral college to the popular vote in that case. There are a few exceptions I’ll talk about below, but largely this would work.
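A toy re-tally shows why even a small balanced compact swings a close race (all numbers are invented for illustration):

```python
def tally(outside_ev, compact_ev, popular_winner):
    """Re-tally electoral votes when compact states pledge their
    EVs to the national popular-vote winner.
    outside_ev: EVs won outside the compact, per candidate.
    compact_ev: EVs held by compact states, per candidate (how those
    states would otherwise have voted)."""
    result = dict(outside_ev)
    total_compact = sum(compact_ev.values())
    for cand in result:
        result[cand] += total_compact if cand == popular_winner else 0
    return result

# A close race: A would win the college 272-266 while losing the
# popular vote. The compact holds 42 EVs leaning A and 42 leaning B.
outside = {"A": 272 - 42, "B": 266 - 42}
compact = {"A": 42, "B": 42}
print(tally(outside, compact, popular_winner="B"))  # {'A': 230, 'B': 308}
```

With all 84 pledged votes moving to the popular-vote winner, the college flips to match the popular result.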
This is unlike proposals for states to, on their own, do things like allocate their electors based on popular vote within the state, as Maine does. Such proposals don’t gain traction because there is generally going to be somebody powerful in the state who loses under such a new rule. In a state solidly behind one party, they would be fools to effectively give electoral votes to the minority party. In a balanced state, they would be giving up their coveted “swing state” status, which causes presidential candidates to give them all the attention and election-year gifts.
Even if, somehow, many states decided to switch to a proportional college, it is an unstable situation. Suddenly, any one state that is biased towards one party (both in state government and electoral college history) is highly motivated to put their candidate over the top by switching back to winner-takes-all.
There’s merit in the popular-vote-compact because it can be joined by “safe” states, so long as a similar number of safe votes from the other side join up. The safe states resent the electoral college system, it gets them ignored. Since close races are typically decided by a single mid-sized state, even a very small compact could be surprisingly effective — just 3 or 4 states!
The current “swing state” set is AZ, AR, CO, FL, IA, ME, MI, MN, MO, NV, NH, NM, NC, OH, OR, PA, VA, WA, WV, and WI, though of course this set changes over time. However, once states commit to a compact, they will be stuck with it, even if it goes against their interests down the road.
The one thing that interferes with the small-compact is that even the giant states like New York, Texas and California can become swing states if the “other” party runs a native candidate. California in particular. (In 1984 Mondale won only Minnesota, and got just under 50% of the vote. Anything can happen.) That’s why you don’t just get an “instant effective compact” from just 3 states like California matching Texas and Indiana. But there are small sets that probably would work.
Also, a tiny compact such as I propose would not undo the “campaign only in swing states” system so easily. A candidate who worked only on swing states (and won them) could outdo the extra margin now needed because of the compact. In theory. If the compact grew (with non-swing states, annoyed at this, joining it) this would eventually fade.
Of course the next question may surprise you. Is it a good idea to switch from the electoral college system? Four times the winner of the popular vote has lost the White House (strangely, three of those were the three times the winner was a President's son, as with GWB and Adams, or grandson, as with Harrison). The framers of the constitution, while they did not envision the two-party system we see today, intended for the winner of the popular vote to be able to lose the electoral college.
When they designed the system, they wanted to protect against the idea of a “regional” president. A regional winner would be a candidate with extreme popularity in some small geographic region. Imagine a candidate able to take 90% of the vote in their home region, that region being 1/3 of the population. Imagine them being less popular in the other 2/3 of the country, only getting 31% of the vote there. This candidate wins the popular vote, but would lose the electoral college (quite solidly.) Real examples would not be so simple. The framers did not want a candidate who really represented only a small portion of the country in power. They wanted to require that a candidate have some level of national support.
The Civil War provides an example of the setting for such extreme conditions. In that sort of schism, it’s easy to imagine one region rallying around a candidate very strongly, while the rest of the nation remains unsure.
Do we reach their goal today? Perhaps not. However, we must take care before we abandon their goal to make sure it’s what we want to do.
Update: See the comments for discussion of ties. Also, I failed to discuss another important issue to me, that of 3rd parties. The electoral debacle of 2000 hurt 3rd parties a lot, with a major “Ralph don’t run” campaign that told 3rd parties, “don’t you dare run if you could actually make a difference.” A national popular vote would continue, and possibly strengthen the bias against 3rd parties. Some 3rd parties have been proposing what they call a “safe state” strategy, where they tell voters to only vote for their presidential candidate in the safe states. This allows them to demonstrate how much support they are getting (and with luck the press reports their safe-state percentage rather than national percentage) without spoiling or being accused of spoiling.
Of course, I think the answer for that would be a preferential ballot, which would have to be done on a state by state basis, and might not mesh well with the compact under discussion.
Submitted by brad on Thu, 2006-07-20 14:46.
Big news today. Judge Walker has denied the motions — particularly the one by the federal government — to dismiss our case against AT&T for cooperating with the NSA on warrantless surveillance of phone traffic and records.
The federal government, including the heads of the major spy agencies, had filed a brief demanding the case be dismissed on “state secrets” grounds. This common law doctrine, which is often frighteningly successful, allows cases to be dismissed, even if they are of great merit, if following through would reveal state secrets.
Here is our brief note, which has a link to the decision.
This is a great step. Further application of the state secrets rule would have made legal oversight of surveillance by spy agencies moot. We can write all the laws we want governing how spies may operate, and how surveillance is to be regulated, but if nobody can sue over violations of those laws, what purpose do they really have? Very little.
Now our allegations can be tested in court.
Submitted by brad on Wed, 2006-07-19 14:48.
An interesting article in the WSJ yesterday on the paradox of abundance describes how many Netflix customers are putting many “highbrow” or “serious” movies on their lists, then letting them sit for months, unwatched, even returning them unwatched.
This sounds great for Netflix, of course, though it would be bad for Peerflix.
It echoes something I have been observing in my own household with the combination of a MythTV PVR with lots of disk space and a Peerflix subscription. When the time pressure of the old system goes off, stuff doesn’t get watched.
This is a counter to one of the early phenomena that people with PVRs like Tivo/MythTV experience, namely watching more TV because it’s so much more convenient and there’s much more to watch than you imagined. In particular, when you record a series on your PVR, you watch every episode of that series unless you deliberately try not to (as I do with my “abridged” series watching system, where I delete episodes of shows if they get bad reviews).
In the past, with live TV, you might be a fan of a series, but you were going to miss a few. They expected you to and included “Previously on…” snippets for you. For a few top series you set up the VCR, but even then it missed things. And only the most serious viewers had a VCR record every episode of every show they might have interest in. But that’s easy with the PVR.
We’ve found some of our series watching to be really delayed. Sometimes it’s deliberate — we won’t watch the cliffhanger final episode of a season until we know we have the conclusion at the start of the next season, though that has major spoiler risks. Sometimes there will be series fatigue, where too much of your viewing time has gone to a set of core series and you are keen for something else — anything else. Then the series languishes.
Now there is some time pressure on the DVR: eventually it runs out of disk space and gets rid of old shows. That puts the DVDs from Peerflix or Netflix in even more trouble. Some have indeed gone 6 months without being watched.
As the WSJ article suggests, part of it relates to the style of show. One is always up for lighthearted shows, comedies etc. But sitting there for months is The Pianist. For some reason when we sit down in front of the TV and want to pick a show, Nazis never seem very appealing. Even though we know from recommendations that it’s a very good film.
When the cinema was the normal venue for films, the system of choice was different. First of all, if we decide we want to go out to a movie, we’ll consider the movies currently playing. Only a small handful will be movies we think worthwhile to go to. In that context, it’s much more likely we might pick a serious or depressing movie with Nazis in it. It could easily be the clear choice in our small list. In addition, we know a movie will only be in cinemas for a short time; any given movie, especially a serious one, may be gone in a few weeks. That’s even more true in smaller markets.
I’ve also noticed a push for shorter programming. When you’ve rented a DVD, your plan for the evening is clear: you are going to watch a movie at home. When you just sit down to choose something from your library, the temptation is strong to watch shorter things instead of making a 2-hour commitment to a longer one.
These factors are even more true when there are 2 or more people to please, instead of just one. The reality seems to be that when the choice is 2 hours of war or Nazis or a 22 minute TV comedy, the 22 minute comedy — even several of them in a row — is almost always the winner. Also popular are non-fiction shows, such as science and nature shows, which carry no strict time commitment since you can readily stop them in the middle to resume later with no suspense.
Anyway, as you can see, the WSJ article resonated with me. Since the phenomenon is common, the next question is what this means for the industry. Will the market for more serious movies be diminished? The public was already choosing lighter movies over serious ones, but now even those who do enjoy the serious movies may find themselves tending away from them.
Of course, if people take a DVD from Netflix and leave it on the shelf for months, that actually helps the market for the disk in the rental context, helps it quite a bit. Far more copies are needed to meet the demands of the viewers, even if there are fewer viewers. However, the real shift coming is to pay-per-view and downloading. If people look at the PPV menu and usually pick the light movie over the serious one, then the market for the serious ones is sunk.