Submitted by brad on Fri, 2006-10-06 22:43.
When you call most companies today, you get a complex “IVR” (a menu with speech or touch-tone commands). In many cases the IVR offers you a variety of customer service functions which can be done far more easily on the web site. And indeed, the prompts usually tell you to visit the web site to do such things.
However, haven’t we all shouted, “I am already at your damned web site; I wouldn’t be calling you to do those things!”
And they should know this. So if you’re on the web site, and you’ve done more than just click on the “Contact Us” tab, then when you finally do click on the tab asking for a phone number, you should not get the same phone number that is given to newcomers or printed in non-web locations.
You should get a special phone number that says, “This customer is already on the web site. Don’t bother offering things that can be done far more easily on the web site.”
Now I understand why they offer these things. Agents cost money and they want to divert customers to automated systems if at all possible. But if I’m already at the automated system, I am usually calling for just a few reasons. Perhaps I want web site support, but I probably need an agent to do something that’s hard or impossible to do on the web site. Why frustrate me?
Of course, even better is if you have an eCRM system that integrates the call center and the web experience. Many companies now have a click-to-call link on their page. Some even connect you with an agent who has your information already from the history on the web site, but this is annoyingly rare. All this stuff is expensive and involves buying new tools and fancy reprogramming. What I propose is pretty trivial — a much simpler menu gated by the phone number the person came in on. Any IVR can do that with a small amount of work.
Now I see one hole. The “Gets to an agent fast” number might of course be spread around, and people would want to use it for all their calls, defeating (to the company) the purpose of all those menus. But today, numbers are cheap. You can get a block of 100 numbers and change the magic one every day. Or, with a little bit of programming, really not that much, you can have the web site tell the true web-sourced callers “Dial extension xxxx when you get connected.” That’s a little fancier, requires the IVR be programmed to know about a changing extension, but again it’s not nearly so hard as buying a whole eCRM system.
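As a sketch of the “changing extension” idea: the web site and the IVR could share a secret and each derive the day’s extension from the date, so nothing needs to be synchronized beyond the clock. The secret and the digit-mapping here are hypothetical, just to show how little programming is involved:

```python
import hmac
import hashlib
from datetime import date

# Hypothetical secret known to both the web server and the IVR.
SECRET = b"shared-with-the-ivr"

def todays_extension(day: date) -> str:
    """Derive a 4-digit extension that changes every day."""
    digest = hmac.new(SECRET, day.isoformat().encode(), hashlib.sha256).digest()
    # Map the digest onto the range 1000-9999.
    return str(int.from_bytes(digest[:4], "big") % 9000 + 1000)
```

The web site prints the result of `todays_extension(date.today())`, and the IVR computes the same value to decide who gets the short menu. Yesterday’s extension can be honored for a grace period to cover midnight stragglers.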
I know that companies don’t want to frustrate their customers; they think the IVRs are saving them enough money to offset the frustration. But in this case, they are costing money, as the person wastes time listening to a pointless IVR. Let’s stop it!
Submitted by brad on Thu, 2006-10-05 21:56.
Every driver of a regular car knows this frustration well. You’re behind a big SUV or Minivan and you can no longer see what’s happening ahead of you, the way you can with ordinary cars. This is not simply because the ordinary cars are shorter, it’s because you can see through the windows of the ordinary car — they are at your level.
Of course trucks have always blocked the way but in the past they were few in number. Now that half the cars on the road are tall, being blocked is becoming the norm. This is dangerous, since good driving requires tracking the cars in front of the one you are following, and reacting to their brake lights as well.
Now that flat-panel displays are plummeting in price, I propose that any vehicle that can’t be easily seen through by a driver in a standard-height car must put a flat-screen display on the back, said display showing the view of a camera on the front of the vehicle, ideally configured to act like a window would for a car at some modest distance behind the screen.
(A really clever display would track the distance of the car behind and zoom the view so it acts exactly like a window if it were big enough, or at least show what a big window would.)
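The “act like a window” geometry is simple: a screen of a given width, viewed from a given distance, subtends the same angle a real window would, so the camera’s horizontal field of view just has to match it. A rough sketch of that calculation (the names and units are mine, not from any real product):

```python
import math

def window_fov_deg(screen_width_m: float, viewer_distance_m: float) -> float:
    """Horizontal field of view the camera must capture so the
    screen behaves like a window for a driver this far behind it."""
    return math.degrees(2 * math.atan(screen_width_m / (2 * viewer_distance_m)))
```

For a half-meter-wide screen viewed from 5 m back, that works out to under 6 degrees, so the camera would actually need a fairly telephoto view, zoomed tighter as the following car drops back.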
I’m not talking HDTV here, though of course that would be nice and would become the norm a few years later. It might just be a 20” widescreen style display. For computers, these are dropping under $500 with HD resolution, and less with TV resolution. Admittedly car-mounted units would start off being more expensive in order to be rugged enough, though lots of people are putting small panels in their cars today.
It would of course need a very bright backlight for daytime, and an automatic adjustment of brightness for the night.
Quite a bit cheaper would be to just have the SUVs/minivans carry the camera, and transmit the video over RF. The drivers of cars could be the ones who have to buy screens, in this case small dashboard screens, which are cheaper than big ones and already exist in many cars for GPS. The big problem here is receiving only the signal of the car in front of you. You would need a protocol where cars that transmit also receive with highly directional antennas. Thus they would examine the direction of all signals they receive from other cameras, automatically pick a free band, and then transmit, “I’m car X. Car Y is in front of me, car Z in front of it. Cars A and B are right front and direct right, car C is left, car D is behind me (probably you!)”
In fact it would be giving signal strength info from all directionals. It should be pretty easy then to tell, with all that info from all the cars around you, which is the car directly in front of you.
Then display it on the dash or even in a heads up display where the tail of the car is.
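Picking out “the car directly in front of you” from all those beacons could be as simple as taking the strongest signal inside a narrow forward cone. A toy sketch, with hypothetical data structures standing in for the real antenna readings:

```python
from dataclasses import dataclass

@dataclass
class Beacon:
    car_id: str
    bearing_deg: float   # 0 = straight ahead, measured clockwise
    strength_db: float   # higher = stronger

def car_ahead(beacons, cone_deg: float = 15.0):
    """Return the strongest beacon within a narrow forward cone,
    i.e. the transmitter most likely to be the car directly ahead."""
    ahead = [b for b in beacons
             # Normalize bearing to [-180, 180) before testing the cone.
             if abs((b.bearing_deg + 180) % 360 - 180) <= cone_deg]
    return max(ahead, key=lambda b: b.strength_db, default=None)
```

In practice you would also cross-check against the “car Y is in front of me” chatter from the surrounding cars, but the cone test alone gets you most of the way.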
For privacy reasons, cars could change their serial number from time to time so this can’t be used to track them, though there is a virtue in broadcasting the license plate so you can confirm you are really seeing the view of the car ahead of you by reading the plate.
This solution would cost under $50 for the camera and transmitter, much easier to mandate. The receiver would be an option car owners could buy. Not as fair of course, since the vision blockers should be the ones paying for this.
Submitted by brad on Tue, 2006-10-03 12:07.
We should all be disturbed by the story of a man who was questioned and missed his flight because he spoke on his cell phone in Tamil. Some paranoid thought it was suspicious, reported it, and so the guy gets pulled and misses his flight.
This is not the first time. People have been treated as suspicious for speaking in all sorts of languages, including Arabic, Hebrew, and Urdu, or just for being Arabs or Sikhs. Sometimes it’s been a lot worse than just missing your flight.
So here’s a simple rule. If you want to report something as suspicious, then you don’t fly until the matter is resolved. After all, if you are really afraid, you wouldn’t want to fly. Even with the nasty foreigner pulled off the plane, you should be afraid of conspiracies with teams of villains. So you go into the holding cell and get a few questions too.
Now frankly, I would want to do much worse when it turns out the suspect is very obviously innocent. But I know that won’t get traction because people will not want to overly discourage reports lest they discourage a real report. But based on my logic above, this should not discourage people who think they really have something. At least not the first time.
TSA employees are of course in a CYA mode. They can’t screen out the paranoia because they aren’t punished for harassing the innocent, but they will be terribly punished if they ignore a report of somebody suspicious and decide to do nothing. That’s what we need to fix long term, as I’ve written before. There must be negative consequences for people who implement security theatre and strip the innocent of their rights, or that’s what we will get.
Submitted by brad on Mon, 2006-10-02 12:29.
More cars are being made “drive-by-wire” where the controls are electronic, and even in cars with mechanical steering, throttle and brake linkages, there also exist motorized controls for power steering and cruise control. (It’s less common on the brakes.)
As this becomes more common, it would be nice if one could pop in a simple, short-duration control console on the passenger’s side. It need not be a large, full set of controls; it might be closer to a video game controller in size.
The goal is to make it possible for the driver to ask the passenger to “take the wheel” for a short period of time in a situation where the driving is not particularly complex. For example, if the driver wants to take a phone call, or eat a snack or even just stretch for a minute. For long term driving, the two people should switch. It could also be used in an emergency, if the driver should conk out, but that’s rare enough I don’t think it’s all that likely people would have the presence of mind to pop out the auxiliary controls and use them well.
The main question is, how dangerous is this? Disabled people drive with hand controls for throttle and brakes, though of course they train with this and practice all the time. You would want people to practice driving with the mini-console before using it on a live road. A small speed display would be needed.
While it’s possible to just pass over steering, and have the person in the driver’s seat stay ready with the brakes, that seems risky to me, even if it’s cheaper. Driving from the other side of a car has poorer visibility, of course, but it’s legal and doable. However, I wouldn’t recommend this approach for complex city driving.
We’re used to a big wheel, but almost everybody is also comfortable with something like fold-out handlebars that could pop out from the glovebox. (There is an airbag problem with this; perhaps having the bars be low would be better. As they are electronic, they could even pop up from under the front of the seat, or the console between the two seats.) A motorcycle-style throttle would work; a clutch would be too much work.
Driving schools would like to buy this of course. They already get cars with a passenger side brake pedal.
Submitted by brad on Thu, 2006-09-28 11:14.
Some time ago I modified this blog software (Drupal) to ask a very simple question of people without accounts posting comments. It generally works very well at stopping robot posting; however, the volume of spam has been increasing, so I changed the question. Volume may have dropped a touch but I still got a bunch, which means the spammers are actually live humans, not robots.
It’s also possible that asking natural language questions (rather than captcha style entry of text from a graphic) has gotten common enough that spammers have modified their software so they can figure out the answer once and easily code it, but I don’t think this is the case.
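For what it’s worth, the question gate itself is trivial code; the real work is rotating the questions once spammers learn an answer. A minimal sketch (the questions and accepted answers here are made up for illustration):

```python
# Hypothetical question/answer pairs; swap these out whenever
# spam starts getting through again.
QUESTIONS = {
    "q1": ("What color is the sky on a clear day?", {"blue"}),
    "q2": ("How many legs does a dog have?", {"4", "four"}),
}

def check_answer(question_id: str, answer: str) -> bool:
    """Accept any of the known answers, ignoring case and whitespace."""
    _, accepted = QUESTIONS[question_id]
    return answer.strip().lower() in accepted
```

The comment form stores which question it showed (e.g. in a hidden field tied to the session) and calls `check_answer` on submission; a robot that hard-codes one answer breaks as soon as the table changes.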
What’s curious is that my comment form also clearly explains that any links in comments will be done with the rel=nofollow tag, which tells Google and other search engines not to treat the link as a valid one when ranking pages. This means that, other than readers of the blog clicking on the links, which should be very rare, these spams should be unproductive for the spammer. But they’re still doing them.
The change however was prompted by a new breed of comment spam, where the spammers were copying other comments from inside large threads, but inserting their link on the author’s name. (This also uses rel=nofollow.) Indeed, such a technique does not automatically trigger my instincts to delete the spam, but they chose one of my own comments, so I recognized it. Right now my methods cut the spam enough that it is productive to manually delete what gets posted, though if the volume got high enough I would have to find other automated techniques.
(Drupal could of course help by having a much easier to use delete, including a ‘delete all from this IP address’ option.)
Submitted by brad on Fri, 2006-09-22 11:46.
As most people in the VoIP world know, the FCC mandated that “interconnected” VoIP providers must provide E911 (which means 911 calling with transmission of your location) service to their customers. It is not optional, they can’t allow the customer to opt out to save money.
It sounds good on the surface, if there’s a phone there you want to be able to reach emergency services with it.
The meaning of “interconnected” is still being debated. It was mostly aimed at the Vonages of the world. The current definition applies to service that has a phone-like device that can make and receive calls from the PSTN. Most people don’t think it applies to PBX phones in homes and offices, though that’s not explicit. It doesn’t apply to the Skype client on your PC, one hopes, but it could very well apply if you have a more phone-like device connecting to Skype, which offers Skype-in and Skype-out services on a pay-per-use basis and thus is interconnected with the PSTN.
Here’s the kicker. There are a variety of companies which will provide E911 connectivity services for VoIP companies. This means you pay them and they will provide a means for you to route your user’s calls to the right emergency public service access point, and pass along the address the user registered with the service. Seems like a fine business, but as far as I can tell, all these companies are charging by the customer per month, with fees between $1 and $2 per month.
This puts a lot of constraints on the pricing models of VoIP services. There’s a lot of room for innovative business models that include offering limited or trial PSTN connection for free, or per-usage billing with no monthly fees. (All services I know of do the non-PSTN calling for free.) Or services that appear free but are supported by advertising or other means. You’ve seen that Skype decided to offer free PSTN services for all of 2006. AIM Phoneline offers a free number for incoming calls, as do many others.
Submitted by brad on Sun, 2006-09-17 10:34.
It’s common in the blogosphere for bloggers to comment on the posts of other bloggers. Sometimes blogs show trackbacks to let you see those comments with a posting. (I turned this off due to trackback spam.) In some cases we effectively get a thread, as might appear in a message board/email/USENET, but the individual components of the thread are all on the individual blogs.
So now we need an RSS aggregator to rebuild these posts into a thread one can see and navigate. It’s a little more complex than threading in USENET, because messages can have more than one parent (i.e. link to more than one post) and may not link directly at all. In addition, timestamps only give partial clues as to position in a thread, since many people read from aggregators and may not have read a message that was posted an hour ago in their “thread.”
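The core of such an aggregator is just inverting the link graph among known feeds, remembering that a post can have several parents. A rough sketch, assuming each post’s outbound links have already been extracted from its content:

```python
from collections import defaultdict

def build_threads(posts):
    """posts: {post_url: set of urls it links to}.
    Returns a children map: parent_url -> list of posts that link
    to it. Unlike USENET threading, a post may appear under several
    parents because it can link to more than one earlier post."""
    children = defaultdict(list)
    for url, links in posts.items():
        for target in links:
            if target in posts:  # only thread among feeds we know about
                children[target].append(url)
    return children
```

From the `children` map a reader UI can render a conventional expandable thread view, with timestamps used only as a display hint rather than as the ordering authority.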
At a minimum, existing aggregators (like bloglines) could spot sub-threads existing entirely among your subscribed feeds, and present those postings to you. You could also define feeds which are unsubscribed but which you wish to see or be informed of postings from in the event of a thread. (Or you might have a block-list of feeds you don’t want to see contributions from.) They could just have a little link saying, “There’s a thread including posts from other blogs on this message” which you could expand, and that would mark those items as read when you came to the other blog.
Blog search tools like Technorati could also spot these threads, and present a typical thread interface for perusing them. Both readers and bloggers would be interested in knowing how deep the threads go.
Submitted by brad on Sat, 2006-09-16 15:33.
At the blogger panel at Fall VON (repurposed to be both video on the net as well as voice) Vlogger and blip.tv advocate Dina Kaplan asked bloggers to start vlogging. It’s started a minor debate.
My take? Please don’t.
I’ve written before on what I call the reader-friendly vs. writer-friendly dichotomy. My thesis is that media make choices about where to be on that spectrum, though ideal technology reduces the compromises. If you want to encourage participation, as in Wikis, you go for writer friendly. If you have one writer and a million readers, like the New York Times, you pay the writer to work hard to make it as reader friendly as possible.
When video is professionally produced and tightly edited, it can be reader (viewer) friendly. In particular if the video is indeed visual. Footage of tanks rolling into a town can convey powerful thoughts quickly.
But talking head audio and video has an immediate disadvantage. I can read material ten times faster than I can listen to it. At least with podcasts you can listen to them while jogging or moving where you can’t do anything else, but video has to be watched. If you’re just going to say your message, you’re putting quite a burden on me to force me to take 10 times as long to consume it — and usually not be able to search it, or quickly move around within it or scan it as I can with text.
So you must overcome that burden. And most videologs don’t. It’s not impossible to do, but it’s hard. Yes, video allows better expression of emotion. Yes, it lets me learn more about the person as well as the message. (Though that is often mostly for the ego of the presenter, not for me.)
Recording audio is easier than writing well. It’s writer friendly. Video has the same attribute if done at a basic level, though good video requires some serious work. Good audio requires real work too — there’s quite a difference between “This American Life” and a typical podcast.
Indeed, there is already so much pro quality audio out there like This American Life that I don’t have time to listen to the worthwhile stuff, which makes it harder to get my attention with ordinary podcasts. Ditto for video.
There is one potential technological answer to some of these questions. Anybody doing an audio or video cast should provide a transcript. That’s writer-unfriendly but very reader friendly. Let me decide how I want to consume it. Let me mix and match by clicking on the transcript and going right to the video snippet.
With the right tools, this could be easy for the vlogger to do. Vlogger/podcaster tools should all come with trained speech recognition software which can reliably transcribe the host, and with a little bit of work, even the guest. Then a little writer-work to clean up the transcript and add notes about things shown but not spoken. Now we have something truly friendly for the reader.
In fact, speaker-independent speech recognition is starting to almost get good enough for this but it’s still obviously the best solution to have the producer make the transcript. Even if the transcript is full of recognition errors. At least I can search it and quickly click to the good parts, or hear the mis-transcribed words.
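The “click the transcript to jump into the video” feature only needs a mapping from transcript text to timestamps. If the recognizer emits timestamped segments (the segment format here is an assumption), a crude index might look like this:

```python
def transcript_index(segments):
    """segments: list of (start_seconds, text) pairs as produced by
    a speech recognizer. Build a word -> earliest-timestamp index so
    a click on any transcript word can seek the player to roughly
    the right spot."""
    index = {}
    for start, text in segments:
        for word in text.lower().split():
            # Keep the first occurrence; later mentions can be found
            # by scanning forward from the current playback position.
            index.setdefault(word, start)
    return index
```

Even with recognition errors this works as a search-and-seek tool, which is exactly the point: an imperfect transcript still lets the reader find and jump to the good parts.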
If you’re making podcaster/vlogger tools, this is the direction to go. In addition, it’s absolutely the right thing for the hearing or vision impaired.
Submitted by brad on Fri, 2006-09-15 22:59.
In an earlier blog post I attempted to distinguish TVoIP (TV over internet) from IPTV, a buzzword for cable/telco live video offerings. My goal was to explain that we can be very happy with TV, movies and video that come to us over the internet after some delay.
The two terms aren’t really very explanatory, so now I suggest VaD, for Video-after-Demand. Tivo and Netflix have taught us that people are quite satisfied if they pick their viewing choices in advance, and then later — sometimes weeks or months later — get the chance to view them. The key is that when they sit down to watch something, they have a nice selection of choices they actually want to see.
The video on demand dream is to give you complete live access to all the video in the world that’s available. Click it and watch it now. It’s a great dream, but it’s an expensive one. It needs fast links with dedicated bandwidth. If your movie viewing is using 4 of your 6 megabits, somebody else in the house can’t use those megabits for web surfing or other interactive needs.
With VaD you don’t need much in your link. In fact, you can download shows that you don’t have the ability to watch live at all, or get them at higher quality. You just have to wait. Not staring at a download bar, of course, nobody likes that, but wait until a later watching session, just as you do when you pick programs to record on a PVR like the Tivo.
I said these things before, but the VaD vision is remarkably satisfying and costs vastly less, both to the consumer, and those building out the networks. It can be combined with IP multicasting (someday) to even be tremendously efficient. (Multicasting can be used for streaming but if packets are lost you have only a limited time to recover them based on how big your buffer is.)
Submitted by brad on Wed, 2006-09-13 07:00.
Trade show booths are always searching for branded items to hand out to prospects. Until they fix the airport bans, how about putting your brand on a tube of toothpaste and/or other travel liquids now banned from carry-on bags?
(Yeah, most hotels will now give you these, but it’s the thought that counts and this one would be remembered longer than most T-shirts.)
Submitted by brad on Sun, 2006-09-10 18:18.
As a hirsute individual, I beg the world’s makers of medical tapes and band-aids to work on an adhesive that is decent at sticking to skin, but does not stick well to hair.
Not being versed in the adhesive chemistries of these things, I don’t know how difficult this is, but if one can be found, many people would thank you.
Failing that would be an adhesive with a simple non-toxic solvent that unbinds it, which could be swabbed on while slowly undoing tape.
Submitted by brad on Fri, 2006-09-08 12:24.
While it will be a while before I get the time to build all my panoramas of this year’s Burning Man, I did do some quick versions of some of those I shot of the burn itself. This year, I arranged to be on a cherry picker above the burn. I wish I had spent more time actually looking at the spectacle, but I wanted to capture panoramas of Burning Man’s climactic moment. The entire city gathers, along with all the art cars for one shared experience. A large chunk of the experience is the mood and the sound which I can’t capture in a photo, but I can try to capture the scope.
This thumbnail shows the man going up, shooting fireworks and most of the crowd around him. I will later rebuild it from the raw files for the best quality.
Shooting panoramas at night is always hard. You want time exposures, but if any exposure goes wrong (such as vibration) the whole panorama can be ruined by a blurry frame in the middle. On a boomlift, if anybody moves — and the other photographer was always adjusting his body for different angles — a time exposure won’t be possible. It’s also cramped and if you drop something (as I did my clamp knob near the end) you won’t get it back for a while. In addition, you can’t have everybody else duck every time you do a sweep without really annoying them, and if you do you have to wait a while for things to stabilize.
It was also an interesting experience riding to the burn with DPW, the group of staff and volunteers who do city infrastructure. They do work hard, in rough conditions, but it gives them an attitude that crosses the line some of the time regarding the other participants. When we came to each parked cherry picker, people had leaned bikes against them, and in one case locked a bike on one. Though we would not actually move the bases, the crew quickly grabbed all the bikes and tossed them on top of one another, tangling pedal in spoke, probably damaging some and certainly making some hard to find. The locked bike had its lock smashed quickly with a mallet. Now the people who put their bikes on the pickers weren’t thinking very well, I agree, and the DPW crew did have to get us around quickly, but I couldn’t help but cringe with guilt at being part of the cause of this, especially when we didn’t move the pickers. (Though I understand the safety concern of needing to be able to.)
Anyway, things “picked up” quickly and the view was indeed spectacular. Tune in later for more and better pictures, and in the meantime you can see the first set of trial burn panoramas for a view of the burn you haven’t seen.
Submitted by brad on Wed, 2006-09-06 11:54.
I’m back from Burning Man (and Worldcon), and though we had a decently successful internet connection there this time, you don’t want to spend time at Burning Man reading the web. This presents an instance of one of the oldest problems in the “serial” part of the online world: how do you deal with the huge backup of stuff to read from tools that expect you to read regularly?
You get backlogs of your E-mail of course, and your mailing lists. You get them for mainstream news, and for blogs. For your newsgroups and other things. I’ve faced this problem for almost 25 years as the net gave me more and more things I read on a very regular basis.
When I was running ClariNet, my long-term goal list always included a system that would attempt to judge the importance of a story as well as its topic areas. I had two goals in mind for this. First, you could tune how much news you wanted about a particular topic in ordinary reading. By setting how important each topic was to you, a dot-product of your own priorities and the importance ratings of the stories would bring to the top the news most important to you. Secondly, the system would know how long it had been since you last read news, and could dial down the volume to show you only the most important items from the time you were away. News could also simply be presented in importance order and you could read until you got bored.
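The dot-product ranking I describe is only a few lines of code; the hard part was always getting the per-story importance ratings. A sketch, with made-up field names:

```python
def score(story_topics, user_weights):
    """Dot product of a story's per-topic importance ratings with
    the reader's topic priorities; higher means show it sooner."""
    return sum(importance * user_weights.get(topic, 0.0)
               for topic, importance in story_topics.items())

def catch_up(stories, user_weights, budget):
    """After time away, return only the top `budget` stories,
    ranked by personal importance rather than arrival order."""
    ranked = sorted(stories,
                    key=lambda s: score(s["topics"], user_weights),
                    reverse=True)
    return ranked[:budget]
```

The `budget` would be derived from how long the reader has been away, shrinking the per-day quota as the backlog grows.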
There are options to do this for non-news, where professional editors would rank stories. One advantage you get when items (be they blog posts or news) get old is you have the chance to gather data on reading habits. You can tell which stories are most clicked on (though not as easily with full RSS feeds) and also which items get the most comments. Asking users to rate items is usually not very productive. Some of these techniques (like using web bugs to track readership) could be privacy invading, but they could be done through random sampling.
I propose, however, that one way or another popular, high-volume sites will need to find some way to prioritize their items for people who have been away a long time and regularly update these figures in their RSS feed or other database, so that readers can have something to do when they notice there are hundreds or even thousands of stories to read. This can include sorting using such data, or in the absence of it, just switching to headlines.
It’s also possible for an independent service to help here. Already several toolbars like Alexa and Google’s track net ratings, and get measurements of net traffic to help identify the most popular sites and pages on the web. They could adapt this information to give you a way to get a handle on the most important items you missed while away for a long period.
For E-mail, there is less hope. There have been efforts to prioritize non-list e-mail, mostly around spam, but people are afraid any real mail actually sent to them has to be read, even if there are 1,000 of them as there can be after two weeks away.
Submitted by brad on Mon, 2006-08-21 11:44.
One of the few positive things over the recent giant AOL data spill (which we have asked the FTC to look into) is it has hopefully taught a few lessons about just how hard it is to truly anonymize data. With luck, the lesson will be “don’t be fooled into thinking you can do it” and not “Just avoid what AOL did.”
There is some irony in that, in general, AOL is one of the better performers. They don’t keep a permanent log of searches tied to userid, though it is tied, reports say, to a virtual ID. (I have seen other reports suggesting even this is erased after a while.) AOL also lets you turn off short-term logging of the association with your real ID. Google, MSN, Yahoo and others keep the data effectively forever.
Everybody has pointed out that for many people, just the search queries themselves can be enough to identify a person, because people search for things that relate to them. But many people’s searches will not be trackable back to them.
However, the AOL records maintain the exact time of the search, to the second or perhaps more accurately. They also maintain the site the user clicked on after doing the search. AOL may have wiped logs, but most sites don’t. Let’s say you go through the AOL logs and discover an AOL user searched and clicked on your site. You can go into your own logs and find that search, both from the timestamp, and the fact the “referer” field will identify that the user came via an AOL search for those specific terms.
Now you can learn the IP address of the user, and their cookies or even account with your site, if your site has accounts.
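To see how mechanical this re-identification is, here is a sketch of the log matching: given the AOL-recorded click time and query, scan your own server log for hits whose referer carries those search terms within a few seconds. The log format here is invented for illustration:

```python
from datetime import datetime, timedelta

def match_visit(aol_time, query, server_log, window=timedelta(seconds=5)):
    """server_log: list of (timestamp, ip, referer) tuples.
    Find hits whose referer carries the same search terms within a
    few seconds of the AOL-recorded click; the IP (or the account
    cookie on that hit) then identifies the 'anonymous' user."""
    terms = query.replace(" ", "+")  # how terms appear in the referer URL
    return [(ts, ip) for ts, ip, referer in server_log
            if terms in referer and abs(ts - aol_time) <= window]
```

With both a timestamp and the query string to match on, even a busy site will usually yield exactly one candidate hit.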
If you’re a lawyer, however, doing a case where you can subpoena information, you could use that tool to identify almost any user in the AOL database who did a modest volume of searches. And the big sites with accounts could probably identify all their users who are in the database, getting their account id (and thus often name and email and the works.)
So even if AOL can’t uncover who many of these users are due to an erasure policy, the truth is that’s not enough. Even removing the site does not stop the big sites from tracking their own users, because their own logs have the timestamped searches. And an investigator could look for a query, do the query, see what sites you would likely click on, and search the logs of those sites. They would still find you. Even without the timestamp this is possible for an uncommon query. And uncommon queries are surprisingly common. :-)
I have a static IP address, so my IP address links directly to me. Broadband users who have dynamic IP addresses may be fooled — if you have a network gateway box or leave your sole computer on, your address may stay stable for months at a time — it’s almost as close a tie as a static IP.
The point here is that once the data are collected, making them anonymous is very, very hard. Harder than you think, even when you take into account this rule about how hard it is.
Submitted by brad on Fri, 2006-08-18 22:56.
You probably heard yesterday’s good news that the ACLU prevailed in their petition for an injunction against the NSA warrantless wiretapping. (Our case against AT&T to hold them accountable for allegedly participating in this now-ruled-unlawful program continues in the courts.)
However, the ruling was appealed (no surprise) and the government also asked for, and was granted a stay of the injunction. So the wiretaps won’t stop unless the appeal is won.
But this raises the question, “Why do you need a stay?”
The line from the White House has been that the government engaged in this warrantless wiretapping because the President had the authority to do that, both inherently and under the famous AUMF. And they wanted to use that authority because they complained the official system mandated by law, requiring process before the FISA court, was just too cumbersome. Even though the FISA law allows immediate emergency wiretaps without a warrant as long as a retroactive application is made soon.
We’ve all wondered just why that’s too cumbersome. But they seemed to be saying that since the President had the authority to bypass the FISA court, why should they impede the program with all that pesky judicial oversight?
But now we have a ruling that the President does not have that authority. Perhaps that will change on appeal, but for now it is the ruling. So surely this should mean that they just go back to doing it the way the FISA regulations require it? What’s the urgent need for a stay? Could they not have been ready with the papers to get the warrants they need if they lost?
Well, I think I know the answer. Many people suspect that the reason they don’t go to FISA is not because it’s too much paperwork. It’s because they are trying to do things FISA would not let them do. So of course they don’t want to ask. (The FISA court, btw, has only told them no once, and even that was overturned. That’s about all the public knows about all its rulings.) I believe there is a more invasive program in place, and we’ve seen hints of that in press reports, with data mining of call records and more.
By needing this stay, the message has come through loud and clear. They are not willing to get the court’s oversight of this program, no way, no how. And who knows how long it will be until we learn what’s really going on?
Submitted by brad on Mon, 2006-08-14 23:39.
Last week at ZeroOne in San Jose, one of the art pieces reminded me of a sneaky idea I had a while ago. As you may know, many camcorders, camera phones and cheaper digital cameras respond to infrared light. You can check this out pretty easily by holding down a button on your remote control while using the preview screen on your camera. If you see a bright light, your camera sees in infrared.
Anyway, the idea is to find techniques, be they arrays of bright infrared LEDs, or paints that shine well in infrared but are not obvious in visible light, and create invisible graffiti that only shows up in tourist photos and videos. Imagine the tourists get home from their trip to Fisherman’s Wharf, and the side of the building says something funny or rude that they are sure wasn’t there when they filmed it.
The art piece at ZeroOne used this concept to put up what looked like a black monolith to the naked eye. If you pulled out your camera phone or digital camera, you could see words scrolling down the front. Amusing to watch people watch it. Another piece by our friends at .etoy also had people pulling out cameraphones to watch it. They displayed graphics made of giant pixels on a wall just a few feet from you. Up close, it looked like random noise. Widening your field of view (which the screen on a camera can do) let you see the big picture, and you could see the images of talking faces. (My SLR camera’s 10mm lens through the optical viewfinder worked even better.)
That piece only really worked at night, though with superbright LEDs I think it could be done in the day. I don’t know if there are any paints or coatings to make this work well. It would be amusing to tag the world with tags that can only be seen when you pull out your camera.
Submitted by brad on Fri, 2006-08-11 17:06.
Everybody’s pulling out IBM PC stories on the 25th anniversary so I thought I would relate mine. I had been an active developer as a teen for the 6502 world — Commodore Pet, Apple ][, Atari 800 and the like, and sold my first game to Personal Software Inc. back in 1979. PSI was just starting out, but the founders hired me on as their first employee to do more programming. The company became famous shortly thereafter by publishing VisiCalc, which was the first serious PC application, and the program that helped establish Apple as a computer company outside the hobby market.
In 1981, I came back for a summer job from school. Mitch Kapor, who had worked for Personal Software in 1980 (and had been my manager at the time) had written a companion for VisiCalc, called VisiPlot. VisiPlot did graphs and charts, and a module in it (VisiTrend) did statistical analysis. Mitch had since left, and was on his way to founding Lotus. Mitch had written VisiPlot in Apple ][ Basic, and he won’t mind if I say it wasn’t a masterwork of code readability, and indeed I never gave it more than a glance. Personal Software, soon to be renamed VisiCorp, asked me to write VisiPlot from scratch, in C, for an unnamed, soon-to-be-released computer.
I didn’t mention this, but I had never coded in C before. I picked up a copy of the Kernighan and Ritchie C manual, and read it as my girlfriend drove us over the plains on my trip from Toronto to California.
I wasn’t told much about the computer I would be coding for. Instead, I defined an API for doing I/O and graphics, and wrote to a generalized machine. Bizarrely (for 1981) I did all this by dialing up by modem to a unix computer time sharing service called CCA on the east coast. I wrote and compiled in C on unix, and defined a serial protocol to send graphics back to, IIRC, an Apple computer acting as a terminal. And, in 3 months, I made it happen.
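The approach described above — coding the application against a self-defined I/O and graphics API and supplying the machine-specific implementation later — is still a standard portability pattern. Here is a minimal modern sketch of the idea in Python (all names are hypothetical; the original work was in C, and this is an illustration of the pattern, not the original design):

```python
# Sketch of the "write to a generalized machine" pattern: the app codes
# against an abstract device API, and each target machine gets its own
# small implementation of that API. All names here are hypothetical.
from abc import ABC, abstractmethod


class Device(ABC):
    """The generalized machine: everything the app needs from hardware."""

    @abstractmethod
    def draw_point(self, x: int, y: int) -> None: ...

    @abstractmethod
    def write_text(self, text: str) -> None: ...


class TerminalDevice(Device):
    """One concrete target; a real port would drive actual hardware."""

    def __init__(self):
        self.calls = []  # record the calls so we can inspect them

    def draw_point(self, x, y):
        self.calls.append(("point", x, y))

    def write_text(self, text):
        self.calls.append(("text", text))


def plot_series(dev: Device, values):
    """Application code: knows only the Device API, not the machine."""
    dev.write_text("chart")
    for x, y in enumerate(values):
        dev.draw_point(x, y)


dev = TerminalDevice()
plot_series(dev, [3, 1, 4])
print(dev.calls[0])  # -> ('text', 'chart')
```

Porting to a new machine then means writing one new `Device` subclass, leaving the application untouched — which is why someone else could later supply the IBM PC library for an API defined against an unseen machine.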
(Very important side note: CCA-Unix was on the Arpanet. While I had been given some access to an Arpanet computer in 1979 by Bob Frankston, the author of VisiCalc, this was my first day-to-day access. That access turned out to be the real life-changing event in this story.)
There was a locked room at the back of the office. It contained the computer my code would eventually run on. I was not allowed in the room. Only a very small number of outside companies were allowed to have an IBM PC — Microsoft, UCSD, Digital Research, VisiCorp/Software Arts and a couple of other applications companies.
On this day, 25 years ago, IBM announced their PC. In those days, “PC” meant any kind of personal computer. People look at me strangely when I call an Apple computer a PC. But not long after that, most people took “PC” to mean IBM. Finally I could see what I was coding for. Not that the C compilers were all that good for the 8088 at the time. However, 2 weeks later I would leave to return to school. Somebody else would write the library for my API so that the program would run on the IBM PC, and they released the product. The contract with Mitch required they pay royalties to him for any version of VisiPlot, including mine, so they bought out that contract for a total value close to a million — that helped Mitch create Lotus, which would, with assistance from the inside, outcompete and destroy VisiCorp.
(Important side note #2: Mitch would use the money from Lotus to found the E.F.F. — of which I am now chairman.)
The IBM PC was itself less exciting than people had hoped. The 8088 tried to be a 16 bit processor but it was really 8 bit when it came to performance. PC-DOS (later MS-DOS) was pretty minimal. But it had an IBM name on it, so everybody paid attention. Apple bought full page ads in the major papers saying, “Welcome, IBM. Seriously.” Later they would buy ads with lines like Steve Jobs saying, “When I invented the personal computer…” and most of us laughed but some of the press bought it. And of course there is a lot more to this story.
And I was paid about $7,000 for the just under 4 months of work, building almost all of an entire software package. I wish I could program like that today, though I’m glad I’m not paid that way today.
So while most people today will have known the IBM-PC for 25 years, I was programming for it before it was released. I just didn’t know it!
Submitted by brad on Thu, 2006-08-10 23:25.
Quite frequently in non-HTML documents, such as E-mails, people will enclose their URLs in angle brackets, such as <http://foo.com>. What is the origin of this? For me, it just makes cutting and pasting the URLs much harder (it’s easier if they have whitespace around them, and easiest if they are on a line by themselves). It’s not any kind of valid XML or HTML; in fact, it would cause a problem in any document of that sort.
There’s a lot of software out there that parses URLs out of text documents of course, but they all seem to do fine with whitespace and other punctuation. They handle the angle bracket notation, but don’t need it. Is there any software out there that needs it? If not, why do so many people use this form?
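For what it’s worth, the convention does have a documented origin: RFC 1738 (and later RFC 3986, Appendix C) recommends angle brackets for delimiting URLs in plain text, because characters like `.` and `)` are legal inside a URL, so a URL at the end of a sentence is genuinely ambiguous to a parser. A rough Python sketch of the problem (the regexes are illustrative only, nowhere near a full URI grammar):

```python
import re

# Why angle brackets help a parser: '.' and ')' are legal URL
# characters, so a naive whitespace-based extractor grabs trailing
# sentence punctuation. A bracketed URL is unambiguous.
BRACKETED = re.compile(r"<(https?://[^>]+)>")
BARE = re.compile(r"https?://\S+")


def extract(text):
    """Prefer bracketed URLs; fall back to a naive bare-URL scan."""
    urls = BRACKETED.findall(text)
    if urls:
        return urls
    return BARE.findall(text)


print(extract("Details at http://foo.com/bar."))
# -> ['http://foo.com/bar.']  (trailing period wrongly included)
print(extract("Details at <http://foo.com/bar>."))
# -> ['http://foo.com/bar']  (exact URL recovered)
```

In practice, as noted above, most extractors just apply heuristics for trailing punctuation, which is why the brackets feel unnecessary even though they make the parsing problem strictly easier.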
Submitted by brad on Thu, 2006-08-10 01:39.
Many universities are now setting up to broadcast lectures over their LANs, often in video. Many students simply watch from their rooms, or even watch later. There are many downsides to this (fewer show up in class) but the movement is growing.
Here’s a simple addition that would be a bonanza for the cell companies. Arrange to offer broadcast of lectures to student cell phones. In this case, I mean live, and primarily for those who are running late to class. They could call into the number, put on their bluetooth headset and hear the start of the lecture on the way in. All the lecture hall has to do is put the audio into a phone that calls a conference bridge (standard stuff all the companies have already) and then students can call into the bridge to hear the lecture. In fact, the cell company should probably pay the school for all the minutes they would bill.
This need not apply only to lectures at universities. All sorts of talks and large meetings could do the same, including sessions at conferences.
Perhaps it would encourage tardiness, but you could also make the latecomers wait outside (listening) for an appropriate pause at which to enter.
Submitted by brad on Mon, 2006-08-07 13:51.
The blogosphere is justifiably abuzz with the release by AOL of “anonymized” search query histories for over 500,000 AOL users, an attempt to be nice to the research community. After the fury, they pulled it and issued a decently strong apology, but the damage is done.
Many people have pointed out obvious risks, such as the fact that searches often contain text that reveal who you are. Who hasn’t searched on their own name? (Alas, I’m now the #7 “brad” on Google, a shadow of my long stint at #1.)
But some people browsing the data have discovered something far darker. There are searches in there for things like “how to kill your wife” and child porn. Once that’s discovered, isn’t that now going to be sufficient grounds for a court order to reveal who that person was? It seems there is probable cause to believe user 17556639 is thinking about killing his wife. And knowing this very specific bit of information, who would impede efforts to investigate and protect her?
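The re-identification risk here is mechanical: replacing names with numbers still leaves every user’s queries grouped under one ID, so a single self-identifying search (a vanity search, say) exposes that ID’s entire history. A hedged Python sketch with entirely invented data (the two-column format only loosely mirrors the AOL release):

```python
from collections import defaultdict

# Why numeric IDs don't anonymize a search log: queries stay grouped
# per ID, so one identifying query links a person to their whole
# history. All data below is invented for illustration.
log = [
    (17556639, "how to kill your wife"),
    (17556639, "used pickup trucks"),
    (20000001, "pat q. example"),        # hypothetical vanity search
    (20000001, "symptoms of depression"),
]

# e.g. names cross-referenced from a phone book or social site
KNOWN_NAMES = {"pat q. example"}

by_user = defaultdict(list)
for user_id, query in log:
    by_user[user_id].append(query)

identified = {}
for user_id, queries in by_user.items():
    for q in queries:
        if q in KNOWN_NAMES:
            # One identifying query exposes every query under this ID.
            identified[user_id] = (q, queries)

print(identified)
```

Note that the linkage needs no cooperation from AOL: anyone with the file and an outside list of names can run exactly this join, which is why numeric pseudonyms are not anonymization.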
But we can’t have this happening in general. How long before sites are forced to look for evidence of crimes in “anonymized” data, and warrants then “nymize” it? (Did I just invent a word?)
After all, I recall a year ago, I wanted to see if Google would sell adwords on various nasty searches, and what adwords they would be. So I searched for “kiddie porn” and other nasty things. (To save you the stigma, Google clearly has a system designed to spot such searches and not show ads, since people who bought the word “kiddie” may not want to advertise on those results.)
So had my Google results been in such a leak, I might have faced one of those very scary kiddie porn raids, which in the end would find nothing after tearing apart my life and confiscating my computers. (I might hope they would have a sanity check on doing this to somebody from the EFF, but who knows. And you don’t have that protection even if somebody would accord it to me.)
I expect we’ll be seeing the repercussions from this data spill for some time to come. In the end, if we want privacy from being data mined, deletion of such records is the only way to go.