Submitted by brad on Fri, 2007-06-29 12:48.
Earlier I wrote about the frenzy of buying PlayStation 3s on eBay and lessons from it. There’s a smaller-scale frenzy going on now over the iPhone, which doesn’t go on sale until 6pm today. With the PS3, many stores pre-sold them, and others had lines. In theory Apple/AT&T are not pre-selling, and are limiting people to 2 units, though many eBay sellers are claiming otherwise.
The going price for people who claim they have one, either for some unstated reason, or because they are first in line at some store, is about $1100, almost twice the retail cost. A tidy profit for those who wait in line, time their auction well and have a good enough eBay reputation to get people to believe them. Quite a number of such auctions have closed at such prices with “buy it now.” If you live in a town without a frenzy and line, it might serve you well to go down and pick up two iPhones. Bring your laptop with wireless access to update your eBay auction. None of the auctions I have seen have gone so far as to show a picture of the seller waiting in line to prove it.
eBay has laid down some hard terms for iPhone sellers and pre-sellers. It says it does not allow pre-sales, but it seems to be allowing those sellers who claim they can guarantee a phone. It requires a picture of the actual item in hand, with a non-photoshopped sign in the picture bearing the seller’s eBay name. A number of listings show a stock photo with an obviously photoshopped tag. In spite of the publicised limit of 2, a number of people claim they have 4 or more.
It seems Apple may have deliberately tried to discourage this by releasing at 6pm on Friday, too late to get to FedEx in most places. Thus all most sellers can offer is delivery of the phone on Monday, which is much less appealing, since that leaves a long window in which to learn that there are plenty more available Monday, and loses the all-important bragging rights of having an iPhone at weekend social events. Had they released it just a few hours earlier, I think sales like this would have been far more lucrative. (While Apple would not want to leave money on the table, it’s possible high eBay prices add to the hype and are in their interest.)
As before, I predict timing of auctions will be very important. At this point even a 1-day auction will close only after 18 hours of iPhone sales, adding a lot of risk. The PS3 kept its high value for much of the Christmas season, but the iPhone, if not undersupplied, may drop to retail in as little as a day. A standard 1-week auction would be a big mistake. Frankly, I think paying $1200 (or a $300 wait-in-line fee) is pretty silly.
The iPhone, by the way, seems like a cool generalized device. A handheld that has the basic I/O tools including GSM phone and is otherwise completely made of touchscreen seems a good general device for the future. Better with a small bluetooth keyboard. Whether this device will be “the one” remains to be seen, of course.
Update: read more »
Submitted by brad on Wed, 2007-06-27 16:44.
One thing I find striking in the cities of Asia is how much more three-dimensional their urban streets are. By this I mean that you will regularly find busy retail shops and services on the higher floors of ordinary buildings, and even in the basement. In our business areas, anything above the ground floor is usually offices at most, rarely anything depending on walk-by traffic. There it's commonplace. I remember being in Hong Kong and asking locals to pick a restaurant for lunch. It was not unusual to just get into an otherwise unmarked elevator and go down or up to a bustling floor or sub-ground level to find the food.
Here we really like to see things from the street. A stairway up is uninviting. People want to see inside a restaurant as they walk by, to see how it looks, how busy it is, and even what the other patrons look like. I don't know why shops off the main level do so well in places like Japan and China; it may just be a necessity of the much higher urban density.
However, I have wondered if the recent drop in price for HDTV panels and cameras could make a change. Instead of a stairway with sign, imagine a closed circuit HDTV panel or two at the entrance, showing you a live view of what's up there. For a little extra money, the camera could pan. While I think a live camera is best, obviously some shops would prefer to run something more akin to an advertisement. In all cases, I would hope sound was kept to a minimum, and the screens should have a reliable light sensor and clock to know how bright to be so they are not distracting at night. Some places, such as bars and restaurants, might elect to also put their camera online as a webcam, so people can look from home to see if a restaurant is hopping or not.
(There might be some temptation to run recorded video of busy times, but I think that would annoy patrons more than it would win them, once they went up the stairs. Who wants to go to a restaurant that has to fake it?)
While this idea could start with traditional urban streets, where each building has its own stairway or elevator up to the higher floors, one could imagine a neoclassical urban street that is really an urban strip mall managed as a unit. In such a building, each ground-floor tenant would have to devote a section of their window to showing the live view of their neighbour above, though patrons would then have to head to the actual stair or elevator to get up to the second floor. It's hard to say whether it might make more sense to put the panels in a cluster by the stairs rather than with each ground-level shop.
This principle could also apply to the mini-malls found in the basements of tall buildings. However, again I fear the screens going overboard and trying to be too flashy. I really think a "window" that lets you see a live scene you can't otherwise see is in the interests of all, while yet another square foot with ads is not.
Submitted by brad on Mon, 2007-06-25 13:41.
Last week I talked briefly about self-driving delivery vehicles. I’ve become interested in what I’ll call the “roadmap” (pun intended) for the adoption of self-driving cars. Just how do we get there from here, taking the technology as a given? I’ve seen and thought of many proposals, and been ignoring the one that should stare us in the face — delivery. I say that because this is the application the DARPA grand challenge is actually aimed at. They want to move cargo without risks to soldiers. We mostly think of that as a path to the tech that will move people, but it may be the pathway.
Robot delivery vehicles have one giant advantage. They don’t have to be designed for passenger safety, and you don’t have to worry about that when trying to convince people to let them on the road. They also don’t care nearly as much about how fast they get there. Instead what we care about is whether they might hit people, cars or things, or get in the way of cars. If they hit things or hurt their cargo, that’s usually just an insurance matter. In fact, in most cases even if they hit cars, or cars hit them, that will just be an insurance matter.
A non-military cargo robot can be light and simple. It doesn’t need crumple zones or airbags. It might look more like a small electric trike, on bicycle wheels. (Indeed, the Blue Team has put a focus on making it work on 2 wheels, which could be even better.) It would be electric (able to drive itself to charging stations as needed) and mechanically, very cheap.
The first step will be to convince people they can’t hit pedestrians. To do that, the creators will need to make an urban test track and fill it with swarms of the robots, and demonstrate that they can walk out into the swarm with no danger. Indeed, like a school of fish, it should be close to impossible to touch one even if you try. Likewise, skeptics should be able to get onto bicycles, motorcycles, cars and hummers and drive right through the schools of robots, unable to hit one if they try. After doing that for half an hour and getting tired, doubters will be ready to accept them on the roads. read more »
Submitted by brad on Sun, 2007-06-24 20:50.
At Supernova 2007, several of us engaged Andrew Keen over his controversial book "The Cult of the Amateur." I will admit to not yet having read the book. Reviews in the blogosphere are scathing, but of course the book is entirely critical of the blogosphere so that's not too unexpected.
However, one of the things Keen said he worries about is what he calls the "scarcity of talent." He believes the existing "professional" media system did a good enough job of encouraging, discovering and promoting the talent that's out there, and so the world gets nothing more than slush from all the new online media. The amount of talent, he felt, was roughly constant.
I presented one interesting counter to this concept. I am from Canada. As you probably know, we excel at hockey. Per capita certainly, and often on an absolute scale, Canada will beat any other nation at hockey. This is only in part because of the professional leagues. We all play hockey when we are young, with no formal organization, and the result is that more talented players arise. The same is true of the USA in baseball but not in soccer, and so on.
This suggests that however much one might view YouTube as a vaster wasteland of terrible video, the existence of things like YouTube will eventually generate more and better videographers, and the world will be richer for it, at least if the world wants videographers. One could argue this just takes them away from something else, but I doubt that accounts for all of it.
Submitted by brad on Sat, 2007-06-23 14:16.
At the recent Supernova 2007 conference, they did a session where startups presented, and to mix things up, at the end they told us that one of the companies was fake. Most people clued in, because the presentation had been funny, and had a few obvious business mistakes, but at the same time many commented that it was chosen well, because they would like it to exist. The fake company, ZapMeals claimed it would let you order delivered food from quality at-home chefs and caterers, with a reputation system that helped you choose them by quality. GPS-enabled delivery companies would show you where your meal was as it drove to your home. read more »
Submitted by brad on Sat, 2007-06-23 11:08.
Whoops, sorry. I was playing around with a shared to-do list manager in Drupal, the software that runs this web site, and it seems to have poorly configured security defaults, so the test entries showed up on the home page. I've now made them private.
Submitted by brad on Mon, 2007-06-18 21:34.
For some time I’ve been warning about a growing danger to the 4th amendment. The 4th amendment protects our “persons, houses, papers and effects” but police and some courts have been interpreting this to mean that our private records kept in the hands of 3rd parties — such as E-mail on an ISP or webmail server — are not protected because they are not papers and not in our houses. Or more to the point, that we do not have a “reasonable expectation of privacy” when we leave our private data in the hands of 3rd parties. They have been seizing E-mail without getting a warrant, using the lower standards of the Stored Communications Act.
Recently, we at the EFF got involved in a case challenging that, and argued in our amicus brief that this mail deserved full protection. We won a lower court round and are thrilled that today, the 6th circuit court of appeals has issued a ruling affirming the logic in our amicus and protecting E-mail. We hope and expect this to become the full law of the land, though for now, I might advise all E-mail service providers to move their servers to the 6th circuit (MI, OH, TN, KY) for full protection. It will save you money as you will be able to more simply deal with requests for customer E-mails.
You can read more details on the EFF page on Warshak v USA. Congrats to Kevin Bankston who did the work on the brief. (Amusingly, Google owes him a big debt today, and last week they were hassling him to provide a notarized driver’s license photo in order to get removed from their Street View!)
Submitted by brad on Mon, 2007-06-18 15:38.
Continuing our discussion of the goals of voting systems, today I want to write about ballots that let you vote for more than one candidate in the same race. Many people have seen preferential voting, where you rank the candidates in order of how much you like them. This is used in Australia, and in many private elections such as those for the Hugo Awards. The most widely known preferential ballots are the Single Transferable Vote and its cousin, the instant-runoff. Many election theorists, however, view these as among the worst possible systems. I prefer the Condorcet method, with the modification that in the cases where it fails, the race is declared a tie, or a second type of election is used to break the tie. While it has been demonstrated that all preferential ballots have failure modes where they choose somebody who seems illogical based on the voters’ true desires, this does not have to be true when a tie is possible.
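For concreteness, here is a minimal sketch of a Condorcet count (my own illustration; real systems need ballot-validity rules and more elaborate tie-breaking than this). A candidate wins only by beating every rival head-to-head; when no such candidate exists, the sketch returns nothing and the fallback applies:

```python
from itertools import combinations

def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every other head-to-head, or None (tie/cycle)."""
    # wins[a][b] = number of ballots ranking a above b
    wins = {a: {b: 0 for b in candidates if b != a} for a in candidates}
    for ballot in ballots:  # a ballot is a full ranking, most-preferred first
        rank = {c: i for i, c in enumerate(ballot)}
        for a, b in combinations(candidates, 2):
            if rank[a] < rank[b]:
                wins[a][b] += 1
            else:
                wins[b][a] += 1
    for a in candidates:
        if all(wins[a][b] > wins[b][a] for b in candidates if b != a):
            return a
    return None  # no Condorcet winner: declare a tie or use a second method

ballots = ([["Gore", "Nader", "Bush"]] * 4 +
           [["Bush", "Gore", "Nader"]] * 3 +
           [["Nader", "Gore", "Bush"]] * 2)
print(condorcet_winner(ballots, ["Gore", "Bush", "Nader"]))  # → Gore
```

In the toy electorate above, Gore beats Bush 6–3 and Nader 7–2 head-to-head, so no "spoiler" question arises even though first-choice votes split three ways.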
Multiple-candidate votes would provide a dramatic improvement in the US — they are already used in many other places. They would entirely eliminate the question of minor candidates “splitting” or spoiling the vote. There would have been no question in Florida in 2000, with Al Gore defeating George W. Bush. (And at least by the popular vote, some feel that Bill Clinton would have lost to George Bush the elder, and there’s strong evidence the electoral margin would at least have been smaller.) This is in fact what prevents them from being used — there is always somebody in power who will conclude they would have lost had there been a multi-candidate ballot in place. Such people will fight it harder than advocates push for it.
Small party candidates want it because it gives them a chance to be heard. Voters who like them can safely express that preference without fear of “spoiling” the race among the frontrunners. Given that, small candidates can eventually become frontrunners. In the 2 party system, as we’ve seen, any time a minor candidate like Ralph Nader gets popular enough that he might actually make a difference, the result is cries of “Ralph, don’t run” and a dropping of support from those who fear that problem. read more »
Submitted by brad on Sat, 2007-06-16 22:00.
Recently, Lauren Weinstein posted a query for a way to bring a certain type of commentary on web sites to the web. In particular, he’s interested in giving people who are the subject of attack web sites, who may even have gotten court judgments against such web sites to inform people of the dispute by annotations that show up when they search in search engines.
I’m not sure this is a good idea, for a number of reasons. I like the idea of being able to see 3rd-party commentary on web sites (such as Third Voice and others have tried to do) and suspect the browser is a better place for it than the search engine. I don’t like putting any duty upon people who simply link to web sites (which is what search engines do) just because the sites are bad. They may want to provide extra info on what they link to as a service to users, but that’s up to them, and should stay up to them unless they are a monopoly.
In addition, putting messages with an agenda next to search results is what search engines do for a living. However, in that may be the answer. read more »
Submitted by brad on Sat, 2007-06-16 11:54.
From time to time I come up with ideas that are interesting but I can't advocate because they have overly negative consequences in other areas, like privacy. Nonetheless, they are worth talking about because we might find better ways to do them.
There is some controversy today over whether driving while talking on a cell phone is dangerous, and should be banned, or restricted to handsfree mode. It occurs to me that the data to answer that question is out there. Most cars today have a computer, and it records things like the time that airbags deploy, or even in some cases when you suddenly dropped in speed. (If not, it certainly could.) Your cell phone, and your cell company know when you're on the phone. Your phone knows if you are using the handsfree, though the company doesn't. Your phone and cell company also know (but usually don't record) when you're driving and suddenly stop moving for an extended period.
In other words, something with access to all that data (and a time delta for the car's clock) could quickly answer the question of what cell phone behaviours are more likely to cause accidents. It would get a few errors (such as if the driver borrows their passenger's phone) but would be remarkably comprehensive in providing an answer.
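To make the correlation concrete, here is a toy sketch (the data, field shapes and the 10-second window are my assumptions for illustration; a real study would also need the clock-delta correction mentioned above):

```python
def call_active_near(calls, crash_time, window=10):
    """calls: list of (start, end) call intervals, in epoch seconds, for one phone.
    True if a call was in progress within `window` seconds of the crash."""
    return any(start - window <= crash_time <= end + window for start, end in calls)

def split_crashes(crashes, calls_by_driver, window=10):
    """crashes: list of (driver_id, crash_time) from airbag/speed-drop records.
    Returns (crashes during or near a call, crashes without one)."""
    with_phone = sum(
        call_active_near(calls_by_driver.get(d, []), t, window) for d, t in crashes)
    return with_phone, len(crashes) - with_phone

calls = {"driver1": [(1000, 1300)], "driver2": []}
crashes = [("driver1", 1305), ("driver2", 2000)]
print(split_crashes(crashes, calls))  # → (1, 1)
```

Comparing those two counts against overall driving time on and off the phone would give the accident-rate ratio the policy debate needs.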
But to gather this data involves way too many scary things. We don't really want our cars or phone companies recording data which can be used against us. They could record things like if we speed, and where we go that we don't want others to know about, and who we're talking to at the time, and much more.
In our quest for learning from private data, we have often sought anonymization technologies that can somehow collect the data and disassociate it from the source. That turns out to be very hard to do, often near impossible, and the infrastructure built for this sort of collection can almost always be trivially repurposed for non-anonymous use; now all that is needed is to flick a switch.
Now I do expect that soon we will see, after a serious car accident, attempts to get at this data on a case-by-case basis. The insurance companies will ask for cell phone records from the time of the accident, or data from the phone itself. We're already going to lose that privacy once there is an accident, though at least case-by-case invasions don't scale. Messy problem.
Submitted by brad on Fri, 2007-06-15 23:38.
The radio had a tribute to Bob Barker, who retires today after 35 years hosting The Price is Right. I always admired the genius of that show in making product placement an essential part of the show -- the show was about the advertisers and made the audience think about how much the product was worth and remember it. I'm surprised we didn't see more copycat game shows. There's plenty of product placement today, but it's largely gratuitous, not integral as this was. The fans on the radio said that while the show was gone, they could always watch reruns.
At first I laughed at this -- clearly you could not watch them too soon. But then I thought it might be amusing to see reruns from decades ago just because it would shock us as to how the prices of the items had changed. And then I thought you could recreate the show today, with modern people, and their puzzle would be figuring out the prices of items from the past. And this could be not simply the recent past -- there is no reason the game could not go back centuries, and puzzle the audience about history as well as commerce.
One could even invert the question. "I have here one gallon of gas. What year did it first hit 25 cents?" instead of "Here's a gallon of Gas. What did it cost in 1950?" Of course, the product placement opportunities are perhaps not nearly as good. Companies would not love to remind consumers how much more they charge for things today.
Submitted by brad on Thu, 2007-06-14 23:55.
In my series on the design of new voting systems, I would now like to discuss the question of high voter turnout as a goal for such systems.
Everybody agrees on enfranchisement as a goal for voting systems. Nobody eligible should find voting impossible, or even particularly hard. (And, while it may not be fully attainable due to disabilities, voting should be equally easy for all voters.)
However, there is less agreement about trading off other goals to make it trivial to vote. Some voting systems accept that there will be a certain bar of effort required to vote, and don’t view it as a problem that those who will not make a certain minimum effort — registering to vote, and coming down to a polling station — don’t vote. Other systems try to lower that bar as much as possible, with at-home voting by mail, or vote-by-internet and vote-by-phone in private elections. And many nations, such as Australia, even make voting compulsory, with fines if you don’t vote.
What makes this question interesting is the numbers. With 50% voter turnouts, or even less if there is not an “interesting” race, not having trivial voting “disenfranchises” huge numbers of voters. The numbers dwarf any other number in election issues, be it more standard disenfranchisements of minorities or the disabled, or any election fraud I’ve ever heard about. A decision on this issue can be the most election-changing of any. Australia has 96% voter turnout, and it had 47% turnout before it passed the laws in 1924 compelling voting. read more »
Submitted by brad on Tue, 2007-06-12 13:15.
Everybody’s been discovering things in Google Street View. While Microsoft and Amazon did this sort of thing much earlier, there’s been a lot more publicity about Google doing it because it’s Google, and it’s much more high resolution among other things.
But now that it’s out, I expect we’ll see web sites pop up where people spot the Google camera-car and report its location in real time, allowing people to prepare for its passage.
I expect we’ll see:
- People flashing various parts of their bodies
- Dances, pyramids, etc.
- Spam, and signs with sayings and ads and even anti-google slogans
- Signs designed to look like a large Google ad box
- People holding Google Maps flags like this crowd from Bay to Breakers
And more clever things I haven’t thought of. Soon they may have to stealth the vehicle!
Submitted by brad on Mon, 2007-06-11 14:39.
Yesterday, I wrote about election goals. Today I want to talk about one of the sub-goals, the non-provable ballot, because I am running into more people who argue it should be abandoned in favour of other goals. Indeed, they argue, it has already been abandoned.
As I noted, our primary goal is that voters cast their true desire, independent of outside pressure. If voters can’t demonstrate convincingly how they voted (or indeed if it’s easy to lie) then they can say one thing to those pressuring them and vote another way without fear of consequences. This is sometimes called “secret ballot” but in fact that consists of two different types of secrecy.
The call to give this up is compelling. We can publish, to everybody, copies of all the ballots cast — for example, on the net. Thus anybody can add up the ballots and feel convinced the counts are correct, and anybody can look and find their own ballot in the pool and be sure their vote was counted. If only a modest number of random people take the time to find their ballot in the published pool, we can be highly confident that no significant number of ballots have not been counted, nor have they been altered or miscounted. It becomes impossible to steal a ballot box or program a machine not to count a vote. It’s still possible to add extra ballots — such as the classic Chicago dead voters, though with enough checking even this can be noticed by the public if it’s done in one place.
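One simple way to let a voter find their own ballot in the published pool, without letting anybody else look up an individual's vote, is a hashed receipt. This is my own illustration, not a scheme from any particular proposal:

```python
import hashlib
import secrets

def cast_ballot(choice, published):
    """Add (tag, choice) to the public pool; return the voter's private receipt."""
    receipt = secrets.token_hex(16)                      # random, kept by the voter
    tag = hashlib.sha256(receipt.encode()).hexdigest()   # only the tag is published
    published[tag] = choice
    return receipt

def find_my_ballot(receipt, published):
    """Anyone holding the receipt can locate the ballot; nobody else can."""
    return published.get(hashlib.sha256(receipt.encode()).hexdigest())

pool = {}
r = cast_ballot("Candidate A", pool)
print(find_my_ballot(r, pool))                # → Candidate A
print(find_my_ballot("not-my-receipt", pool)) # → None
```

Note the tension this illustrates: the receipt that lets you verify your vote is exactly the thing that lets you prove your vote to a buyer or coercer.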
The result is a very well verified election, and one the public feels good about. No voter need have any doubt their vote was counted, or that any votes were altered, miscounted, lost or stolen. This concept of “transparency” has much to recommend it.
Further, it is argued, many jurisdictions long ago gave up on unprovable ballots when they allowed vote by mail. The state of Oregon votes entirely by mail, making it trivial to sell your ballot or be pushed into showing it to your spouse. While some jurisdictions only allow limited vote by mail for people who really can’t get to the polls, some allow it upon request. In California, up to 40% of voters are taking advantage of this.
Having given up the unprovable ballot, why should we not claim all the advantages the published ballot can give us? Note that the published ballots need not have names on them. One can give voters a receipt that will let them find their true ballot but not let anybody who hasn’t seen the receipt look up any individual’s vote. So disclosure can still be optional. read more »
Submitted by brad on Sun, 2007-06-10 11:02.
This week I was approached by two different groups seeking to build better voting systems, something I talk about here in my new democracy topic. The discussions quickly got into all the various goals we have for voting systems, and I did some more thinking I want to express here, but I want to start by talking about the goals. Then shortly I will talk about the one goal both systems wanted to abandon, namely the inability to prove how you voted.
Many of the goals we talk about are actually sub-goals of the core high-level goals I will outline here. The challenge comes because every system yet proposed has to trade off one goal against another. This forces us to examine these goals and see which ones we care about more.
The main goals, as I break them out, are: Accuracy, Independence, Enfranchisement, Confidence and Cost. I seek input on refining these goals, though I realize there will be some overlap. read more »
Submitted by brad on Fri, 2007-06-08 14:43.
For many of us, E-mail has become our most fundamental tool. It is not just the way we communicate with friends and colleagues, it is the way that a large chunk of the tasks on our “to do” lists and calendars arrive. Of course, many E-mail programs like Outlook come integrated with a calendar program and a to-do list, but the integration is marginal at best. (Integration with the contact manager/address book is usually the top priority.)
If you’re like me you have a nasty habit. You leave messages in your inbox that you need to deal with if you can’t resolve them with a quick reply when you read them. And then those messages often drift down in the box, off the first screen. As a result, they are dealt with much later or not at all. With luck the person mails you again to remind you of the pending task.
There are many time management systems and philosophies out there, of course. A common theme is to manage your to-do list and calendar well, and to understand what you will do and not do, and when you will do it if not right away.
I think it’s time to integrate our time management concepts with our E-mail. To realize that a large number of emails or threads are also a task, and should be bound together with the time manager’s concept of a task.
For example, one way to “file” an E-mail would be to the calendar or a day oriented to-do list. You might take an E-mail and say, “I need 20 minutes to do this by Friday” or “I’ll do this after my meeting with the boss tomorrow.” The task would be tied to the E-mail. Most often, the tasks would not be tied to a specific time the way calendar entries are, but would just be given a rough block of time within a rough window of hours or days.
It would be useful to add these “when to do it” attributes to E-mails, because then delegating a task to somebody else can be as simple as forwarding the E-mail-message-as-task to them.
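A sketch of what binding a message to a task might look like (the names and fields here are my invention, not any mail program's API):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MailTask:
    message_id: str   # ties the task back to the E-mail thread it came from
    summary: str
    minutes: int      # rough time block, not a fixed calendar slot
    due: date         # "by Friday", not "at 3pm Friday"

def file_as_task(message_id, summary, minutes, due):
    """'File' an E-mail into the to-do list: I need `minutes` to do this by `due`."""
    return MailTask(message_id, summary, minutes, due)

t = file_as_task("<abc@example.com>", "Reply to Peter re: budget", 20, date(2007, 6, 15))
print(t.minutes, t.due)  # → 20 2007-06-15
```

Because the task carries the message-id, delegating it really is just forwarding: the recipient's mailer can rebuild the same task object from the forwarded message.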
In fact, because, as I have noted, I like calendars with free-form input (i.e. saying “Lunch with Peter 1pm tomorrow” and having the calendar understand exactly what to do with it), it makes sense to consider the E-mail window as a primary means of input to the calendar. For example, one might add calendar entries by emailing them to a special address that is processed by the calendar. (That’s a useful idea for any calendar, even one not tied at all to the E-mail program.)
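As a toy illustration of such free-form parsing (a deliberately minimal sketch; a real parser would handle far more date and time forms than "Npm" and "tomorrow"):

```python
import re
from datetime import datetime, time, timedelta

def parse_entry(text, now):
    """Toy parser for entries like 'Lunch with Peter 1pm tomorrow'."""
    day = now.date()
    if re.search(r"\btomorrow\b", text, re.I):
        day += timedelta(days=1)
    m = re.search(r"\b(\d{1,2})(?::(\d{2}))?\s*(am|pm)\b", text, re.I)
    hour = int(m.group(1)) % 12 + (12 if m.group(3).lower() == "pm" else 0)
    minute = int(m.group(2) or 0)
    # whatever is left after stripping the time words becomes the entry's title
    title = re.sub(r"\b\d{1,2}(?::\d{2})?\s*(am|pm)\b|\btomorrow\b", "", text, flags=re.I)
    return " ".join(title.split()), datetime.combine(day, time(hour, minute))

title, when = parse_entry("Lunch with Peter 1pm tomorrow", datetime(2007, 6, 8, 9, 0))
print(title, when)  # → Lunch with Peter 2007-06-09 13:00:00
```

The same function could sit behind the special E-mail address: the subject line goes in, a calendar entry comes out.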
One should also be able to assign tasks to places (a concept from the “Getting Things Done” book I have had recommended to me.) In this case, items that will be done when one is shopping, or going out to a specific meeting, could be synced or sent appropriately to one’s mobile device, but all with the E-mail metaphor.
Because there are different philosophies of time management, all with their fans, one monolithic e-mail/time/calendar/todo program may not be the perfect answer. A plug-in architecture that lets time managers integrate nicely with E-mail could be a better way to do it.
Some of these concepts apply to the shared calendar concepts I wrote about last month.
Submitted by brad on Wed, 2007-06-06 20:19.
Even people outside of California have heard about proposition 13, the tax-revolt referendum which, exactly 29 years ago, changed the property tax law so that one’s property taxes only go up marginally while you own a property. Your tax base remains fixed at the price you paid for your house, with minor increments. If you sell and buy a house of similar value (or inherit in many cases) your tax basis and tax bill can jump alarmingly.
The goal of Prop 13 was that people would not find themselves with a tax bill they couldn’t handle just because soaring real estate values doubled or tripled the price of their home, as has often taken place in California. (Yes, I can hear your tears of sympathy.) In particular older people living off savings were sometimes forced to leave, always unpopular.
However, there have been negative consequences. One, it has stopped tax revenues from rising as fast as the counties like, resulting in underfunding of schools and other public programs. (This could be fixed by jacking up the rates even more on more recent buyers of homes but that has its own problems.)
Two, it generates a highly inequitable situation. Two identical families living in two identical houses — but one has a tax bill of $4,000 per year and the other has a tax bill of $15,000 per year, based entirely on when they bought or inherited their house. I would think this is unconstitutional but the courts said it is not.
Three, it’s an impediment to moving (as if the realtor monopoly’s 6% scam were not enough). There are exemptions in most counties for moves within California by seniors.
Here’s my fix: Each house would, as in most jurisdictions, be fairly appraised, and receive a tax bill based on that. Two identical houses — same tax bill. However, those who had a low basis value in their home could elect to defer some of that bill (ie. the difference between the real bill and their base bill derived from the price they paid for their home) until they sold the home. There would be interest on this unpaid amount, in effect they would be borrowing against the future equity of the home in order to have a lower tax bill. read more »
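A quick sketch of how the deferred amount would accumulate (the 5% rate and the dollar figures are assumptions for illustration, not a concrete proposal):

```python
def deferred_balance(full_bill, base_bill, years, rate=0.05):
    """Accumulate the deferred portion of the tax (fair assessed bill minus the
    Prop-13-style base bill) with compound interest, as a lien on the home that
    is settled when it sells."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + rate) + (full_bill - base_bill)
    return balance

# e.g. a $15,000 fair bill vs a $4,000 base bill, deferred for 10 years at 5%
print(round(deferred_balance(15000, 4000, 10)))
```

The county gets the full fair bill on its books every year; the long-time owner pays the low bill in cash and the difference out of future equity.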
Submitted by brad on Mon, 2007-06-04 11:01.
Here’s a new approach to Linux adoption: create a Linux distro that converts a Windows machine to Linux, marketed as a way to solve many of your virus/malware/phishing woes.
Yes, for a long time Linux distros have installed themselves alongside Windows, dual-boot. And there are distros that can run in a VM on Windows, or look Windows-like, but here’s a set of steps to go much further, thanks to how cheap disk space is today. read more »
- Yes, the distro keeps the Windows install around for dual boot, but it also builds a virtual machine so it can be run under Linux. Of course hardware drivers differ when running under a VM, so this is non-trivial, and Windows XP and later will claim they are stolen if they wake up in different hardware. You may have to call Microsoft to reactivate, something they may eventually try to stop.
- Look through the Windows copy and see what apps are installed. For apps that migrate well to linux, either because they have equivalents or run at silver or gold level under Wine, move them into linux. Extract their settings and files and move those into the linux environment. Of course this is easiest to do when you have something like Firefox as the browser, but IE settings and bookmarks can also be imported.
- Examine the windows registry for other OS settings, desktop behaviours etc. Import them into a windows-like linux desktop. Ideally when it boots up, the user will see it looking and feeling a lot like their windows environment.
- Using remote window protocols, it’s possible to run Windows programs in a virtual machine with their windows on the X desktop. Try this for some apps, though understand that some things, like inter-program communication, may not work as well.
- Next, offer programs directly in the virtual machine as another desktop. Put the Windows programs on the Windows-like “start” menu, but have them fire up in the virtual machine, possibly even firing up the VM on demand. Again, memory is getting very cheap.
- Strongly encourage that the Windows VM be operated in a checkpointing manner, where it is regularly reverted to a known clean base state, if possible.
- The linux host, sitting outside the Windows VM, can examine the VM’s TCP traffic to check for possible infections or strange traffic to unusual sites. A database like SiteAdvisor’s can help spot these anomalies and prompt restoring the Windows VM to a safe checkpoint.
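The last step might look something like this sketch, where the host flags outbound connections to destinations with a bad reputation. The blocklist contents and the alert logic are invented for illustration; a real version would pull reputation data from a service like SiteAdvisor:

```python
# Sketch of host-side traffic monitoring for the Windows VM.
# The blocklist below is a made-up stand-in for a real reputation database.

SUSPICIOUS_HOSTS = {"malware.example.net", "phish.example.org"}

def check_connections(destinations):
    """Return the set of flagged destination hosts. A real monitor would
    then suggest reverting the VM to its last clean checkpoint."""
    return {host for host in destinations if host in SUSPICIOUS_HOSTS}

# Hosts observed in the VM's outbound traffic (hypothetical):
observed = ["updates.example.com", "malware.example.net", "mail.example.com"]
flagged = check_connections(observed)
if flagged:
    print("Suspicious traffic to:", sorted(flagged),
          "- consider restoring the VM checkpoint.")
```

The interesting design point is that the monitor runs outside the VM, so malware inside Windows cannot easily tamper with it.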
Submitted by brad on Mon, 2007-06-04 00:20.
You’ve all seen it many times. You hit the ‘back’ button and the browser tells you it must resubmit a form, which may be dangerous, in order to go back. Much of the blame for this, I presume, lies with pages not setting suitable cache TTLs on responses served by forms, but I think we could provide more information here, even alongside accurate cache settings.
I suggest that when responding to a form POST, the HTTP response should be able to indicate how safe it is to re-post the form, based on what side effects (other than returning a web page) posting the form had. Some forms are totally safe to re-POST, and the browser need not ask the user about them, instead treating them more like a GET.
(Truth be told, the browser should not really treat GET and POST differently; my proposed header would be a better way to handle both.)
The page could report that the side effects are major (like completing a purchase, or launching an ICBM) and thus that re-posting should be strongly warned against. The best way to do this would be a string, contained in the header or in the HTML, so the browser can say, for example, “This requires resubmitting the form, which will [description of the side effect].”
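To make this concrete, a response to a POST might carry a header like the following. The header name `Repost-Side-Effects` and its syntax are my invention for illustration, not part of any real standard:

```
HTTP/1.1 200 OK
Content-Type: text/html
Cache-Control: no-cache
Repost-Side-Effects: Major; msg="complete your purchase a second time"
```

A browser seeing this could render the prompt as “This requires resubmitting the form, which will complete your purchase a second time,” rather than today’s generic warning.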
This is, as noted, independent of whether the results will be the same, which is what the cache settings are for. A form that loads a webcam image has no side effects, but returns a different result every time, and that result should not be cached.
We could also add some information to the request, telling the server that the form has been re-posted from saved values rather than by explicit user input. The server could then decide what to do. This becomes important when the user re-posts without having received a full response due to an interruption or reload; that way the server can know this happened and possibly get a pointer to the prior attempt.
In addition, I would not mind if the back-button prompt about form re-posting offered me the ability to just see the expired cache material, since I may not want the delay of a re-post.
With this strategy in mind, it also becomes easier to create the deep bookmarks I wrote of earlier, with less chance for error.
Some possible levels of side effects could be None, Minor, Major and Forbidden. The tag could also appear as an HTML attribute on the form element itself, but then it could not reveal things that can only be calculated after posting, such as which side effects actually occurred.
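A browser’s decision logic for those four levels might be sketched like this. The level names come from the proposal above; the behavior chosen for each level is my assumption about what a sensible browser would do:

```python
# Sketch of how a browser might act on the proposed side-effect levels
# when the user navigates back to a page that resulted from a POST.
# The four level names are from the proposal; the mapped behaviors are
# assumptions for illustration.

def repost_action(level):
    """Decide how to handle re-posting a form, given the declared level."""
    actions = {
        "None": "resubmit silently, as with a GET",
        "Minor": "resubmit after a mild notice",
        "Major": "warn strongly and require explicit confirmation",
        "Forbidden": "refuse to resubmit; offer the cached page instead",
    }
    if level not in actions:
        # Unknown or missing level: fall back to today's cautious prompt.
        return "show the standard resubmission warning"
    return actions[level]

print(repost_action("Major"))
```

Note that an absent header degrades gracefully to current browser behavior, so the scheme is backward compatible.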
Submitted by brad on Sun, 2007-06-03 20:27.
In a chat I had recently with another communications geek, we talked about the well-known problem with videoconferencing systems: you look at the person on the screen, but the camera is not where you are looking, so eye contact is not possible.
There have been a few attempted solutions. You can use a display with a beam-splitting mirror that lets a camera see a well-lit subject, at some cost in image quality, though you still need to keep the camera aligned with the eyes. There has been some experimentation with software that places cameras at the left and right of the screen and combines the two images into one from a virtual camera at the eye point, or, more simply, rewrites the image of the eye to move the pupil to the right place. That turns out to be hard to do because we are very discerning about eyes looking “natural,” though it may become possible.
Another approach has been semi-transparent displays that a camera can look through, but we like our displays crisp and bright. A decade ago I saw people claiming they could build a display that could focus light without a lens, so each cell could have a sensor, but I have not seen anything come of this. In the end, most people place the camera near the top of the screen, with the image right under it.
Having the image under the camera makes the person look like they are looking down. Some women perceive this as something else they frequently see: men staring at their chests when talking to them. Yes, we’re pretty much all guilty of this.
So I came up with an amusing, not entirely serious answer: put the camera below the image and then, for men at least, stare at her chest, or an imaginary one below the edge of the screen. Then you would be looking at the camera, and thus at the other person.
Amusingly, when videophones are shown on TV, we almost always see the people staring right into them, because they are TV actors who know how to find their camera.