Submitted by brad on Thu, 2011-12-22 15:49.
This time of year I do a lot of online shopping, and my bell rings with many deliveries. But they come today and tomorrow, not Saturday. The post office delivers on Saturday but has announced it wants to stop doing that to save money. They do need to save money, but this is the wrong approach. I think the time has come for Saturday and Sunday delivery to be the norm for UPS, FedEx and the rest.
When I was young almost all retailers closed on Sunday and even had limited hours on Saturday. Banks never opened on the weekend either. But people soon realized that because the working public had the weekend off, the weekend was the right time for consumer services to be operating. The weekend days are the busiest days at most stores.
The shipping companies like Fedex and UPS started up for business to business, but online shopping has changed that. They now do a lot of delivery to residences, and not just at Christmas. But Thursday and Friday are these odd days in that business. An overnight package on Friday gets there 3 days later, not 1. (If you use the post office courier, you get Saturday delivery as part of the package, and the approximately 2 day Priority mail service is a huge win for things sent Thursday.) In many areas, the companies have offered Saturday and even Sunday delivery, but only as a high priced premium service. Strangely, the weekend also produces a gap in ground shipping times — the truck driving cross-country presumably pauses for 2 days.
We online shoppers shop 7 days a week and we want our stuff as soon as we can get it. I understand the desire to take the weekend off, but usually there are people ready to take these extra shifts. This will cost the delivery companies more, as they will have to hire more workers to operate on the weekend. And they can’t just do it for ground (otherwise a 3 day package sent Friday arrives at the same time as an overnight package.)
Update: I will point out that while online shopping is the David to the Goliath of brick & mortar, changing shipping to 7 days a week will mean a bunch more stuff gets bought online, and shipped, and will bring new revenue to the shipping companies. It’s not just a cost of hiring more people. It also makes use of infrastructure that sits idle 2 days a week.
This is particularly good for those who are not home to sign for packages that come during the work week. The trend is already starting. OnTrac, which has taken over a lot of the delivery from Amazon’s Nevada warehouse to Californians, does Saturday delivery, and it’s made me much more pleased with Amazon’s service. When Deliverbots arrive, this will be a no-brainer.
Submitted by brad on Mon, 2011-10-03 10:27.
I’m actually not a fan of login and sessions on the web, and in fact prefer a more stateless concept I call authenticated actions to the more common systems of login and “identity.”
But I’m not going to win the day soon on that, and I face many web sites that think I should have a login session, and that the session should terminate if I don’t click in the browser often enough. This frequently has really annoying results — you can be working on a complex form or other activity, switch away briefly to other web sites or email, and come back to find that “your session has expired” and you have to start from scratch.
There are times when there is an underlying reason for this. For example, when booking things like tickets, the site needs to “hold” your pending reservation until you complete it, but if you’re not going to complete it, they need to return that ticket or seat to the pool for somebody else to buy. But many times sessions expire without that reason. Commonly the idea is that for security, they don’t want to leave you logged on in a way that might allow somebody to come to your computer after you leave it and take over your session to do bad stuff. That is a worthwhile concept, particularly for people who will do sessions at public terminals, but it’s frustrating when it happens on the computer in your house when you’re alone.
Many sites also overdo it. While airlines need to cancel your pending seat requests after a while, there is no reason for them to forget everything and make you start from scratch. That’s just bad web design. Other sites are happy to let you stay “logged on” for a year.
To help, it would be nice if the browser had a way of communicating things it knows about your session with the computer to trusted web sites. The browser knows if you have just switched to other windows, or even to other applications where you are using your mouse and keyboard. Fancier tools have even gone so far as to use your webcam and microphone to figure out whether you are still at your desk or have left the computer. And you know whether your computer is in a public space, semi-public space or entirely private space. If a browser, or browser plug-in, had a standardized way to let a site query session status, or be informed of session changes and per-machine policy, sites could be smarter about logging you out. Your bank should still be paranoid about a session where you can spend your money, but it could be more informed about it.
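To make the idea concrete, here is a minimal sketch of how a site might use such reports, assuming a hypothetical browser API that exposes machine privacy, idle time and user presence (no such API exists today; every field here is invented):

```python
from dataclasses import dataclass

@dataclass
class SessionStatus:
    # All fields are hypothetical -- illustrating what a browser could report.
    machine_privacy: str   # "private", "semi-public", or "public"
    idle_seconds: int      # time since last keyboard/mouse activity
    user_present: bool     # e.g. inferred from webcam, if the user opted in

def should_expire(status: SessionStatus, base_timeout: int = 900) -> bool:
    """Decide whether to expire a login session, scaling the timeout
    by how exposed the machine is instead of using one fixed number."""
    multiplier = {"private": 8, "semi-public": 2, "public": 1}[status.machine_privacy]
    if status.machine_privacy == "public" and not status.user_present:
        return True  # be paranoid about a public terminal the user walked away from
    return status.idle_seconds > base_timeout * multiplier
```

The point is not the particular numbers but that the policy becomes a function of context the browser already has, rather than one timeout applied to every machine.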
Submitted by brad on Mon, 2011-08-22 12:12.
Today an op-ed by John Sununu and Harold Ford Jr. of “Broadband For America” (a group of cable companies and other ISPs which says it is really a grass-roots organization) declared that the net needs a better pricing model for what Netflix is doing. For a group of ISPs, they really seem to not understand how the internet works and how pricing works, so I felt it was worthwhile to describe how things work with a remarkably close analogy. (I have no association with Netflix, I am not even a customer, but I do stream video on the net.)
You can liken the internet to a package delivery service that works somewhat differently from traditional ones like the postal service or FedEx. The internet’s pricing model is “I pay for my line to the middle, and you pay for your line to the middle and we don’t account for the costs of individual traffic.”
In the package model, imagine a big shipping depot. Shippers send packages to this depot, and it’s the recipient’s job to get the package from the depot to their house. The shippers pay for their end, you pay for your end, and both share the cost of creating the depot.
Because most people don’t want to go directly to the depot to get their packages, a few “last mile” delivery companies have sprung up. For a monthly fee, they will deliver anything that shows up at the depot addressed to you directly to your house. They advertise in fact, that for the flat fee, they will deliver as many packages as show up, subject to a fairly high maximum rate per unit of time (called bandwidth in the internet world.) They promote and compete on this unlimited service.
To be efficient, the delivery companies don’t run a private truck from the depot to your house all the time. Instead, they load up a truck with all the packages for your neighbourhood, and it does one delivery run. Some days you have a lot of packages and your neighbours have few. Other days you have few and they have a lot. The truck is sized to handle the high end of the total load for all the neighbours. However, it can’t handle a large number of the neighbours all wanting to use a large fraction of their individual maximums on the same day; they just didn’t buy enough trucks for that, even though that’s what they advertised they were selling.
This is not unreasonable. A majority of the businesses in the world that sell flat rate service work this way, not just internet companies. Though there are a few extra twists in this case:
- The last mile companies have a government granted franchise. Only a couple can get permission to operate. (In reality — only a few companies have got permission to have wires strung on poles or under the street.)
- Some of the last mile companies also used to be your exclusive source for some goods (in this case phone service and TV) and are concerned that now there are competitors delivering those things to the customers.
The problem arises because new services like Netflix suddenly have created a lot more demand to ship packages. More than the last mile companies counted on. They’re seeing the truck fill up and need to run more trucks. But they proudly advertised unlimited deliveries from the depot to their customers. So now, in the op-ed, they’re asking that companies like Netflix, in addition to paying the cost of shipping to the depot, pay some of the cost for delivery from the depot to the customer. If they did this, companies would pass this cost on to the customer, even though the customer already paid for that last mile delivery.
Submitted by brad on Tue, 2011-03-15 23:13.
ICANN is meeting in San Francisco this week. And they’re getting closer to finally implementing a plan they have had in the works for some time to issue new TLDs, particularly generic top level domains.
Their heart is in the right place, because Verisign’s monopoly on “.com” — which has become the de facto only space where everybody wants a domain name, if they can get it — was a terrible mistake that needs to be corrected. We need to do something about this, but the plan of letting other companies get generic TLDs which are ordinary English words, with domains like “.sport” and “.music” (as well as .ibm and .microsoft) is a great mistake.
I have an odd ambivalence. This plan will either fail (as the others like .travel, .biz, .museum etc appear to have) or it will succeed at perpetuating the mistake. Strangely it is the trademark lawyers who know the answer to this. In trademark law, it was wisely ruled centuries ago that nobody gets ownership of generic terms. But some parties will offer the $185,000 fee to own .music precisely because they hope it will give them a monopoly on naming of music related internet sites. Like all monopolies these TLDs will charge excessive fees and give poor customer service. They’ll also get to subdivide the monopoly selling domains like rock.music or classical.music. And while .music will compete with .com, the new TLDs will largely not compete with one another — ie. nobody will be debating whether to go with .music or .sport, and so we won’t get the competition we truly need.
I’ve argued this before, but I have just prepared two new essays in my DNS sub-site:
Since I don’t like either of the two main consequences, what do I propose? Well for years I have suggested we should instead have truly competitive TLDs which can compete on everything — price, policies, service, priority and more. They should each start on an equal footing so they are equal competitors. That means not giving any one a generic name that has an intrinsic value like “.music.” People will seek out the .music domain not because the .music company is good or has good prices, they will seek it out because they want to name a site related to music, and that’s not a market.
Instead I propose that new TLDs be what trademark people call “coined terms” which are made up words with no intrinsic meaning. Examples from the past include names like Kodak, Xerox and Google. Today, almost every new .com site has to make up a coined term because all the generics are taken. If the TLDs are coined terms, then the owners must build the value in them by the sweat of their brow (or with money) rather than getting a feudal lordship over an existing space. That means they can all compete for the business of people registering domains, and competition is what’s good for the market and the users.
Sadly the .com monopoly remains (along with the few other generic TLDs.) The answer there is to announce a phase-out. All .com sites with generic meanings should get new names in the new system, but after a year or two they’ll get redirects for as long as they want to pay. (Their new registrar will manage this and set the price.) All HTTP requests, in particular, would get an HTTP 301 (Moved Permanently) redirect so the browser shows the new name. E-mail MX would be provided but all sent email would use the new name. All old links and addresses would still work forever, but users would switch advertising and everything else to the new names at a reasonable pace. Yes, people who invested lots of money in trying to own words like “drugstore.com” lose some of that value, but it’s value they should never have been sold in the first place. (Companies with unique strings like microsoft.com could avoid the switch, but not non-unique ones like apple.com or ibm.com.)
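The redirect side of such a phase-out is mechanically trivial; here is a minimal WSGI sketch, where the old-to-new mapping and the coined target domain are invented purely for illustration:

```python
# Hypothetical mapping from a retired generic .com name to a new coined name.
REDIRECTS = {
    "drugstore.com": "exampleshop.zyxo",  # made-up target, for illustration only
}

def app(environ, start_response):
    """Minimal WSGI app: answer every request on an old name with a
    301 Moved Permanently pointing at the same path on the new name."""
    host = environ.get("HTTP_HOST", "").split(":")[0]
    new_host = REDIRECTS.get(host)
    if new_host:
        location = "http://" + new_host + environ.get("PATH_INFO", "/")
        start_response("301 Moved Permanently", [("Location", location)])
        return [b""]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"unknown host"]
```

Because the status is 301 (permanent), browsers and search engines update to the new name on their own; the registrar would just keep this table alive for as long as the old owner pays.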
Check out the essays for the real details. Of course, at this point the forces of the “stakeholders” at ICANN are so powerful that I am tilting at windmills. They will go ahead even though it’s the wrong answer. And once done, it will be as hard to undo as .com is. But the right answer should still be proclaimed.
Submitted by brad on Wed, 2011-03-09 15:19.
In media today, it’s common to talk about three screens: Desktop, mobile and TV. Many people watch TV on the first two now, and tools like Google TV and the old WebTV try to bring interactive, internet style content to the TV. People like to call the desktop the “lean forward” screen where you use a keyboard and have lots of interactivity, while the TV is the “lean back” couch-potato screen. The tablet is also distinguishing itself a bit from the small screen normally found in mobile.
More and more people also find great value in having an always-on screen where they can go to quickly ask questions or do tasks like E-mail.
I forecast we will soon see the development of a “fourth screen” which is a mostly-always-on wall panel meant to be used with almost no interaction at all. It’s not a thing to stare at like the TV (though it could turn into one) nor a thing to do interactive web sessions on. The goal is to have minimal UI and be a little bit psychic about what to show.
One could start by showing stuff that’s always of use. The current weather forecast, for example, and selected unusual headlines. Whether each member of the household has new mail, and if it makes sense from a privacy standpoint, possibly summaries of that mail. Likewise the most recent status from feeds on twitter or Facebook or other streams. One could easily fill a screen with these things so you need a particularly good filter to find what’s relevant. Upcoming calendar events (with warnings) also make sense.
Some things would show only when important. For example, when getting ready to go out, I almost always want to see the traffic map. Or rather, I want to see it if it has traffic jams on it, no need to show it when it’s green — if it’s not showing I know all is good. I may not need to see the weather if it’s forecast sunny either. Or if it’s raining right now. But if it’s clear now and going to rain later I want to see that. Many city transit systems have a site that tracks when the next bus or train will come to my stop — I want to see that, and perhaps at morning commute time even get an audio alert if something unusual is up or if I need to leave right now to catch the street car. A view from the security camera at the door should only show if somebody is at the door.
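The “show only when it matters” rules above can be sketched as a small filter. Every field name here is a made-up example, not a real feed:

```python
def widgets_to_show(state: dict) -> list:
    """Pick which panels appear on the wall screen: a panel earns its
    place only when it carries information, per the rules in the text."""
    shown = []
    if state.get("traffic_jams"):          # an all-green map carries no information
        shown.append("traffic")
    if state.get("rain_later") and not state.get("raining_now"):
        shown.append("weather")            # clear now but rain coming: worth a warning
    if state.get("visitor_at_door"):
        shown.append("door_camera")
    if state.get("next_bus_minutes", 99) <= 10:
        shown.append("transit")            # only near commute-relevant times
    return shown
```

The interesting design work is in the predicates, not the rendering: each one encodes “what would I actually want to glance at right now?”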
There are so many things I want to see that we will need some UI for the less popular ones. But it should be a simple UI, with no need to find a remote (though if I have a remote — any remote — I should be able to use it.) Speech commands would be good to temporarily see other screens and modes. A webcam (and eventually Kinect style sensor) for gestural UI would be nice, letting me swipe or wave to get other screens.
Submitted by brad on Fri, 2011-02-18 17:29.
You may have heard of Bus Rapid Transit — a system to give a bus line a private or semi-private right-of-way, along with bus stops that are more akin to stations than bus shelters (with ticket-taking machines and loading platforms for multiple doors.) The idea is to make bus transit competitive with light-rail (LRT) in terms of speed and convenience. Aside from getting caught in slow traffic, buses also are slow to board. BRT is hoped to be vastly less expensive than light rail — which is not hard because LRT (which means light capacity rail, not lightweight rail) has gotten up to $80 to $100M per mile. When BRT runs down the middle of regular roads, it gets signal timing assistance to help it have fewer stops. It’s the “hot new thing” in transit. Some cities even give it bits of underground or elevated ROW (the Boston Silver Line) and others just want to wall off the center of a road to make an express bus corridor. Sometimes BRT gets its own highway lane or shares a special carpool lane.
At the same time just about anybody who has looked at transit and the internet has noticed that as the buses go down the street, they travel with tons of cars carrying only one person and lots of empty seats. Many have wondered, “how could we use those empty private car seats to carry the transit load?” There are a number of ride-sharing and carpooling apps on web sites and on smartphones, but success has been modest. Drivers tend to not want to take the time to declare their route, and if money is offered, it’s usually not enough to counter the inconvenience. Some apps are based on social networks so friends can give rides to friends — great when it works but not something you can easily do on demand.
But one place I’ve seen a lot of success at this is the casual carpooling system found in a number of cities. Here it’s very popular for crossing the Oakland-SF Bay Bridge, which has a $6 toll going into SF. It used to be free for 3-person carpools; now it’s $2.50, but the carpools also get a faster lane for access to the highly congested bridge both going into and out of SF.
Almost all the casual carpool pickup spots coming in are at BART (subway) stations, which are both easy for everybody to get to, and which allow those who can’t get a carpool to just take the train. There is some irony that it means that the carpools mostly take people who would have ridden BART trains, not people who would have driven, the official purpose of carpool subsidies. In the reverse direction the carpools are far fewer with no toll to be saved, but you do get a better onramp.
People drive the casual carpools because they get something big for it — saving over $1,000/year, and hopefully a shorter line to the bridge. This is the key factor to success in ride sharing. The riders are saving a similar amount of money in BART tickets, even more if they skipped driving.
Let’s consider what would happen if you put in the dedicated lane for BRT, but instead of buses created an internet mediated carpooling system. Drivers could enter the dedicated lane only if:
- They declared their exit in advance to the app on their phone, and it’s far enough away to be useful to riders.
- They agree to pick up riders that their phone commands them to.
- They optionally get a background check that they pay for so they can be bonded in some way to do this. (Only the score of the background check is recorded, not the details.)
Riders would declare their own need for a ride, and to what location, on their own phones, or on screens mounted at “stops” (or possibly in nearby businesses like coffee shops.) When a rider is matched to a car, the rider will be informed and get to see the approach of their ride on the map, as well as a picture of the car and plate number. The driver will be signaled and told by voice command where to go and who to pick up. I suggest calling this Carpool-Rapid-Transit or CRT.
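The core matching step might look something like this sketch, assuming the corridor is an ordered list of exits and the driver has declared how far along it they are going (all names invented for illustration):

```python
def match_rider(riders, driver_exit, stops):
    """Pick the first waiting rider whose destination lies on the
    driver's declared route. `stops` is the ordered list of exits on
    the corridor; a destination is reachable if it comes at or before
    the exit the driver declared to the app."""
    reachable = set(stops[:stops.index(driver_exit) + 1])
    for rider in riders:
        if rider["destination"] in reachable:
            return rider
    return None  # nobody waiting is on this driver's way
```

A real system would also weigh wait times, seat counts and rider preferences, but the essence is this route-containment check done the moment a driver enters the lane.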
Submitted by brad on Fri, 2010-12-17 12:25.
Passwords are in the news thanks to Gawker media, who had their database of userids, emails and passwords hacked and published on the web. A big part of the fault is Gawker’s, who was saving user passwords (so it could email them) and thus was vulnerable. As I have written before, you should be very critical of any site that is able to email you your password if you forget it.
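For contrast, a site that stores only a salted, slow hash simply cannot email your password back, because it never keeps it. A minimal sketch of the standard approach, using PBKDF2 from the Python standard library:

```python
import hashlib
import hmac
import os

def store_password(password: str):
    """Return (salt, digest) to store -- never the password itself.
    The site can verify a login but can never recover or email the
    original password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the attempted password and compare in
    constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

If a site can send you your password, it is storing something equivalent to the password, and a breach like Gawker’s hands it to the attackers.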
Some of the advice to users in the wake of this has been to never use the same password on multiple sites, and that’s not at all practical in today’s world. I have passwords for many hundreds of sites. Most of them are like Gawker — accounts I was forced to create just to leave a comment on a message board. I use the same password for these “junk accounts.” It’s just not a big issue if somebody is able to leave a comment on a blog with my name, since my name was never verified in the first place. A different password for each site just isn’t something people can manage. There are password managers that try to solve this, creating different passwords for each site and remembering them, but these systems often have problems when roaming from computer to computer, trying out new web browsers, or when sites change their login pages.
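One way some password tools sidestep the roaming problem is to derive each site’s password from a single memorized master secret, so there is nothing to synchronize between machines. A sketch of the idea (an illustration, not a vetted scheme):

```python
import base64
import hashlib

def site_password(master: str, site: str) -> str:
    """Derive a distinct password per site from one memorized master
    secret. A breach at one site exposes nothing reusable elsewhere,
    and any computer can re-derive the password from the master alone."""
    raw = hashlib.pbkdf2_hmac("sha256", master.encode(), site.encode(), 200_000)
    return base64.urlsafe_b64encode(raw)[:16].decode()
```

The weakness, of course, is that the master secret becomes everything, and sites that change their domain or impose odd password rules still break the scheme.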
The long term solution is not passwords at all, it’s digital signatures (though those have all the problems listed above), and it’s not to even have logins at all, but instead to use authenticated actions so we are neither creating accounts to do simple actions nor using a federated identity monopoly (like Facebook Connect). This is better than OpenID too.
Submitted by brad on Sat, 2010-09-25 14:54.
Yesterday we had a meeting using some videoconferencing. In a situation I find fairly common, the setup was a meeting room with many people, and a small number of people calling in remotely. In spite of this being a fairly common situation, I have had trouble finding conferencing systems that do this particular task very well. I have not been looking at the high-priced end, but I believe the more modestly priced tools should be able to focus on this and make it work. Yesterday we used Oovoo, one of the few multi-party conference systems to support PC and Mac, with some good but many bad results.
The common answer, namely a speakerphone on the meeting room table and a conference bridge system, is pretty unsatisfactory, though the technology is stable enough that it is easy to get going. The remote people are never really part of the meeting. It’s harder for them to engage in random banter, and the call fidelity is usually low and never better than PSTN phone quality. They usually have trouble hearing some of the people in the meeting room, though fancier systems with remote microphones help a bit with that.
The audio level
The next step up is a higher quality audio call. For this Skype is an excellent and free solution. The additional audio quality offers a closer sense of being in the room, and better hearing in both directions. It comes with a downside in that tools like Skype often pick up ambient noise in the room (mostly with remote callers) including clacking of keyboards, random background noises and bleeps and bloops of software using the speakers of the computer. While Skype has very good echo cancellation for those who wish to use it in speakerphone mode, I still strongly recommend the use of headsets by those calling in remotely, and even the judicious use of muting. There’s a lot more Skype and others could do in this department, but a headset is a real winner, and they are cheap.
Most of these notes also apply to video calling which of course includes audio.
Submitted by brad on Wed, 2010-03-24 18:02.
Today an interesting paper (written with the assistance of the EFF) was released. The authors have found evidence that governments are compromising trusted “certificate authorities” by issuing warrants to them, compelling them to create a false certificate for a site whose encrypted traffic they want to snoop on.
That’s just one of the many ways in which web traffic is highly insecure. The biggest reason, though, is that the vast majority of all web traffic takes place “in the clear” with no encryption at all. This happens because SSL/TLS, the “https” system is hard to set up, hard to use, considered expensive and subject to many false-alarm warnings. The tendency of security professionals to deprecate anything but perfect security often leaves us with no security at all. My philosophy is different. To paraphrase Einstein:
Ordinary traffic should be made as secure as can be made easy to use, but no more secure
In this vein, I have prepared a new article on how to make the web much more secure, and it makes sense to release it today in light of the newly published threat. My approach, which calls for new browser behaviour and some optional new practices for sites, calls for the following:
- Make TLS more lightweight so that nobody is bothered by the cost of it
- Automatic provisioning (Zero UI) for self-signed certificates for domains and IPs.
- A different meaning for the lock icon: Strong (Locked), Ordinary (no icon) and in-the-clear (unlocked).
- A new philosophy of browser warnings with a focus on real threats and on changes in security, rather than static states deemed insecure.
- A means for sites to provide a file advising browsers about which warnings make sense at that site.
There is one goal in mind here: the web must become encrypted by default, with no effort on the part of site operators and users, and the false-positive warnings that fire so frequently they make security poor and hard to use must be eliminated.
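As an illustration of the middle “encrypted but unauthenticated” tier, a client can already be configured to accept self-signed certificates and still encrypt the connection. A sketch in Python (note this deliberately defeats authentication, so it corresponds to the no-icon tier above, not the locked one):

```python
import ssl

def opportunistic_context() -> ssl.SSLContext:
    """Build a client TLS context that accepts self-signed certificates:
    no authentication of the server, but traffic is still encrypted
    against passive eavesdropping."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # must be disabled before verify_mode
    ctx.verify_mode = ssl.CERT_NONE     # accept any certificate; encrypt anyway
    return ctx
```

Today a browser treats this configuration as an error worth a scary warning; the argument above is that, compared to plain http, it should instead be the silent default.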
If you have interest in browser design and security policy I welcome your comments on A new way to secure the web.
Submitted by brad on Tue, 2010-01-26 05:28.
I’m at DLD in Munich, and going to Davos tomorrow. While at DLD I made a brief mention during a panel on identity and tracking of my concept of the privacy dangers of the AIs of the future, which are able to extract things from recorded data (like faces) that we can’t do today.
I mentioned a new idea, however: a search engine which focuses on the negative, because through advanced algorithms it can tell the difference between positive and negative content.
We’re quite interested in dirt. Every eBay user who looks at a seller’s feedback would like to see only the negative comments, as the positive ones carry almost no information. eBay doesn’t want to show this; it wants people to see eBay sellers as positive and to bid.
But a lot of the time if we are investigating a company we might do business with or even a person, we want to focus on the negative. A company with few complaints is of interest to us. AI software will exist to find such complaints, and possibly even to do things like understand photos and know which ones might be a source of embarrassment, or read postings on message boards and tell which ones are damning. This is hard to do well today, but will change over time.
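A toy stand-in for such a dirt-finding engine, using nothing but an invented word list; a real system would need actual language understanding, which is exactly the hard part the text describes:

```python
# A crude negative-cue list, invented for illustration.
NEGATIVE_WORDS = {"complaint", "scam", "refund", "broken", "lawsuit", "terrible"}

def dirt_score(text: str) -> int:
    """Count negative cue words: a placeholder for the future AI that
    would actually understand whether a posting is damning."""
    return sum(1 for w in text.lower().split() if w.strip(".,!?") in NEGATIVE_WORDS)

def only_the_dirt(reviews, threshold=1):
    """Filter a feedback list down to the negative items -- the view
    eBay won't show you."""
    return [r for r in reviews if dirt_score(r) >= threshold]
```

The gap between this word-counting and reading a message board to tell which postings are damning is the gap the post predicts AI will close.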
This will have deep consequences for concepts of reputation. Those with a big online presence certainly have bad stuff written by or about them out there. Normally, however, it is buried in the large volume of stuff, and doesn’t get high search engine rankings. However, our human thirst for gossip and dirt will result in some search engines pushing it to the top. In addition, there will be those wanting to game this with deliberate libel of their enemies and competitors. Today they can do this but their libels will be hidden in the large volume of information.
Some have proposed that in the future it will be necessary to pay a service to libel you, and spread lots of false material that buries and discredits any libel left by enemies (as well as true negative comments.) The AIs may be able to spot the difference, but that’s an arms race which can’t easily be predicted.
It is likely that all the bad in our lives will haunt us even more than we already fear. Efforts by some countries to pass laws which let people delete alleged libels will not work, and may bring even more attention to the materials. While you might be able to remove your tag from a photo on Facebook, once that photo makes it into a system that can do face recognition, the tag will come back, and in ways beyond your control.
Submitted by brad on Thu, 2010-01-21 16:28.
These days it is getting very common to make videos of presentations, and even to do live streams of them. And most of these presentations have slides in Powerpoint or Keynote or whatever. But this always sucks, because the camera operator — if there is one — never moves between the speaker and the slide the way I want. You can’t please everybody of course.
In the proprietary “web meeting” space there are several tools that will let people do a video presentation and sync it with slides, ideally by pre-transmitting the slide deck so it is rendered in full resolution at the other end, along with the video. In this industry there are also some video players where you can seek along in the video and it automatically seeks within the slides. This can be a bit complex if the slides try to do funny animations but it can be done.
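The seek-synchronization itself is simple once you have a list of (timestamp, slide) sync points; a sketch of the lookup such a player would do on every seek:

```python
import bisect

def slide_at(sync_points, seek_seconds):
    """Given [(timestamp, slide_number), ...] sorted by time, return the
    slide that should be showing at a given point in the video. Uses a
    binary search, so it stays cheap even for long decks."""
    times = [t for t, _ in sync_points]
    i = bisect.bisect_right(times, seek_seconds) - 1
    return sync_points[i][1] if i >= 0 else None  # None: before the first slide
```

The harder part, as noted, is capturing the sync points on the presenting machine in the first place, and handling builds and animations within a slide.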
Obviously it would be nice to see a flash player that understands it is playing a video and also playing slides (even video of slides, though it would be better to do it in higher quality since it isn’t usually full motion video.) Sites like youtube could support it. However, getting the synchronization requires that you have a program on the presenting computer, which you may not readily control.
One simple idea would be a button the camera operator could push to say “Copy this frame to the slide window.” Then when there is a new slide, the camera would move or switch over there, the button would be pushed, and the camera could go immediately back to the speaker. Usually though the camera crew has access to the projector feed and would not need to actually point a camera; in fact some systems “switch” to the slides by just changing the video feed. A program which sends the projector feed with huge compression (in other words, an I-frame for any slide change and nothing after) would also work well. No need to send all the fancy transitions.
But it would be good to send the slides not as MPEG but in a PNG-style lossless format, to be sharper, if you can’t get access to the slides themselves. I want a free tool, so I can’t ask for the world yet, but even something as basic as this would make my watching of remote presentations and talks much better. And it would make people watching my talks have a better time too — a dozen or so of them are out on the web.
I’m in O’Hare waiting to fly to Munich for DLD. More details to come.
Submitted by brad on Fri, 2010-01-15 17:50.
There’s a phenomenon we’re seeing more and more often. A company screws over a customer, but this customer now has a means to reach a large audience through the internet, and as a result it becomes a PR disaster for the company. The most famous case recently was United Breaks Guitars where Nova Scotia musician David Carroll had his luggage mistreated and didn’t get good service, so he wrote a funny song and music video about it. 7 million views later, a lot of damage was done to United Airlines’ reputation.
I’ve done this myself to companies who refuse to fix things. I will write a page about the incident sometimes, and due to my high google pagerank, the page will show up high. Do a Google search for Qwest Long Distance and you’ll see the first hit is Qwest, and the 2nd is my boring but frustrating story of bad service. I’m not the only one to have done this. Over 200 people per month visit that page — which has been up for almost a decade — and you have to assume they have lost more business than it would have cost to make things right.
Now I think all good companies should make things right whenever they can to show that the errors are rare enough that they can afford to go the extra mile and fix them. If you won’t fix them, it means you must have a lot of them.
However, companies are soon going to realize that there are a whole raft of “minor celebrities” like David Carroll and even myself who can do far more damage than they can tolerate. Companies have always given top notch service to A-list celebrities, and even to B-list. Not just gift bags at the Oscars. When I was a kid, my father was A-list for a time in Canada, and that meant that when he got on a plane with a coach ticket, the flight attendant escorted him to first class. That was in the days before first class was always full due to upgrades, of course.
But there are tens of thousands, maybe hundreds of thousands of people who can be a risk for a company if they piss us off. All bloggers with a decent audience (and even some who have an audience that includes the A-list bloggers.) People with high search engine rank. People who can simply write well to get their story out there — in particular people who are good at making a story funny and entertaining. And of course, musicians and people who are good at video editing and producing viral videos. Perhaps them most of all.
So I predict that before long, services will spring up to enumerate these D-list and E-list celebrities and potential celebs. Everybody will get graded. And a flag will show up in the customer service computer for the top few percentiles saying, “this one is an influencer.” It will say, “you are authorized, though you are just a script-monkey customer rep, to do more for this customer.” Or you might just be directed right to a more powerful rep. This “long tail elite” may just start getting better service and even better deals, so long as they identify themselves first.
Companies have done this for some time based on how good a customer you are, ie. how much you spend. If you are a big spending customer, you get the magic 800 number or just get routed to the better service due to your frequent flyer number or even caller-ID. But I’m talking about doing this not just for those who spend a lot, but for those who influence a lot of spending — or could influence it in a negative way.
And of course they are working hard to make us identify ourselves in every transaction, just not yet for this. People who review products for a living will need to be sure they are anonymous when they buy and ask for service. But oddly, negative reviews from professional reviewers are becoming less important than the horror story from the ordinary customer who got burned. Since most product reviewers at magazines are unwilling to go through the horrors of real customer service, they call the PR flacks and get top-rated service, and then explain in the review that they did this (if they are honest).
If you’re not in the long-tail elite, this is all a bad sign. You’ll never get much satisfaction, and the number of horror stories on the net will go down below what the true level should be. Of course you will be able to join the long tail elite if you want to, since I am sure those who track it will note the names of people who regularly show up on consumer complaint message boards that have high readership or rank. But that’s a lot of work.
It doesn’t really do a lot of good for the rest of the world if perks are given to the long tail elite. Better just for companies to get good enough that they make mistakes rarely, and thus can afford to go the extra distance to fix them when it happens.
Submitted by brad on Tue, 2009-12-15 16:46.
I think URL shorteners are a curse, but thanks to Twitter they are growing vastly in use. If you don’t know, URL shorteners are sites that will generate a compact encoded URL for you, turning a very long link into a short one that’s easier to cut and paste, and in particular these days, one that fits in the 140-character constraint on Twitter.
I understand the attraction, and not just on twitter. Some sites generate hugely long URLs which fold over many lines if put in text files or entered for display in comments and other locations. The result, though, is that you can no longer determine where the link will take you from the URL. This hurts the UI of the web, and makes it possible to fool people into going to attack sites or Rick Astley videos. Because of this, some better twitter clients re-expand the shortened URLs when displaying on a larger screen.
Anyway, here’s an idea for the Twitter clients and URL shorteners, if they must be used. In a tweet, figure out how much room there is to put the compacted URL, and work with a shortener that will let you generate a URL of exactly that length. And if that length has some room, try to put in some elements from the original URL so I can see them. For example, you can probably fit the domain name, especially if you strip off the “www.” from it (in the visible part, not in the real URL.) Try to leave as many things that look like real words, and strip things that look like character encoded binary codes and numbers. Of course, in the end you’ll need something to make the short URL unique, but not that much. Of course, if there already is a URL created for the target, re-use that.
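As a sketch of that idea (all names here are hypothetical; no shortener I know of works exactly this way), the visible hint for a short URL might be built like so: keep the domain minus “www.”, then add path segments that look like real words rather than encoded IDs, until the length budget runs out.

```python
from urllib.parse import urlparse

def display_hint(long_url, budget):
    """Build a readable hint for a shortened URL within `budget` characters.

    Keeps the domain (stripping "www.") and as many word-like path
    segments as fit, skipping segments that look like opaque IDs."""
    parts = urlparse(long_url)
    host = parts.netloc
    if host.startswith("www."):
        host = host[4:]
    # Treat any segment containing digits as an encoded ID and drop it.
    words = [seg for seg in parts.path.split("/")
             if seg and not any(ch.isdigit() for ch in seg)]
    hint = host
    for word in words:
        candidate = hint + "/" + word
        if len(candidate) > budget:
            break
        hint = candidate
    return hint[:budget]

print(display_hint(
    "http://www.example.com/articles/12345/why-url-shorteners-hurt", 30))
# -> example.com/articles
```

The actual short URL would still need a unique suffix to resolve; the hint only governs what the reader sees in the tweet.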
Google just did its own URL shortener. I’m not quite sure what the motives of URL shortener sites are. While sometimes I see redirects that pause at the intermediate site, nobody wants that and so few ever use such sites. The search engines must have started ignoring URL redirect sites when it comes to pagerank long ago. They take donations and run ads on the pages where people create the tiny URLs, but when it comes to ones used on Twitter, these are almost all automatically generated, so the user never sees the site.
Submitted by brad on Tue, 2009-11-17 16:18.
(Update: I had a formatting error in the original posting, this has been fixed.)
A few weeks ago when I wrote about the non deployment of SSL I touched on an old idea I had to make web transactions vastly more efficient. I recently read about Google’s proposed SPDY protocol which goes in a completely opposite direction, attempting to solve the problem of large numbers of parallel requests to a web server by multiplexing them all in a single streaming protocol that works inside a TCP session.
While calling attention to that, let me outline what I think would be the fastest way to do very simple web transactions. It may be that such simple transactions are no longer common, but it’s worth considering.
Today the way this works is pretty complex:
- You do a DNS request for www.example.com via a UDP request to your DNS server. In the pure case this also means first asking the root servers where “.com” is, but your DNS server almost surely knows that already. So a UDP request is sent to the “.com” master server.
- The “.com” master server returns with the address of the server for example.com.
- You send a DNS request to the example.com server, asking where “www.example.com” is.
- The example.com DNS server sends a UDP response back with the IP address of www.example.com
- You open a TCP session to that address. First, you send a “SYN” packet.
- The site responds with a SYN/ACK packet.
- You respond to the SYN/ACK with an ACK packet. You also send the packet with your HTTP “GET” request for “/page.html.” This is a distinct packet but there is no roundtrip, so this can be viewed as one step. You may also close off your sending with a FIN packet.
- The site sends back data with the contents of the page. If the page is short it may come in one packet. If it is long, there may be several packets.
- There will also be acknowledgement packets as the multiple data packets arrive in each direction. You will send at least one ACK.
- The other server will ACK your FIN.
- The remote server will close the session with a FIN packet.
- You will ACK the FIN packet.
You may not be familiar with all this, but the main thing to understand is that there are a lot of roundtrips going on. If the servers are far away and the time to transmit is long, it can take a long time for all these round trips.
It gets worse when you want to set up a secure, encrypted connection using TLS/SSL. On top of all the TCP, there are additional handshakes for the encryption. For full security, you must encrypt before you send the GET because the contents of the URL name should be kept encrypted.
A simple alternative
Consider a protocol for simple transactions where the DNS server plays a role, and short transactions use UDP. I am going to call this the “Web Transaction Protocol” or WTP. (There is a WAP variant called that but WAP is fading.)
- You send, via a UDP packet, not just a DNS request but your full GET request to the DNS server you know about, either for .com or for example.com. You also include an IP and port to which responses to the request can be sent.
- The DNS server, which knows where the target machine is (or next level DNS server) forwards the full GET request for you to that server. It also sends back the normal DNS answer to you via UDP, including a flag to say it forwarded the request for you (or that it refused to, which is the default for servers that don’t even know about this.) It is important to note that quite commonly, the DNS server for example.com and the www.example.com web server will be on the same LAN, or even be the same machine, so there is no hop time involved.
- The web server, receiving your request, considers the size and complexity of the response. If the response is short and simple, it sends it via UDP to your specified address, in one packet if it fits, or a few if not. If no ACK is received in a reasonable time, it resends a few times until one arrives.
- When you receive the response, you send an ACK back via UDP. You’re done.
The above transaction would take place incredibly fast compared to the standard approach. If you know the DNS server for example.com, it will usually mean a single packet to that server, and a single packet coming back — one round trip — to get your answer. If you only know the server for .com, it would mean a single packet to the .com server which is forwarded to the example.com server for you. Since the master servers tend to be in the “center” of the network and are multiplied out so there is one near you, this is not much more than a single round trip.
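Nothing like this exists, but as a thought experiment, here is a minimal localhost sketch of the one-round-trip exchange. The message framing is invented, and it skips the retries, fragmentation, and security questions a real WTP would have to face:

```python
import socket
import threading

PAGE = b"<html>hello</html>"

def wtp_server(sock):
    # The DNS/web server receives the full GET in the first UDP packet
    # and returns the page body in a single UDP response.
    data, addr = sock.recvfrom(2048)
    if data.startswith(b"WTP GET"):
        sock.sendto(b"WTP OK " + PAGE, addr)
        ack, _ = sock.recvfrom(64)          # wait for the client's ACK
        assert ack == b"WTP ACK"

def wtp_get(server_addr, host, path):
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(2)
    # One packet out: the name lookup and the GET, combined.
    client.sendto(b"WTP GET %s %s" % (host.encode(), path.encode()),
                  server_addr)
    reply, addr = client.recvfrom(2048)     # one packet back: the page
    client.sendto(b"WTP ACK", addr)         # fire-and-forget acknowledgement
    client.close()
    return reply.removeprefix(b"WTP OK ")

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
threading.Thread(target=wtp_server, args=(srv,), daemon=True).start()

body = wtp_get(srv.getsockname(), "www.example.com", "/page.html")
print(body.decode())   # -> <html>hello</html>
```

Compare this to the TCP flow above: one packet out, one packet back, plus an acknowledgement that costs no extra round trip.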
Submitted by brad on Wed, 2009-10-28 22:03.
I just returned from Jeff Pulver’s “140 Characters” conference in L.A. which was about Twitter. I asked many people if they get Twitter — not if they understand how it’s useful, but why it is such a hot item, and whether it deserves to be, with billion dollar valuations and many talking about it as the most important platform.
Some suggested Twitter is not as big as it appears, with a larger churn than expected and some plateau appearing in new users. Others think it is still shooting for the moon.
The first value I found in Twitter was as a broadcast SMS. While I would not text all my friends when I go to a restaurant or a club, having a way for them to easily know that (and perhaps join me) is valuable. Other services have tried to do things like this, but Twitter is the one that succeeded, in spite of not being aimed at any specific application like this.
This explains the secret of Twitter. By being simple (and forcing brevity) it was able to be universal. By being more universal it could more easily attain critical mass within groups of friends. While an app dedicated to some social or location based application might do it better, it needs to get a critical mass of friends using it to work. Once Twitter got that mass, it had a leg up at being that platform.
At first, people wondered if Twitter’s simplicity (and requirement for brevity) was a bug or a feature. It definitely seems to have worked as a feature. By keeping things short, Twitter makes it less scary to follow people. It’s hard for me to get new subscribers to this blog, because subscribing to the blog means you will see my moderately long posts every day or two, and that’s an investment in reading. To subscribe to somebody’s Twitter feed is no big commitment. Thus people can get a million followers there, when no blog has that. In addition, the brevity makes it a good match for the mobile phone, which is the primary way people use Twitter. (Though usually the smart phone, not the old SMS way.)
And yet it is hard not to be frustrated at Twitter for being so simple. There are so many things people do with Twitter that could be done better by some more specialized or complex tool. Yet it does not happen.
Twitter has made me revise slightly my two axes of social media — serial vs. browsed and reader-friendly vs. writer friendly. Twitter is generally serial, and I would say it is writer-friendly (it is easy to tweet) but not so reader friendly (the volume gets too high.)
However, Twitter, in its latest mode, is something different. It is “sampled.” In normal serial media, you usually consume all of it. You come in to read and the tool shows you all the new items in the stream. Your goal is to read them all, and the publishers tend to expect it. Most Twitter users now follow far too many people to read it all, so the best they can do is sample — they come in at various times of day and find out what their stalkees are up to right then. Of course, other media have also been sampled, including newspapers and message boards, just because people don’t have time, or because they go away for too long to catch up. On Twitter, however, going away for even a couple of hours will give you too many tweets to catch up on.
This makes Twitter an odd choice as a publishing tool. If I publish on this blog, I expect most of my RSS subscribers will see it, even if they check a week later. If I tweet something, only a small fraction of the followers will see it — only if they happen to read shortly after I write it, and sometimes not even then. Perhaps some who follow only a few will see it later, or those who specifically check on my postings. (You can’t. Mine are protected, which turns out to be a mistake on Twitter but there are nasty privacy results from not being protected.)
TV has an unusual history in this regard. In the early days, there were so few stations that many people watched, at one time or another, all the major shows. As TV grew to many channels, it became a sampled medium. You would channel surf, and stop at things that were interesting, and know that most of the stream was going by. When the Tivo arose, TV became a subscription medium, where you identify the programs you like, and you see only those, with perhaps some suggestions thrown in to sample from.
Online media, however, and social media in particular were not intended to be sampled. Sure, everybody would just skip over the high volume of their mailing lists and news feeds when coming back from a vacation, but this was the exception and not the rule.
The question is, will Twitter’s nature as a sampled medium be a bug or a feature? It seems like a bug but so did the simplicity. It makes it easy to get followers, which the narcissists and the PR flacks love, but many of the tweets get missed (unless they get picked up as a meme and re-tweeted) and nobody loves that.
On Protection: It is typical to tweet not just blog-like items but the personal story of your day. Where you went and when. This is fine as a thing to tell friends in the moment, but with a public Twitter feed, it’s being recorded forever by many different players. The ephemeral aspects of your life become permanent.

But if you do protect your feed, you can’t do a lot of things on Twitter. What you write won’t be seen by others who search for hashtags. You can’t reply to people who don’t follow you. You’re an outsider. The only way to solve this would be to make Twitter really proprietary, blocking all the services that are republishing it, analysing it and indexing it. In this case, dedicated applications make more sense. For example, while location based apps need my location, they don’t need to record it for more than a short period. They can safely erase it, and still provide me a good app. They can only do this if they are proprietary, because if they give my location to other tools it is hard to stop them from recording it, and making it all public. There’s no good answer here.
Submitted by brad on Wed, 2009-09-23 17:50.
It seems that with more and more of the online transactions I engage in — and sometimes even when I don’t buy anything — I will get a request to participate in a customer satisfaction survey. Not just some of the time in some cases, but with every purchase. I’m also seeing it on web sites — sometimes just for visiting a web site I will get a request to do a survey, either while reading, or upon clicking on a link away from the site.
On the surface this may seem like the company is showing they care. But in reality it is just the marketing group’s thirst for numbers both to actually improve things and to give them something to do. But there’s a problem with doing it all the time, or most of the time.
First, it doesn’t scale. I do a lot of transactions, and in the future I will do even more. I can’t possibly fill out a survey on each, and I certainly don’t want to. As such I find the requests an annoyance, almost spam. And I bet a lot of other people do.
And that actually means that if you ask too much, you get a self-selected subset of people who either have lots of free time, or who have something pointed to say (i.e. they had a bad experience, or more rarely a very good one). So the more people you ask — or rather, the more refusals you generate — the less your survey is worth as data collection. Oddly, you will get more useful results asking fewer people.
Sort of. Because if other people keep asking everybody, it creates the same burn-out and even a survey that is only requested from 1 user out of 1000 will still see high rejection and self-selection. There is no answer but for everybody to truly only survey a tiny random subset of the transactions, and offer a real reward (not some bogus coupon) to get participation.
I also get phone surveys today from companies I have actually done business with. I ask them, “Do you have this survey on the web?” So far, they always say no, so I say, “I won’t do it on the phone, sorry. If you had it on the web I might have.” I’m lying a bit, in that the probability is still low I would do it, but it’s a lot higher. I can do a web survey in 1/10th the time it takes to get quizzed on the phone, and my time is valuable. Telling me I need to do it on the phone instead of the web says the company doesn’t care about my time, and so I won’t do it and the company loses points.
Sadly, I don’t see companies learning these lessons, unless they hire better stats people to manage their surveys.
Also, I don’t want a reminder from everybody I buy from on eBay to leave feedback. In fact, remind me twice and I’ll leave negative feedback if I’m in a bad mood. I prefer to leave feedback in bulk, that way every transaction isn’t really multiple transactions. Much better if ebay sends me a reminder once a month to leave feedback for those I didn’t report on, and takes me right to the bulk feedback page.
Submitted by brad on Sun, 2009-06-07 16:29.
Twenty years ago (Monday), on June 8th, 1989, I did the public launch of ClariNet.com, my electronic newspaper business, which would be delivered using USENET protocols (there was no HTTP yet) over the internet.
ClariNet was the first company created to use the internet as its platform for business, and as such this event has a claim at being the birth of the “dot-com” concept which so affected the world in the two intervening decades. There are other definitions and other contenders which I discuss in the article below.
In those days, the internet consisted of regional networks, who were mostly non-profit cooperatives, and the government funded “NSFNet” backbone which linked them up. That backbone had a no-commercial-use policy, but I found a way around it. In addition, a nascent commercial internet was arising with companies like UUNet and PSINet, and the seeds of internet-based business were growing. There was no web, of course. The internet’s community lived in e-Mail and USENET. Those, and FTP file transfer were the means of publishing. When Tim Berners-Lee would coin the term “the web” a few years later, he would call all these the web, and HTML/HTTP a new addition and glue connecting them.
I decided I should write a history of those early days, where the seeds of the company came from and what it was like before most of the world had even heard of the internet. It is a story of the origins and early perils and successes, and not so much of the boom times that came in the mid-90s. It also contains a few standalone anecdotes, such as the story of how I accidentally implemented a system so reliable, even those authorized to do so failed to shut it down (which I call “M5 reliability” after the Star Trek computer), stories of too-early eBook publishing and more.
There’s also a little bit about some of the other early internet and e-publishing businesses such as BBN, UUNet, Stargate, public access unix, Netcom, Comtex and the first Internet World trade show.
Extra, extra, read all about it: The history of ClariNet.com and the dawn of the dot-coms.
Submitted by brad on Sat, 2009-03-14 16:43.
As you may know, I allow anonymous comments on this blog. Generally, when a blog is small, you don’t want to do too much to discourage participation. Making people sign up for an account (particularly with email verification) is too much of a barrier when your comment volume is small. You can’t allow raw posting these days because of spammers — you need some sort of captcha or other proof-of-humanity — but in most cases moderate readership sites can allow fairly easy participation.
Once a site gets very popular, it probably wants to move to authenticated-user posting only. Once the comment forums get noisy, you want to raise the bar and discourage participation by people who are not serious. My sub-blog on Battlestar Galactica has gotten quite popular of late, and is attracting 100 or more comments per post, even though it has only 1/10th the subscribers of the main blog. Almost all post using the anonymous mechanism, which lets them fill in a name but does nothing to verify it. Many still post under the default name of “Anonymous.”
Some sites let you login using external IDs, such as OpenID, or accounts at Google or Yahoo. On this site, you can log in using any ID from the drupal network, in theory.
However, drupal (which is the software running this site) and most other comment/board systems are not very good at providing an intermediate state, which I will call “casual comments.” Here’s what I would like to see:
- Unauthenticated posters may fill in parameters as they can now (like name, email, URL) and check a box to be remembered. They would get a long-term cookie set. The first post would indicate the user was new.
- Any future posts from that browser would use that remembered ID. In fact, they would need to delete the cookie or ask the site to do so in order to change the parameters.
- If they use the cookie, they could do things like edit their postings and several of the things that registered users can do.
- If they don’t pick a name, a random pseudonym would be assigned. The pseudonym would never be re-used.
- Even people who don’t ask to be remembered would get a random pseudonym. Again, such pseudonyms would not be re-used by other posters or registered users. They might get a new one every time they post. Possibly it could be tied to their IP, though not necessarily traceable back to it, but of course IPs change at many ISPs.
- If they lose the cookie (or move to another computer) they can’t post under that name, and must create a new one. If they want to post under the same name from many machines, create an account.
- The casual commenters don’t need to do more special things like create new threads, and can be quite limited in other ways.
In essence, a mini-account with no authorization or verification. These pseudonyms would be marked as unverified in postings. A posting count might be displayed. A mechanism should also exist to convert the pseudonym to a real account you can login from. Indeed, for many sites the day will come when they want to turn off casual commenting if it is getting abused, and thus many casual commenters will want to convert their cookies into accounts.
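A sketch of the bookkeeping this would require (hypothetical names throughout; this is not actual drupal code): a long-lived cookie token maps to a lightweight unverified profile, and pseudonyms, once assigned, are never reused.

```python
import secrets

class CasualComments:
    """Sketch of 'casual comment' identities: a long-lived cookie maps to a
    lightweight, unverified profile, and pseudonyms are never reused."""

    def __init__(self):
        self.profiles = {}       # cookie token -> profile dict
        self.used_names = set()  # every name ever assigned, any kind

    def post(self, cookie, name=None):
        """Record a comment; returns (cookie, display name)."""
        if cookie in self.profiles:
            profile = self.profiles[cookie]        # known poster: keep name
        else:
            cookie = secrets.token_urlsafe(16)     # set as a long-term cookie
            if not name or name in self.used_names:
                # No name given, or name taken: assign a fresh pseudonym.
                name = "guest-" + secrets.token_hex(3)
            self.used_names.add(name)
            profile = {"name": name, "posts": 0, "verified": False}
            self.profiles[cookie] = profile
        profile["posts"] += 1
        return cookie, profile["name"]

site = CasualComments()
c1, n1 = site.post(None, "alice")    # first visit: new cookie, keeps "alice"
c2, n2 = site.post(c1, "mallory")    # same cookie: still posts as "alice"
c3, n3 = site.post(None, "alice")    # name taken: gets a fresh pseudonym
print(n1, n2, n3.startswith("guest-"))   # -> alice alice True
```

Losing the cookie means losing the name, exactly as described above; converting a profile to a real account would just attach credentials to the existing entry.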
The main goal would be to remove confusion over who is posting in anonymous postings, and to stop impersonation, or accusations of impersonation, among casual posters.
If I knew drupal better, I don’t think it would be too hard to write a module that modifies the comment system like this.
Submitted by brad on Mon, 2008-04-07 14:58.
Ok, admit it: who likes blogging into a vacuum? You want to know how many people are actually reading your blog.
I have created a simple Perl script that scans your blog’s log file and attempts to calculate how many people read the blog and the RSS feeds.
You can download the feed reader script. I release it under GPL2.
It’s a perl script, so you would go to your web server log in the shell, and type “perl feedreaders.pl logfilename”
or, if you like, just “tail -99999 logfilename | perl feedreaders.pl -” since you only need to scan a couple of days’ worth of logs to get the figures.
Here are some notes:
- I take advantage of the fact that most blog aggregators now report how many people they are aggregating for. There is no standard but I have put in code to match the common patterns.
- I identify common RSS feed URLs, as well as the most common “main feed” names. If you have other feeds that it doesn’t pick up on, it’s easy to add them to the list at the start of the program.
- A reader has to fetch the feed or home page multiple times from the same IP to count
- On the other hand, people who change IPs regularly will count multiple times. People behind caches may count just once all together.
- I try to eliminate fetches from the most common non-RSS-aggregating spiders
- Based on my experiences, Google Reader and Bloglines are the most popular aggregators, then NewsGator.
- At least one aggregator identifies itself as Mozilla; custom code tags it.
- It also counts people who fetch your non-RSS blog page multiple times as readers.
- Programs that don’t say they handle multiple users get grouped among the singles.
- Programs with only a few fetches are not counted
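The script itself is Perl, but the approach can be sketched in a few lines of Python. The log format, feed paths, and sample entries below are assumptions for illustration; the real script matches many more aggregator patterns than this.

```python
import re
from collections import defaultdict

# Many aggregators report their subscriber count in the User-Agent,
# e.g. "Feedfetcher-Google; (...; 5 subscribers; feed-id=99)".
SUBS = re.compile(r"(\d+)\s+(?:subscribers?|readers?)")
FEEDS = ("/rss.xml", "/atom.xml", "/index.rdf", "/feed")

LOG = re.compile(
    r'(\S+) \S+ \S+ \[.*?\] "GET (\S+)[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

def count_readers(lines):
    """Estimate readers from combined-format log lines: aggregators count as
    their reported subscribers; everyone else needs repeat fetches."""
    agents = {}                  # aggregator UA -> max reported subscribers
    singles = defaultdict(int)   # ip -> feed fetch count
    for line in lines:
        m = LOG.match(line)
        if not m:
            continue
        ip, path, ua = m.groups()
        if not path.startswith(FEEDS):
            continue
        sub = SUBS.search(ua)
        if sub:
            agents[ua] = max(agents.get(ua, 0), int(sub.group(1)))
        else:
            singles[ip] += 1
    # One-off fetchers are likely spiders, so only repeat IPs count once each.
    return sum(agents.values()) + sum(1 for n in singles.values() if n > 1)

sample = [
    '1.2.3.4 - - [01/Jan/2008:00:00:00 -0800] "GET /rss.xml HTTP/1.1" 200 9 '
    '"-" "Feedfetcher-Google; (+http://www.google.com/; 5 subscribers)"',
    '5.6.7.8 - - [01/Jan/2008:01:00:00 -0800] "GET /rss.xml HTTP/1.1" 200 9 '
    '"-" "Mozilla/5.0"',
    '5.6.7.8 - - [01/Jan/2008:09:00:00 -0800] "GET /rss.xml HTTP/1.1" 200 9 '
    '"-" "Mozilla/5.0"',
    '9.9.9.9 - - [01/Jan/2008:02:00:00 -0800] "GET /rss.xml HTTP/1.1" 200 9 '
    '"-" "SomeBot/1.0"',
]
print(count_readers(sample))   # -> 6 (5 reported subscribers + 1 repeat IP)
```

As the notes above warn, this is an estimate at best: shared caches undercount, and IP-hopping readers overcount.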
I invite my 1146 main blog readers to give it a whirl. (The 53 readers of the new Battlestar blog feed won’t see this notice, nor will the 72 reading the comments.)
Submitted by brad on Sun, 2008-04-06 17:07.
Recently, while keynoting the Freedom 2 Connect conference in Washington, I spoke about some of my ideas for fiber networks being built from the ground up. For example, I hope for the day when cheap kits can be bought at local stores to fiber up your block by running fiber through the back yards, in some cases literally burying the fiber in the “grass roots.”
Doc Searls, while he was listening to the talk, coined a clever term — “Glass Roots” — to describe this and other movements to deploy fiber from the bottom up, without waiting for telcos and city governments. Any time you can deploy a technology without permission and red tape, it quickly zooms ahead of other technology. Backyard fiber — combined with cheaper, mass-produced free-space optics or gigabit EHF radio equipment to bridge blocks together across streets or make links to hilltops — could provide the bandwidth we want without waiting.
Because let’s face it. While wireless ISPs sound great and are indeed great for serving some types of customers, right now real bandwidth requires a wire or glass fiber in the ground, and that means monopoly telcos and cable companies as well as the hassles of city government. We want our gigabits (forget megabits) and we want them now.
There are other elements to this Glass Roots movement, though usually with city involvement. Several small towns have put in fiber based ISPs with good success. My friend Brewster Kahle, from the Internet Archive, has brought 100 megabit service to housing projects in San Francisco using some city-laid fiber and the Archive’s bandwidth. You go, Brewster.
Brough Turner has the right idea. We should get dark fiber under our streets, and lots of it, installed and leased by a company that is only in the fiber business, and not in the business of selling you video or phone service or internet. While this company might get a franchise, the important difference is that the franchised monopoly would not light the fiber. Instead, anybody could lease a fiber from their house to a major switching point, and light it any way they want. Darth Vader would tell us “you don’t understand the power of the dark fiber.”
Why is that important? While fiber and wire are basic, the technologies to “light them up” run on Moore’s law. They get obsolete very quickly. Instead of monopoly rents and long cost-plus amortization tables, you want lots of turnover in the actual electronics found at the ends. You want the option to get the latest stuff, which is usually faster and cheaper than the stuff from 2 years ago. Lots faster and lots cheaper.
If you get a lot of free market competition on what lights those endpoints, it gets even better. The result is plenty of choice in how you light it and who you get connectivity from. And that eliminates all the issues around network neutrality or walled gardens. The investment in the dark fiber can probably be amortized over a decade or two, which is long enough.
One might argue the monopoly should even just be at the level of a conduit which it’s easy to drag other things like fiber or wire through. And indeed, whoever does bury pipes under the streets should expect to pull other wires before too long. But having monopoly lockdown at any level above the glass is what slows down the advance of broadband. Get rid of that lockdown, and the real glass roots revolution can begin.