Submitted by brad on Sun, 2013-12-15 23:07.
Here in Canada, a hot political issue (other than disgust with Rob Ford) is the recent plan by Canada Post to stop home delivery in cities. My initial reaction was, “Wow, I wish we could get that in the USA!” but it turns out all they are doing is making people go to neighbourhood mailboxes to get their mail. For many years, people in new developments have had to do this — they install a big giant mailbox out on the street, and you get a key to get your mail. You normally don’t walk further than the end of your block. However, this will save a lot of work — and eliminate a lot of jobs, which also has people upset.
But let me go back to my original reaction — I want to see home letter delivery abolished.
Why? Here is all that I, and most other people, get by mail:
- Junk mail (the vast bulk of the mail.)
- One or two magazines
- Bills and communications from companies that refuse to switch to all-electronic communication
- Official notices (from governments who refuse to switch to all-electronic communication)
- Cheques from companies who refuse to do direct deposit (see note below.)
- Parcels (lots of these, though many more from UPS/Fedex/etc.)
- A tiny and dwindling number of personal cards and letters. Perhaps 2-3 personal xmas cards.
The abolition of general mail delivery would force all those parties who refuse to do electronic communication to switch to it. The concept of an official e-mail address would arise. We would also need to see a better e-cheque service, something priced like a cheque (i.e. not PayPal, which takes 2% or more) and as easy to use (ACH is not there yet.) Being unable to mail a cheque would force such a service into existence.
A replacement for registered mail would need to arise — that is what is needed for legal service. Putting that into e-mail is doable though challenging, as it requires adding money to e-mail, because you want people to have to pay to use it so that you don’t get it all the time.
And of course, parcel service would continue. And people who really want to send a letter could send it via parcel service, but not for sub-dollar first class mail prices.
Magazines would have to go all-electronic. Some may not see the world ready for that, but I think the time is very near. Today, one can make cheap large tablets in the 14 to 17 inch size that would be great for magazines. They would be too heavy to handhold (though possibly if they had no batteries and used a small cord they could be light enough for that) but they could easily be held on laps and tables and replace the magazine.
Few would mourn the death of junk mail, though it might lead to more spam in e-mail boxes until that’s under control. Senders of junk mail (notably politicians) might mourn it.
So the only sad thing would be the loss of the dwindling supply of personal letters. People getting married could use the parcel companies or go electronic. Thank-you notes would go electronic, making Miss Manners spin in her grave, but spin she eventually will. Truth is, the parcel companies would probably start up a basic letter service priced higher than 1st class mail but less than their most basic parcel. The more addresses you can share the cost of a truck on, the better — until the deliverbots arrive, at least. This is not easy, though. The postal service got to use the economies of delivering several letters a day to your house, and this could pay for a person to walk the street with a bag full, while the parcel companies use trucks.
We all know this day is coming. The question is, can we do better if we force it, and shut down letter delivery sooner rather than later?
Submitted by brad on Mon, 2013-12-09 16:46.
It’s the bane of the wanderer. A large fraction of open Wifi access points don’t connect you to the internet, but instead want you to login somehow. They do this by redirecting (hijacking) any attempt to fetch a web page to a login or terms page, where you either have to enter credentials, or just click to say you agree to the terms of service. A few make you watch an ad. It’s sometimes called a captive portal.
I’m going to contend that these hijack screens are breaking a lot of things, and probably not doing anybody — including portal owners — any good.
The terms of service generally get you to declare you will be a good actor. You won’t spam or do anything illegal. You won’t download pirated content or join torrents of such content. You waive rights to sue the portal. Sometimes you have to pay money or show you are a hotel guest or have an access card.
These screens are a huge inconvenience, and often worse than that. All sorts of things go wrong when they are in place:
- Until you do the login with the browser, your other apps, like e-mail, don’t work, even though it looks like the internet is there.
- With devices that don’t have keyboards, like Google Glass, you can’t use the network at all!
- Some redirect you from the link you wanted, and don’t pass you on to it after you log in; you have to type it in again.
- If you go to a secure URL (https) some of them attempt an insecure redirect and cause browser security warnings. They look like a hijack because they are a hijack! This trains people to be more tolerant of browser security warnings, and breaks tools that try to improve your security and stop more malicious hijacks properly.
- Some, for “security,” block the remembering of credentials, so you must retype them at every login.
- Really bad ones time-out quickly, and make you repeat the login process every time you suspend your laptop, and worse, every time you turn off and turn on your phone — making the network almost unusable. Almost all require re-login one or two times a day — still very annoying.
- Every so often the login systems are broken on mobile browsers, locking out those devices.
A lot of headaches. And one can perhaps understand the need for this when you must pay for the network or only authorized users are allowed in, though WPA passwords are much better for that because they need only one-time setup and also offer security on the wireless connection.
With all this pain, the question the world needs to answer is, “is it worth it?” What is the value of this hijack and “I agree” terms page? Nobody reads the terms, and people who connect intending to spam or do other bad things will happily agree to them, ignore them, and do it all anonymously, leaving no way to punish them for violating the terms. This is not to say that certain entities have not desired to actually find users of open Wifi networks and try to enforce terms on them, but this is extremely rare and almost certainly not desirable to most access point operators.
There are thus just a few remaining purposes for the hijack screen.
If you want to charge money, you might need a login screen. I don’t deny the right of a provider to ask for money, but there are different ways to do it. There are a variety of aggregator networks (such as Boingo and FON) which will handle billing. Their app is already installed on the user’s device, and it authenticates and handles billing (mostly) seamlessly for the user. The very common Skype application is one of these, and people pay from their Skype credit accounts. Of course, you may not like Skype’s rates or the cut it takes, so this may not be enough.
You might also want to consider why you are charging the money. If bandwidth is very expensive, I can see it, but it’s not been uncommon to find some sites like cafes saying they charge — I kid you not — because the whole system, including the charging gateway, is expensive to run. A cheap free gateway would have been much more affordable. Many operators decide that it’s worth it to offer it free, since it draws people in to restaurants, cafes and hotels. Cheap hotels usually give free Wifi — only expensive hotels put on fat charges.
It could be that your real goal is just to get attention…
Letting them know who provided the Wifi
I’ve seen a number of gateways that primarily seem to exist just to let you know who provided the gateway. Very rarely (I’ve mostly seen this at airports) they will make you watch a short ad to get your free access. They break a lot of stuff to do this. The SSID name is another way to tell them, though of course it’s not nearly as satisfactory.
Reducing the amount of usage
There is a risk that fully open networks will get overused by guests, and often thanklessly, too. You may be afraid your neighbours will realize they don’t need to buy internet at all, and can just use your open network. Here, making it hard to use and broken is a feature, not a bug. If you have to go through the hijack every so often it’s a minor burden to cafe patrons but a bigger annoyance to overusing neighbours. Those neighbours can play tricks, like using programs that do automatic processing of hijack gateways, but not too many do. They can also change their MAC addresses to get past restrictions based on that. You can do MAC limiting without a hijack screen, and it’s a great way to do it, possibly saving the hijack for after they reach the limit, not using it at the start. Clever abusers can change their MACs, though again most people don’t.
Covering your ass
The large number of complex terms of service suggests that people believe, or have been told, that it is essential to keep themselves covered in case a user of open Wifi does something bad, such as spamming, violating copyrights or even nastier stuff. They figure that if they make users agree to terms of service that forbid such things, this absolves them of any responsibility for the bad actions, and even, just maybe, offers a way to go after the unwanted guest.
Turns out that there is much less need to cover your ass in this situation, at least in the USA. You aren’t liable for copyright infringement by your guests if you did not encourage it. Thanks to the DMCA and CDA rules, you are probably not liable for a lot of other stuff these unwanted guests might do.
I am interested to hear reports from anybody of how they used the fact that Wifi guests had to agree to terms of service to protect themselves in an actual legal action. I have not heard of any, and I suspect there are few. It would be a great shame to confirm that everybody is breaking their networks in hope of a protection that’s actually meaningless.
It is true that you can get in real world trouble for what your unwanted guests do. If they violate copyrights, you might be the one getting the nasty letter from the copyright holder. The fact that you are not actually liable may not be much comfort when you are faced with taking the time and cost to point that out. Often these lawsuits come with offers to settle for less than the cost of consulting a lawyer on the matter. Naturally, those interested in violating copyrights are unlikely to be all that worried that they clicked on a contract that promised they wouldn’t. This is just a risk of an open network.
Likewise, if they send spam over your network, you may find yourself on spam-blocking blacklists run by people who don’t care that it wasn’t you who did the spamming. Those vigilante groups run by their own rules. Again, the contract isn’t much protection. You may instead want to look to technical measures, including throttling certain ports or putting bandwidth limits on guests. (It is better if you can throttle rather than cut off, since your guests probably do need to send e-mail, just not thousands of messages.)
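For the throttling idea, here is a minimal sketch (all names and rates invented for illustration) of how a gateway might limit outbound SMTP per guest rather than cutting anyone off. Each MAC address gets a token bucket, so a guest can still send e-mail, just not thousands of messages:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allows `capacity` connections, refilling slowly over time."""
    def __init__(self, capacity=20, refill_per_hour=20):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.rate = refill_per_hour / 3600.0   # tokens per second
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False   # throttle: delay the connection, don't drop the guest

buckets = defaultdict(TokenBucket)

def permit_smtp(mac_address):
    # Called by the (hypothetical) gateway when a guest opens port 25.
    return buckets[mac_address].allow()
```

The same structure works for bulk bandwidth: count bytes instead of connections, and slow the guest down when the bucket runs dry.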
Towards a protocol for open guest Wifi
How could we do this better? In part two I talk about how to have secure open Wifi and the problems in doing that. Part three will talk about how to make it easy to connect to any of these networks automatically.
Submitted by brad on Tue, 2013-09-03 20:18.
I’m back from Burning Man, and this year, for the first time in a while, we didn’t get internet up in our camp, so I only did occasional email checks while wandering other places. And thus, of course, there are many hundreds of messages backed up in my box to get to. I will look at the most important but some will just be ignored or discarded.
We all know it’s getting harder and harder to deal with email backlog after travel, even connected travel. If you don’t check in it gets even worse. Vacation autoreplies can help a little, but I think they are no longer enough.
Some years ago a friend tried something radical. He had his autoreply say that he was away for 2 months and could not possibly handle the email upon his return. It said the email you had sent had thus been discarded, and that if it was still important when he returned, you should send it again then. His correspondents were completely furious at the temerity of this action, though it has a lot of attractions. They had taken the time to write an email, and to have it discarded and left in their hands to resend seemed rude. (I believe the reply included a copy of the email at least.)
Worse, because we are always connected, vacation replies sometimes lie. People scan their email even on vacation, responding to the most important messages if they can, even though an autoreply was sent. And so senders always hope for that.
I think the time has come for an extra internet protocol as a companion to mail. When you type an E-mail address into your mail client, it should be able to look up a server that handles such information for that domain — something like an MX record — and query it about the email that is about to be written, including the sender address, the recipient address and possibly a priority. If the recipient is in a vacation mode or other do-not-disturb mode, the sender would be told immediately, before writing the e-mail. They would have the option of not writing it, writing it for delivery at the designated date in the future, or writing it with various tags about its urgency in case the recipient is doing some checking of mail.
This could be an LDAP derived protocol or something else. Indeed it could be combined, when trusted, with directory lookup and autocomplete directory services. It’s not easy because often (with things like MX) the server that handles mail for a user may not have a strong link to the user in order to serve this data. In the old e-mail regime of store and forward, live connections were not expected. Still, I think it can be done, and it would not be a mandatory thing.
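As a rough sketch of what the pre-send query could look like, assume (purely hypothetically; no such standard exists) that each domain publishes a small status service, discovered the way MX records are, that answers questions about a recipient before any mail is composed:

```python
import json
import urllib.request

def query_recipient_status(sender, recipient):
    """Ask the recipient's domain about availability before composing.

    A hypothetical reply might be:
      {"status": "vacation", "until": "2013-09-10", "token": "abc123"}
    where the token could later go into a mail header to tie the
    message back to the query.
    """
    domain = recipient.split("@", 1)[1]
    # Invented discovery scheme; a real protocol might use a new DNS
    # record type instead of a well-known host name.
    url = "https://mail-status.%s/query" % domain
    payload = json.dumps({"sender": sender,
                          "recipient": recipient}).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)
```

The client would call this as the address is typed, and offer the choices above (don't write, delay delivery, tag urgency) based on the reply.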
There are some security and privacy implications here that are challenging:
- Spammers will try to use this information to confirm addresses or hunt for them
- This lets the recipient know if somebody just typed in their name to send mail, and when they did so, and thus how long they took to write a mail, or if they aborted one. To avoid this, the directory servers could be trusted 3rd parties.
- This provides a reliable IP address for the sender’s client, or at least a proxy acting for the sender.
- It could be misused to build a general database of many people’s vacation status, invading their privacy, unless there are tools to prevent broad spidering of this sort.
Mail servers would remember who queried, and indeed senders might be encouraged to include a header in the email tying it to the query. This would allow clients to know who queried and who did not, giving priority to messages from people who queried and acted upon the result (for example, waiting to send) over those who just sent mail without checking. Users could get codes allowing them to declare a message higher (or lower) priority, codes that would not be available to those who just did plain SMTP.
Mailing lists might also make use of this data, and the response could tell mailing lists what the user wants to do, including temporarily unsubscribing until a given date, or asking for a digest of threads to be sent upon return, or other useful stuff. Responsible corporate bulk mailers could also accept that you don’t want customer satisfaction surveys or useful coupon offers during your vacation and just not send them. Ok, I’m dreaming on that one, perhaps.
For security, it could be that only past correspondents could do this query, or only users with some amount of authentication. Anonymous email and mail from strangers would still be possible, but not with a pre-query. The response could also be sent back via a special email that servers know to intercept, so it can’t be used to gain information that would not be gained by mailing a person today. (You could get a report of people who queried you and never mailed you when not on vacation.)
We might see some features in mailers, like a pop-up that says, “Brad just started writing you a message,” the way instant messaging programs do. I am not sure this is a good idea, but it would happen. Readers: what other consequences do you see happening?
Submitted by brad on Sun, 2013-06-30 12:50.
Yahoo announced that in a few days they will shut down the Alta Vista web site. This has prompted a few posts on the history of internet search, to which I will add an anecdote.
The first internet search engine predated the “web” and was called Archie (as in “archive” search). Archie was basic by today’s standards. The main protocol for getting files on the internet in those days was FTP. Many sites ran an open FTP server, which you could connect to and download files from. If you had files or software to share with people, you put them up on an FTP server, in particular one that allowed anonymous login to get public files. The Archie team (from Montreal) built a tool to go to all the open servers, read their indexes and generate a database. You could then search, and get a pointer to all the places you could get a file. It was hugely popular for the day.
(You will probably note that this is almost exactly the way Napster worked, the only difference being that Napster was a bit more sophisticated and people used it to share files that were copyrighted. FTP servers had copyrighted material, but mostly they had open source software and documents.)
Around the same time, a lot of folks were building full-text search engines for use on large collections of documents. You could find these on private databases around the world, and the WAIS protocol was developed by Brewster Kahle to make a standardized interface to text search and his own text search tools.
Not long after the web started to grow, Fuzzy Mauldin at CMU made Lycos, a full-text search engine applied to documents gathered from the web. The ability to search the web generated much attention, and a few other competing spiders and search engines appeared. Everybody had a favourite. (To add to my long list of missed opportunities, in April of ’95 I wrote a few notes to Fuzzy looking to get his spider index so we could sort web pages based on how many incoming links they had. Nothing ever came of that, but as you may know, that concept later had some value. :-) I also turned down a $4M offer from Lycos to buy ClariNet, which would have turned into $40M when their stock shot up in the bubble. Sigh.
In 1995, for many people that favourite changed to Alta Vista, a new search engine from Digital Equipment Corp. DEC was a huge name at the time, the biggest name in minicomputers, and it was just losing the Unix crown to Sun. The team at DEC put a lot of computing power into Alta Vista, and so it had two useful attributes: it spidered a lot more pages, and thus was more likely to find stuff, and it was fast compared to most of the other engines. In a precursor to other rapid turnarounds in the internet business, you could switch your favourite search engine in a heartbeat, and many did. DEC eventually justified the money it was spending (there was no revenue for search in those days) by saying Alta Vista showed off just how powerful DEC’s computers with big address spaces were. Indeed the limits of Alta Vista were the limits of the architecture, using the 64-bit Alpha to address 130GB of RAM and 500GB of disk — huge for the day.
On Alta Vista’s home page, they gave you a sample query to type in the search box, to show you how to use it. That query was:
kayak sailing “san juan islands”
Indeed, if you typed that, you got a nice array of pages which talked about kayaking up in the San Juan islands, tour operators, etc. — just what you wanted to get from a query.
My devious mind wondered, “what if I put up a page on my own web site with this as the title?” I created the Kayak Sailing “San Juan Islands” home page on the rec.humor.funny site, which was already a very popular site in those days. (Indeed it’s around 1995 that RHF fell behind Yahoo as the most widely read thing on the internet, but that refers to the USENET group, not the page.)
You will note as you look at the page that it contains the words in the title and headers, repeated many times in invisible comments. In those days the search engines ranked pages higher simply based on where words appeared and whether they were repeated many times. So I gave it a whirl. This was an early attempt at what is now called “black hat search engine optimization,” though I was doing it for fun rather than nefarious gain.
The results didn’t change right away, though. Alta Vista relied on huge computer power, but it rebuilt its index only periodically; it would be a month or more before Alta Vista recalculated it. One day I went to type in the query and bingo — there was my page on the first page of search results. Along with a dozen other people who had tried the same thing, and a few pages that were articles writing about Alta Vista and giving the example query, or which were copying its search page, which of course contained that string.
More to the point, not a single item on the results page was about actual kayaking! The sample query was ruined, though the results were quite amusing. Not long after, Alta Vista changed the example to Pizza “deep dish” Chicago, and of course I added that to my page as well. Not much later, AV switched to showing different examples from a rotating and changing collection, so people could not play this game any more.
While Alta Vista ruled search, in spite of efforts from Infoseek, Inktomi/HotBot and others, we all know that a few years later Google was born at Stanford. Google proved again how quickly people could switch to a new favourite search engine, and it lives under that fear (but with great success) to this day. And Google’s dominance turned SEO into a giant industry.
Submitted by brad on Sat, 2013-04-13 11:26.
Bitcoin is having its first “15 minutes” with the recent bubble and crash, but Bitcoin is pretty hard to understand, so I’ve produced this analogy to give people a deeper understanding of what’s going on.
It begins with a group of folks who take a different view on several attributes of conventional “fiat” money. It’s not backed by any physical commodity, just faith in the government and central bank which issues it. In fact, it’s really backed by the fact that other people believe it’s valuable, and you can trade reliably with them using it. You can’t go to the US treasury with your dollars and get very much directly, though you must pay your US tax bill with them. If a “fiat” currency faces trouble, you are depending on the strength of the backing government to do “stuff” to prevent that collapse. Central banks in turn get a lot of control over the currency, and in particular they can print more of it any time they think the market will stomach such printing — and sometimes even when it can’t — and they can regulate commerce and invade privacy on large transactions. Their ability to set interest rates and print more money is both a bug (that has sometimes caused horrible inflation) and a feature, as that inflation can be brought under control and deflation can be prevented.
The creators of Bitcoin wanted to build a system without many of these flaws of fiat money, without central control, without anybody who could control the currency or print it as they wish. They wanted an anonymous, privacy protecting currency. In addition, they knew an open digital currency would be very efficient, with transactions costing effectively nothing — which is a pretty big deal when you see Visa and Mastercard able to sustain taking 2% of transactions, and banks taking a smaller but still real cut.
With those goals in mind, they considered the fact that even the fiat currencies largely have value because everybody agrees they have value, and the value of the government backing is at the very least, debatable. They suggested that one might make a currency whose only value came from that group consensus and its useful technical features. That’s still a very debatable topic, but for now there are enough people willing to support it that the experiment is underway. Most are aware there is considerable risk.
Update: I’ve grown less fond of this analogy and am working up a superior one, closer to the reality but still easy to understand.
Bitcoins — the digital money that has value only because enough people agree it does — are themselves just very large special numbers. To explain this I am going to lay out an imperfect analogy using words, describing “wordcoin” as it might exist in the pre-computer era. The goal is to help the less technical understand some of the mechanisms of a digital crypto-based currency, and thus be better able to join the debate about them.
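As a taste of what “very large special numbers” means, here is a deliberately loose illustration (and only an illustration; Bitcoin's actual coin and mining formats differ in many details). The flavour is numbers that are expensive to find but instant for anyone to verify:

```python
import hashlib

def find_special_number(prefix="0000"):
    """Search for a number whose SHA-256 hash starts with `prefix`.

    Finding one takes many tries; checking a claimed find takes one
    hash. That asymmetry is the heart of proof-of-work systems.
    """
    n = 0
    while True:
        digest = hashlib.sha256(str(n).encode()).hexdigest()
        if digest.startswith(prefix):
            return n, digest
        n += 1

number, proof = find_special_number()
print("special number:", number, "hash:", proof)
```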
Submitted by brad on Thu, 2013-03-21 22:37.
Earlier in part one I examined why it’s hard to make a networked technology based on random encounters. In part two I explored how V2V might be better achieved by doing things phone-to-phone.
For this third part of the series on connected cars and V2V I want to look at the potential for broadcast data and other wide area networking.
Today, the main thing that “connected car” means in reality is cell phone connectivity. That began with “telematics” — systems such as OnStar — but has grown to using data networks to provide apps in cars. The ITS community hoped that DSRC would provide data service to cars, and that this would be one reason for people to deploy it, but the cellular networks took that over very quickly. Unlike DSRC, which is, as the name says, short range, the longer range of cellular data means you are connected most of the time, and all of the time in some places, and people will accept nothing less.
I believe there is a potential niche for broadcast data to mobile devices and cars. This would be a high-power shared channel. One obvious way to implement it would be to use a spare TV channel and the new ATSC-M/H mobile standard. ATSC provides about 19 megabits per second. Because TV channels can be broadcast with very high power transmitters, they reach almost everywhere in a large region around the transmitter. For broadcast data, that’s good.
Today we use the broadcast spectrum for radio and TV. Turns out that this makes sense for very popular items, but it’s a waste for homes, and largely a waste for music — people are quite satisfied instead with getting music and podcasts that are pre-downloaded when their device is connected to wifi or cellular. The amount of data we need live is pretty small — generally news, traffic and sports. (Call in talk shows need to be live but their audiences are not super large.)
A nice broadcast channel could transmit a lot of data of interest to cars (one such message is sketched in code after this list):
- Timing and phase information on all traffic signals in the broadcast zone.
- Traffic data, highly detailed
- Alerts about problems, stalled vehicles and other anomalies.
- News and other special alerts — you could fit quite a few voice-quality station streams into one 19 megabit channel.
- Differential GPS correction data, and even supplemental GPS signals.
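To make the first item concrete, here is a sketch of what a broadcast signal-timing message might carry. The fields are invented for illustration; this is not any actual ATSC or traffic-signal standard:

```python
from dataclasses import dataclass

@dataclass
class SignalPhaseMessage:
    """Hypothetical broadcast record for one approach to an intersection.

    Because changes are known seconds in advance, the message simply
    announces the absolute time of the next change; receivers need no
    low-latency uplink.
    """
    intersection_id: int
    approach: str           # e.g. "northbound"
    current_phase: str      # "red", "yellow" or "green"
    next_phase: str
    change_at_unix_ms: int  # when the change will happen

# Rough capacity check: 2,000 intersections each sending a ~50-byte
# message every second is 0.8 megabits/second, about 4% of the channel.
```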
The latency of the broadcast would be very low, of course, but what about the latency of uploaded signals? This turns out not to be a problem for traffic lights, because they don’t change suddenly on a few milliseconds’ notice, even if an emergency vehicle is sending them a command to change. If you know the signal is going to change 2 seconds in advance, you can transmit the time of the change over a long-latency channel. If need be, a surprise change can even be delayed until the ACK is seen on the broadcast channel, within certain limits. Most emergency changes have many seconds before the light needs to change.
Stalled car warnings also don’t need low latency. If a car finds itself getting stalled on the road, it can send a report of this over the cellular modem that’s already inside so many cars (or over the driver’s phone.) This may take a few seconds to get into the broadcast stream, but then it will be instantly received. A stalled car is a problem that lasts minutes, you don’t need to learn about it in the first few milliseconds.
Indeed, this approach can even be more effective. Because of the higher power of the radios involved, information can travel between vehicles in places where line of sight communications would not work, or would actually only work later than the server-relayed signal. This is even possible in the “classic” DSRC example of a car running a red light. While a line of sight communication of this is the fastest way to send it, the main time we want this is on blind corners, where LoS may have problems. This is a perfect time for those longer range, higher power communications on the longer waves.
Most phones don’t have ATSC-M/H and neither do cars. But receiver chips for this are cheap and getting cheaper, and it’s a consumer technology that would not be hard to deploy. However, this sort of broadcast standard could also be done in the cellular bands, at some cost in bandwidth for them.
19 megabits is actually a lot, and since traffic incidents and light changes are few, a fair bit of bandwidth would be left over. It could be sold to companies who want a cheaper way to update phones and cars with more proprietary data, including map changes, their own private traffic and so on. Anybody with a lot of customers might find this more efficient. Very popular videos and audio streams for mobile devices could also use the extra bandwidth. If only a few people want something, point to point is the answer, but once something is wanted by many, broadcast can be the way to go.
What else might make sense to broadcast to cars and mobile phones in a city? While I’m not keen to take away some of the nice whitespaces, there are many places with lots of spare channels if designed correctly.
Submitted by brad on Mon, 2013-03-18 16:28.
Last week, I began in part 1 by examining the difficulty of creating a new network system in cars when you can only network with people you randomly encounter on the road. I contend that nobody has had success in making a new networked technology when faced with this hurdle.
This has been compounded by the fact that the radio spectrum at 5.9GHz which was intended for use in short range communications (DSRC) from cars is going to be instead released as unlicensed spectrum, like the WiFi bands. I think this is a very good thing for the world, since unlicensed spectrum has generated an unprecedented radio revolution and been hugely beneficial for everybody.
But surprisingly it might be something good for car communications too. The people in the ITS community certainly don’t think so. They’re shocked, and see this as a massive setback. They’ve invested huge amounts of efforts and careers into the DSRC and V2V concepts, and see it all as being taken away or seriously impeded. But here’s why it might be the best thing to ever happen to V2V.
The innovation in mobile devices and wireless protocols of the last 1-2 decades is a shining example to all technology. Compare today’s mobile handsets with 10 years ago, when the Treo was just starting to make people think about smartphones. (Go back a couple more years and there weren’t any smartphones at all.) Every year there are huge strides in hardware and software, and as a result, people are happily throwing away perfectly working phones every 2 years (or less) to get the latest, even without subsidies. Compare that to the electronics in cars. There is little in your car that wasn’t planned many years ago, and usually nothing changes over the 15-20 year life of the car. Car vendors are just now toying with the idea of field upgrades and over-the-air upgrades.
Car vendors love to sell you fancy electronics for your central column. They can get thousands of dollars for the packages — packages that often don’t do as much as a $300 phone and get obsolete quickly. But customers have had enough, and are now forcing the vendors to give up on owning that online experience in the car and ceding it to the phone. They’re even getting ready to cede their “telematics” (things like OnStar) to customer phones.
I propose this: Move all the connected car (V2V, V2I etc.) goals into the personal mobile device. Forget about the mandate in cars.
The car mandate would have started getting deployed late in this decade. And it would have been another decade before deployment got seriously useful, and another decade until deployment was over 90%. In that period, new developments would have made all the decisions of the 2010s wrong and obsolete. In that same period, personal mobile devices would have gone through a dozen complete generations of new technology. Can there be any debate about which approach would win?
Submitted by brad on Fri, 2013-03-15 16:18.
The blogging world was stunned by the recent announcement by Google that it will be shutting down Google Reader later this year. Due to my consulting relationship with Google I won’t comment too much on their reasoning, though I will note that I believe it’s possible the majority of regular readers of this blog, and many others, come via Google Reader, so this shutdown has a potentially large effect here. Of particular note is Google’s statement that usage of Reader has been in decline, and that social media platforms have become the way to reach readers.
The effectiveness of those platforms is strong. I have certainly noticed that when I make blog posts and put up updates about them on Google Plus and Facebook, it is common that more people will comment on the social network than comment here on the blog. It’s easy, and indeed more social. People tend to comment in the community in which they encounter an article, even though in theory the most visibility should be at the root article, where people go from all origins.
However, I want to talk a bit about online publishing history, including USENET and RSS, and the importance of concepts within them. In 2004 I first commented on the idea of serial vs. browsed media, and later expanded this taxonomy to include sampled media such as Twitter and social media in the mix. I now identify the following important elements of an online medium:
- Is it browsed, serial or to be sampled?
- Is there a core concept of new messages vs. already-read messages?
- If serial or sampled, is it presented in chronological order or sorted by some metric of importance?
- Is it designed to make it easy to write and post or easy to read and consume?
Online media began with E-mail and the mailing list in the 60s and 70s, with the 70s seeing the expansion to online message boards including PLATO, BBSs, CompuServe and USENET. E-mail is a serial medium. In a serial medium, messages have a chronological order, and there is a concept of messages that are “read” and “unread.” A good serial reader, at a minimum, has a way to present only the unread messages, typically in chronological order. You can thus process messages as they came, and when you are done with them, they move out of your view.
E-mail largely is used to read messages one at a time, but the online message boards, notably USENET, advanced this with the idea of moving messages from unread to read in bulk. A typical USENET reader presents the subject lines of all threads with new or unread messages. The user selects which ones to read — almost never all of them — and after this is done, all the messages, even those that were not actually read, are marked as read and not normally shown again. While it is generally expected that you will read all the messages in your personal inbox one by one, with message streams it is expected you will only read those of particular interest, though this depends on the volume.
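The bookkeeping behind this bulk marking was simple and compact. Here is a sketch in the style of the .newsrc file USENET readers kept, where read articles are stored as numeric ranges and “catch up” is one cheap operation:

```python
def catch_up(highest):
    """Mark everything from 1 to `highest` as read, in one step."""
    return [(1, highest)]

def is_read(ranges, article):
    return any(lo <= article <= hi for lo, hi in ranges)

def unread(ranges, low, high):
    """Yield unread article numbers in [low, high] for display."""
    for n in range(low, high + 1):
        if not is_read(ranges, n):
            yield n

# e.g. a .newsrc-style line "1-2000,2003-2010" becomes:
ranges = [(1, 2000), (2003, 2010)]
print(list(unread(ranges, 1995, 2015)))   # [2001, 2002, 2011, ..., 2015]
```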
Echoes of this can be found in older media. With the newspaper, almost nobody would read every story, though you would skim all the headlines. Once done, the newspaper was discarded, even the stories that were skipped over. Magazines were similar but being less frequent, more stories would be actually read.
USENET newsreaders were the best at handling this mode of reading. The earliest ones had keyboard interfaces that allowed touch typists to process many thousands of new items in just a few minutes, glancing over headlines, picking stories and then reading them. My favourite was TRN, based on RN by Perl creator Larry Wall and enhanced by Wayne Davison (whom I hired at ClariNet in part because of his work on that.) To my great surprise, even as the USENET readers faded, no new tool emerged capable of handling a large volume of messages as quickly.
In fact, the 1990s saw a switch for most to browsed media. Most web message boards were quite poor and slow to use; many did not even do the most fundamental thing of remembering what you had read and offering a “what’s new for me?” view. In reaction to the rise of browsed media, people wishing to publish serially developed RSS. RSS was a bit of a kludge, in that your reader had to regularly poll every site to see if something was new, but outside of mailing lists, it became the most usable way to track serial feeds. In time, people also learned to like doing this online, using tools like Bloglines (which became the leader and then foolishly shut down for a few months) and Google Reader (which also became the leader and now is shutting down.) Online feed readers allow you to roam from device to device and read your feeds, and people like that.
Submitted by brad on Tue, 2013-02-12 11:47.
Interesting article about a new plan for mesh networking Android phones if the cell network fails. I point this out because of another blog post of mine from 2005 on a related proposal from Klein Gilhousen that he was pushing after Katrina.
The wifi mesh has the problem that wifi range is not going to get much better than 30-40m, so you need a very serious density of phones to get a real mesh going, especially to route IP as this plan wishes to. Klein’s plan was to have the phones mesh over the wireless bands that were going unused when the cell networks were dead (or absent in the wilderness.) The problem with his plan was that phone transceivers tend not to be able to transmit and receive on the same bands; they need a cell tower. He proposed new generations of phones be modified to allow that.
But it hasn’t happened, in spite of being an obviously valuable thing in disasters. Sure there are some interference issues at the edges of legitimate cell nets, but they could be worked out. Cell phones are almost exclusively sold via carriers in many countries, including the USA, and the carriers haven’t felt it a priority to push for phones that can work without them.
I suspect trying to route voice or full IP is also a mistake, especially for a Katrina-like situation. There, the older network technologies of the world, designed for very intermittent connectivity, make some sense. A network designed to send short text messages, a “short message service” if you will, using mesh principles combined with store and forward, could make sure texts got to and from a lot of places. You might throw in small photos so trapped people could do things like send photos of wounds to doctors.
Today’s phones have huge amounts of memory. Phones with gigabytes of flash could store tens to hundreds of millions of passing (compressed and encrypted) texts until word got out that a text had been delivered. Texts could hop during brief connections, and airplanes, blimps and drones could fly overhead doing brief data syncs with people on the ground. (You would not send every text to every phone, but every phone would know how many hops it has recently been from the outside, and you could always send upstream.) A combination of cell protocols when far and wifi when close (or to those airplanes) could get decent volumes of data moving.
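Here is a sketch of that “send upstream” idea in code; the class and all its fields are hypothetical:

```python
class MeshPhone:
    """Store-and-forward node: texts flow toward whoever is closest
    to the outside world (a working tower, or an aircraft overhead)."""

    def __init__(self, phone_id):
        self.phone_id = phone_id
        self.hops_to_outside = float("inf")   # unknown until some contact
        self.outbox = []                      # compressed, encrypted texts

    def meet(self, peer):
        """Brief radio contact with another phone."""
        # Learn a better hop count if the peer is closer to the outside.
        self.hops_to_outside = min(self.hops_to_outside,
                                   peer.hops_to_outside + 1)
        # Hand over stored texts only if the peer is strictly closer.
        if peer.hops_to_outside < self.hops_to_outside:
            peer.outbox.extend(self.outbox)
            self.outbox.clear()

    def reach_outside(self):
        """A tower or data-sync aircraft: deliver everything held."""
        self.hops_to_outside = 0
        delivered, self.outbox = self.outbox, []
        return delivered
```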
Phones would know if they were on their own batteries, or plugged into a car or other power source, and the ones with power would advertise they can route long term. It would not be perfect but it would be much better than what we have now.
But the real lament is that, as fast as the pace of change is in some fields of mobile, here we are 7.5 years after Katrina, having seen several other disasters that wiped out cell nets, and nothing much has changed.
Submitted by brad on Thu, 2012-05-17 11:10.
Like most people, I have a lot of different passwords in my brain. While we really should have used a different system from passwords for web authentication, that’s what we are stuck with now. A good general policy is to use the same password on sites you don’t care much about, and more specific passwords on sites where real harm could be done if somebody knows your password, such as your bank or email.
The problem is that over time you develop many passwords, and sometimes your browser does not remember them for you. So you go back to a site and try to log in, and you end up trying all your old common passwords. The problem: At many sites, if you enter the wrong password too many times, they lock you out, or at least slow you down. That’s not unwise on their part, but a problem for you.
One solution: Sites can remember hashes of your old passwords. If you type in an old password, they can say, “No, that used to be your password but you have a new one now.” And not count that as a failed attempt by a password cracker. This adds a very slight risk, in that it lets a very specific attacker who knows you super well get a few free hits if they have managed to learn your old passwords. But this risk is slight.
Of course they should store a hash of the password, not the actual password. No site should store the actual password. If a site can offer to mail you your old password rather than offering a link to reset the password, it means they are keeping it around. That’s a security risk for you, and also means if you use a common password on such sites, they now know it and can log in as you on all the other sites you use that password at. Alas, it’s hard to tell when creating an account whether a site stores the password or just a hash of it. (A hash allows them to tell if you have typed in the right password by comparing the hash of what you typed and the stored hash of the password back when you created it. A hash is one-way so they can’t go from the hash to the actual password.) Alas, only a small minority of sites do this right.
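A sketch of how a site could implement this, assuming it already stores salted hashes. (A real site should use a deliberately slow hash such as bcrypt or scrypt; plain SHA-256 keeps the example short.)

```python
import hashlib
import hmac

def hash_password(password, salt):
    return hashlib.sha256(salt + password.encode()).hexdigest()

def check_login(attempt, account):
    """`account` holds 'salt', 'current_hash', 'old_hashes', 'failures'."""
    h = hash_password(attempt, account["salt"])
    if hmac.compare_digest(h, account["current_hash"]):
        return "ok"
    if any(hmac.compare_digest(h, old) for old in account["old_hashes"]):
        # Almost certainly the real user trying an out-of-date password,
        # so tell them, and don't advance the lockout counter.
        return "that used to be your password, but you have a new one"
    account["failures"] += 1   # only true mismatches count toward lockout
    return "wrong password"
```

On a password change, the site would simply move the old hash onto the old_hashes list; hashes only, never the passwords themselves.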
This is just one of many things wrong with passwords. The only positive about them is that you can keep a password entirely in your memory, and thus go to a random computer and log in with nothing but your brain. That is also part of what is wrong with them, in that others can do that too. And the remote computer can quite easily be compromised and recording the password as you type it. The most secure systems use the combination of something in your memory and information in a device. Even today, though, people are wary of solutions that require them to carry a device. Pretty soon that will change, and not having your device will be so rare as to not be an issue.
Submitted by brad on Tue, 2012-03-20 10:05.
I’m back from our fun “Singularity Week” in Tel Aviv, where we did a 2-day and a 1-day Singularity University program. We judged a contest awarding two SU scholarships to Israelis, and I spoke to groups like Garage Geeks, Israeli Defcon and GizaVC’s monthly gathering, and even went into the West Bank to address the Palestinian IT Society and announce a scholarship contest for SU.
Of course I did more photography, though the weather did not cooperate. However, you will see six new panoramas on my Israel Panorama Page and my Additional Israeli panoramas. My favourite is the shot of the Western Wall during a brief period of sun in a rainstorm.
In Ramallah, the telecom minister for the Palestinian Authority asked us, jokingly, “how can this technology end the occupation?” But I wanted to come up with a serious answer. Everybody who goes to the Middle East tries to come up with a solution or at least some sort of understanding. Israelis get a bit sick of it, annoyed that outsiders just don’t understand the incredible depth and nuance of the problem. Outsiders imagine the Israelis and Palestinians are so deep in their conflict that they are like fish who no longer see the water.
In spite of those warnings, here’s my humble proposal for how to use new media technology to help.
Take classrooms of Israelis and classrooms of Palestinians and give them a mandatory school assignment. Their assignment is to be paired with an online buddy from the “other side.” Students would be paired based on a matching algorithm, considering things like their backgrounds, language skills or languages and subjects they want to learn. The other student, with whom they would interact over online media and video-conferencing (like Skype or Google Hangouts,) would become a study partner and the students would collaborate on projects suitable to them. They might also help one another learn a language, like English, Arabic or Hebrew. Students would be encouraged to add their counterpart to their social networking circles.
Both students would also be challenged to write an essay attempting to see the world from the point of view of the other. They will not be asked to agree with it, but simply to be able to write from that point of view. And their counterpart must agree at the end that it mostly does reflect their point of view. Students would be graded on this.
It would be important not to have this be a “forced friendship.” The students would be told it was not demanded they forget their preconceptions; not demanded they agree with everything their counterpart says. In fact, they would be encouraged to avoid conflict, to not immediately contradict statements they think are false. That the goal is not to convince their counterpart of things but to understand and help them understand. And in particular, projects should be set up where the students naturally work together viewing the teachers as the common enemy.
At the end of the year, a meeting would be arranged. For example, West Bank students would be thrilled at a chance to visit the beach or some amusement park. A meeting on the West Bank border on neutral ground might make sense too, though parents would be paranoid about safety and many would veto trips by their children into the West Bank.
Would this bring peace? Hardly on its own. But it would improve things if every student at least knew somebody from outside their world, and had tried to understand their viewpoint even without necessarily agreeing with it. And some of the relationships would last, and the social networks would grow. Soon each student would have at least one person in their network from outside their formerly insular world. This would start with some schools, but ideally it would be something for every student to do. And it could even be expanded to include online pen-pals from other countries. With some students it would fail, particularly older ones whose views are already set. Alas, for younger ones, finding a common language might be difficult. Few Israelis learn Arabic, more Palestinians learn Hebrew and all eventually want to learn English. Somebody has to provide computers and networking to the poorer students, but it seems the cost of this is small compared to the benefit.
Submitted by brad on Mon, 2012-01-09 12:11.
Back to wishlists on credit cards: Every year, for tax time, I go over my downloaded credit card records and I classify them into categories. I could just try to divide out the business and personal expenses (which I handle by having credit cards for business only and for personal only) but I try to do a bit more categorization, and from time to time there’s a reason I don’t follow the strict rule about what card to use.
So I would like, when I do an online purchase, to have an optional field in the form in which I can type anything. This would get put in or added to the memo field that you get when you download your transactions into accounting software. A quick script could then turn these memos into the categories we need for accounting.
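That quick script might look something like this; the column names and keyword rules are made up, since every card's download format differs:

```python
import csv

# Map free-form memo text to accounting categories (illustrative rules).
RULES = {
    "office": "Business:Supplies",
    "hosting": "Business:Internet",
    "grocery": "Personal:Food",
}

def categorize(memo):
    memo = memo.lower()
    for keyword, category in RULES.items():
        if keyword in memo:
            return category
    return "Uncategorized"

with open("transactions.csv") as f:
    for row in csv.DictReader(f):
        print(row["date"], row["amount"], categorize(row.get("memo", "")))
```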
Since getting a new field in forms is a lot of work, card companies could also offer me a small set of similar card numbers, though there might be only one on the physical card. This could be used to do some very basic categorization on the same card. They would all download to the same account, but the last digit would show up in the memo field. I know there are cards that issued a new number for every internet transaction back in the more paranoid days, but I’m talking about a series of cards where only one digit changes (if accepted by the processors because they pass it fully along,) and I can fill in the digit I like for a given transaction. (If I wish I could also get another card made of course. In fact, that would be handy if I decide to get two cards on the same account when giving a sub-card to a family member but not wanting a completely independent account for them.)
Anybody do this?
Submitted by brad on Fri, 2012-01-06 10:44.
Over the years I have come to the maxim that “Everything should be as secure as is easy to use, and no more secure,” to steal a theme from Einstein. One of my peeves has been the many companies who, feeling that E-mail is insecure, instead send you an E-mail that tells you you have an E-mail, if you would only log onto their web site (often one you rarely log into) with the password you set up 2 years ago to read it. I often get these for things like bills and statements — “Your statement is now available online.” A few nicer ones tell me that my statement is online but the e-mail does contain the total in the statement. Only if the total is unexpected do I need to log in to see the statement.
None of these sites seem to offer me the option of saying, “My E-mail is secure, at least if you are doing your job, so just send me the data in E-mail,” or of using one of the end-to-end encrypted E-mail systems. Alas, there is more than one E-mail encryption system, but it’s not hard to do the two most popular, PGP/GPG and S/MIME, and they are fairly widely supported in mailers.
As I noted, my own mail is secure in that I run an SMTP server on my home server, and only access it over encrypted IMAP. If they have set up their server to do encrypted SMTP (which should be the default by now, frankly) then the mail is generally secure (though it does do a brief unencrypted stop at my spam filter system.)
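For the curious, here is what the encrypted-SMTP step looks like from a sending server's side, sketched with Python's standard smtplib and placeholder addresses. The sender upgrades the connection with STARTTLS before handing over the message, so it isn't readable in transit:

```python
import smtplib

with smtplib.SMTP("mail.example.com", 25) as server:
    server.ehlo()
    if server.has_extn("starttls"):
        server.starttls()   # upgrade to an encrypted channel
        server.ehlo()       # re-greet over the now-encrypted link
        server.sendmail("bank@example.com", "me@example.org",
                        "Subject: Statement\n\nYour statement total is ...\n")
    else:
        # Policy decision: refuse to send, or fall back to cleartext.
        raise RuntimeError("peer does not offer STARTTLS")
```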
However, sometimes the contents of the mail need no security, and so instead it’s just an annoyance. I have an account with Wachovia bank, and yesterday got an E-mail that there was an “important, secure E-mail” I should read on their server. After logging in, I found that all they had to say was public information about their merger with Wells Fargo, and how accounts would be shifted over. There was no reason that needed to be secure, since the only secret to reveal was that I had an account there, and the E-mail notice already revealed that.
So I wrote a note back to complain, telling them not to make me jump through hoops to read public information. What’s so much fun is the response I got back:
Thank you for contacting Wachovia. My name is Tulanee E, and I am happy
to assist you.
Mr. Templeton, I would be happy to assist you. However, to guarantee the
security of your information prior to confidential information being
disclosed or any account activities being performed we need to verify
your personal information. For this we kindly ask you to please call us
at 1-800-950-2296 to discuss this issue. Representatives are available
to assist you 24 hours a day, seven days a week.
I apologize for any inconvenience.
My goal today was to provide you a complete and helpful answer. Thank
you for banking with Wachovia.
Online Services Team
Online Customer Service: 1-800-950-2296
Submitted by brad on Sat, 2011-12-31 10:12.
Almost all credit cards will let you download transactions. Many will e-mail you a balance or payment reminder once a month, or a warning if your balance goes above a certain amount. And I’ve seen a small number that will e-mail you on every transaction.
But does anybody have a smart notification system which I can set, allowing me to be comfortable that there is no misuse of my card without filling my mailbox?
- At a basic level, notification (email or SMS) of transactions above a certain amount
- Combine that with notification when a group of small transactions exceed a set amount or an amount of time goes by (E-mail only)
- “You don’t need to notify me of this transaction” for your repeating transactions
- Easy console to turn on or off warnings and fraud alerts on foreign locations
For those of us who are on e-mail or SMS literally every day, this is much better fraud protection than the systems in use now, especially the ones that find us via denied transactions in unusual places.
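Here is a sketch of the kind of rule engine that wishlist describes; every threshold and field name is invented:

```python
import time

SETTINGS = {
    "notify_above": 200.00,          # single-transaction SMS threshold
    "small_sum_limit": 300.00,       # e-mail when small charges add up to this
    "small_sum_window": 7 * 86400,   # ...within this many seconds
    "ignore_merchants": {"Netflix"}, # known repeating transactions
}

recent_small = []   # (timestamp, amount) of small, un-notified charges

def on_transaction(merchant, amount, now=None):
    now = now or time.time()
    if merchant in SETTINGS["ignore_merchants"]:
        return None
    if amount >= SETTINGS["notify_above"]:
        return "SMS: %s charged $%.2f" % (merchant, amount)
    recent_small.append((now, amount))
    cutoff = now - SETTINGS["small_sum_window"]
    total = sum(a for t, a in recent_small if t >= cutoff)
    if total >= SETTINGS["small_sum_limit"]:
        recent_small.clear()
        return "Email: small charges total $%.2f this week" % total
    return None
```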
Update: Thanks to all commenters — some cards are providing many of these services.
Submitted by brad on Thu, 2011-12-22 15:49.
This time of year I do a lot of online shopping, and my bell rings with many deliveries. But they come today and tomorrow, not Saturday. The post office comes Saturday but has announced it wants to stop doing that to save money. They do need to save money, but this is the wrong approach. I think the time has come for Saturday and Sunday delivery to be the norm for UPS, Fedex and the rest.
When I was young almost all retailers closed on Sunday and even had limited hours on Saturday. Banks never opened on the weekend either. But people soon realized that because the working public had the weekend off, the weekend was the right time for consumer services to be operating. The weekend days are the busiest days at most stores.
The shipping companies like Fedex and UPS started up for business to business, but online shopping has changed that. They now do a lot of delivery to residences, and not just at Christmas. But Thursday and Friday are these odd days in that business. An overnight package on Friday gets there 3 days later, not 1. (If you use the post office courier, you get Saturday delivery as part of the package, and the approximately 2 day Priority mail service is a huge win for things sent Thursday.) In many areas, the companies have offered Saturday and even Sunday delivery, but only as a high priced premium service. Strangely, the weekend also produces a gap in ground shipping times — the truck driving cross-country presumably pauses for 2 days.
We online shoppers shop 7 days a week and we want our stuff as soon as we can get it. I understand the desire to take the weekend off, but usually there are people ready to take these extra shifts. This will cost the delivery companies more, as they will have to hire more workers to operate on the weekend. And they can’t just do it for ground (otherwise a 3 day package sent Friday arrives the same time as an overnight package.)
Update: I will point out that while online shopping is the David to the Goliath of brick & mortar, changing shipping to 7 days a week will mean a bunch more stuff gets bought online, and shipped, and will bring new revenue to the shipping companies. It’s not just a cost of hiring more people. It also makes use of infrastructure that sits idle 2 days a week.
This is particularly good for those who are not at home to sign for packages that come during the work week. The trend is already starting. OnTrac, which has taken over a lot of the delivery from Amazon’s Nevada warehouse to Californians, does Saturday delivery, and it’s made me much more pleased with Amazon’s service. When deliverbots arrive, this will be a no-brainer.
Submitted by brad on Mon, 2011-10-03 10:27.
I’m actually not a fan of login and sessions on the web, and in fact prefer a more stateless concept I call authenticated actions to the more common systems of login and “identity.”
But I’m not going to win the day soon on that, and I face many web sites that think I should have a login session, and that session should in fact terminate if I don’t click on the browser often enough. This frequently has really annoying results — you can be working on a complex form or other activity, then switch off briefly to other web sites or email, and come back to find that “your session has expired” and you have to start from scratch.
There are times when there is an underlying reason for this. For example, when booking things like tickets, the site needs to “hold” your pending reservation until you complete it, but if you’re not going to complete it, they need to return that ticket or seat to the pool for somebody else to buy. But many times sessions expire without that reason. Commonly the idea is that for security, they don’t want to leave you logged on in a way that might allow somebody to come to your computer after you leave it and take over your session to do bad stuff. That is a worthwhile concept, particularly for people who will do sessions at public terminals, but it’s frustrating when it happens on the computer in your house when you’re alone.
Many sites also overdo it. While airlines need to cancel your pending seat requests after a while, there is no reason for them to forget everything and make you start from scratch. That’s just bad web design. Other sites are happy to let you stay “logged on” for a year.
To help, it would be nice if the browser had a way of communicating what it knows about your session with the computer to trusted web sites. The browser knows if you have just switched to other windows, or even to other applications where you are using your mouse and keyboard. Fancier tools have even gone so far as to use your webcam and microphone to figure out if you are still at your desk or have left the computer. And you know whether your computer is in a public space, a semi-public space or an entirely private space. If a browser, or browser plug-in, had a standardized way to let a site query session status, or be informed of session changes and per-machine policy, sites could be smarter about logging you out. That doesn’t mean your bank shouldn’t still be paranoid about a session where you can spend your money, but it can be better informed about it.
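As a thought experiment, here is a minimal sketch of what a site-side query might look like, assuming a hypothetical, permission-gated navigator.sessionStatus API; nothing like this exists in browsers today:

```typescript
// Hypothetical sketch only: no navigator.sessionStatus API exists today.
// It imagines a permission-gated query a trusted site could make before
// deciding to expire a login session.
interface SessionStatus {
  userPresent: boolean;      // recent keyboard/mouse activity anywhere on the machine
  idleSeconds: number;       // time since the last input on the machine
  machinePrivacy: 'private' | 'semi-public' | 'public';  // user-declared policy
}

async function shouldExpireSession(): Promise<boolean> {
  const status: SessionStatus = await (navigator as any).sessionStatus.query();
  if (status.machinePrivacy === 'public') {
    return status.idleSeconds > 120;    // stay paranoid on public terminals
  }
  // On a private machine, only log out after a long genuine absence.
  return !status.userPresent && status.idleSeconds > 8 * 3600;
}
```

The point is not these particular thresholds, but that the expiry decision moves from a blind server-side timer to something informed by the machine's actual state and declared privacy level.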
Submitted by brad on Mon, 2011-08-22 12:12.
Today an op-ed by John Sununu and Harold Ford Jr. of “Broadband For America” (a group of cable companies and other ISPs which says it is really a grass-roots organization) declared that the net needs a better pricing model for what Netflix is doing. For a group of ISPs, they really seem to not understand how the internet works and how pricing works, so I felt it was worthwhile to describe how things work with a remarkably close analogy. (I have no association with Netflix, I am not even a customer, but I do stream video on the net.)
You can liken the internet to a package delivery service that works somewhat differently from traditional ones like the postal service or FedEx. The internet’s pricing model is “I pay for my line to the middle, and you pay for your line to the middle and we don’t account for the costs of individual traffic.”
In the package model, imagine a big shipping depot. Shippers send packages to this depot, and it’s the recipient’s job to get the package from the depot to their house. The shippers pay for their end, you pay for your end, and both share the cost of creating the depot.
Because most people don’t want to go directly to the depot to get their packages, a few “last mile” delivery companies have sprung up. For a monthly fee, they will deliver anything that shows up at the depot addressed to you directly to your house. In fact, they advertise that for the flat fee, they will deliver as many packages as show up, subject to a fairly high maximum rate per unit of time (called bandwidth in the internet world). They promote and compete on this unlimited service.
To be efficient, the delivery companies don’t run a private truck from the depot to your house all the time. Instead, they load up a truck with all the packages for your neighbourhood, and it does one delivery run. Some days you have a lot of packages and your neighbours have few. Other days you have few and they have a lot. The truck is sized to handle the high end of the total load for all the neighbours. However, it can’t cope if a large number of the neighbours all want to use a large fraction of their maximum load on the same day; the companies just didn’t buy enough trucks for that, even though that’s what they advertised they were selling.
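The underlying bet is oversubscription, and the arithmetic is simple. A toy calculation with made-up numbers (illustrative assumptions, not real ISP figures):

```typescript
// Illustrative oversubscription arithmetic. All numbers are made up for the
// example; they are not real ISP figures.
const subscribers = 500;      // homes sharing one neighbourhood link
const advertisedMbps = 20;    // peak rate each home was sold
const averageMbps = 0.5;      // historical average use per home

const soldCapacity = subscribers * advertisedMbps;  // 10,000 Mbps if all peak at once
const typicalLoad = subscribers * averageMbps;      //    250 Mbps on a normal evening

// The shared link (the "truck fleet") is sized between these two numbers,
// betting that peaks rarely coincide. Widespread video streaming raises the
// average and breaks the bet.
console.log(`sold ${soldCapacity} Mbps, provisioned for roughly ${typicalLoad * 4} Mbps`);
```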
This is not unreasonable. A majority of the businesses in the world that sell flat rate service work this way, not just internet companies. Though there are a few extra twists in this case:
- The last mile companies have a government granted franchise. Only a couple can get permission to operate. (In reality — only a few companies have got permission to have wires strung on poles or under the street.)
- Some of the last mile companies also used to be your exclusive source for some goods (in this case phone service and TV) and are concerned that now there are competitors delivering those things to the customers.
The problem arises because new services like Netflix suddenly have created a lot more demand to ship packages. More than the last mile companies counted on. They’re seeing the truck fill up and need to run more trucks. But they proudly advertised unlimited deliveries from the depot to their customers. So now, in the op-ed, they’re asking that companies like Netflix, in addition to paying the cost of shipping to the depot, pay some of the cost for delivery from the depot to the customer. If they did this, companies would pass this cost on to the customer, even though the customer already paid for that last mile delivery.
Submitted by brad on Tue, 2011-03-15 23:13.
ICANN is meeting in San Francisco this week. And they’re getting closer to finally implementing a plan they have had in the works for some time to issue new TLDs, particularly generic top level domains.
Their heart is in the right place, because Verisign’s monopoly on “.com” — which has become the de facto only space where everybody wants a domain name, if they can get it — was a terrible mistake that needs to be corrected. We need to do something about this, but the plan of letting other companies get generic TLDs which are ordinary English words, with domains like “.sport” and “.music” (as well as .ibm and .microsoft) is a great mistake.
I have an odd ambivalence. This plan will either fail (as the others like .travel, .biz, .museum etc appear to have) or it will succeed at perpetuating the mistake. Strangely it is the trademark lawyers who know the answer to this. In trademark law, it was wisely ruled centuries ago that nobody gets ownership of generic terms. But some parties will offer the $185,000 fee to own .music precisely because they hope it will give them a monopoly on naming of music related internet sites. Like all monopolies these TLDs will charge excessive fees and give poor customer service. They’ll also get to subdivide the monopoly selling domains like rock.music or classical.music. And while .music will compete with .com, the new TLDs will largely not compete with one another — ie. nobody will be debating whether to go with .music or .sport, and so we won’t get the competition we truly need.
I’ve argued this before, but I have just prepared two new essays in my DNS sub-site.
Since I don’t like either of the two main consequences, what do I propose? Well for years I have suggested we should instead have truly competitive TLDs which can compete on everything — price, policies, service, priority and more. They should each start on an equal footing so they are equal competitors. That means not giving any one a generic name that has an intrinsic value like “.music.” People will seek out the .music domain not because the .music company is good or has good prices, they will seek it out because they want to name a site related to music, and that’s not a market.
Instead I propose that new TLDs be what trademark people call “coined terms,” which are made-up words with no intrinsic meaning. Examples from the past include names like Kodak, Xerox and Google. Today, almost every new .com site has to make up a coined term because all the generics are taken. If the TLDs are coined terms, then the owners must build the value in them by the sweat of their brow (or with money) rather than getting a feudal lordship over an existing space. That means they can all compete for the business of people registering domains, and competition is what’s good for the market and the users.
Sadly the .com monopoly remains (along with the few other generic TLDs.) The answer there is to announce a phase-out. All .com sites with generic meanings should get new names in the new system, but after a year or two they would get redirects for as long as they want to pay. (Their new registrar will manage this and set the price.) All HTTP requests, in particular, would get an HTTP 301 Permanent Redirect so the browser shows the new name. E-mail MX records would be provided, but all outgoing e-mail would use the new name. All old links and addresses would still work forever, but users would switch advertising and everything else to the new names at a reasonable pace. Yes, people who invested lots of money in trying to own words like “drugstore.com” lose some of that value, but it’s value they should never have been sold in the first place. (Companies with unique strings like microsoft.com could avoid the switch, but not non-unique ones like apple.com or ibm.com.)
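The web half of that phase-out is mechanically trivial. Here is a minimal sketch of the kind of redirect a registrar could serve, assuming a plain Node.js front end and using invented example domain names:

```typescript
// Minimal sketch of the phase-out mechanics, assuming a Node.js server run
// by the registrar for a legacy generic name. The domain names here are
// invented examples, not real assignments.
import * as http from 'http';

const LEGACY_TO_NEW: Record<string, string> = {
  'drugstore.com': 'zyqura.example',   // hypothetical coined-term successor
};

http.createServer((req, res) => {
  const host = (req.headers.host ?? '').replace(/^www\./, '').toLowerCase();
  const newHost = LEGACY_TO_NEW[host];
  if (newHost) {
    // 301 Permanent Redirect: browsers show the new name, search engines
    // transfer their records, and old links keep working.
    res.writeHead(301, { Location: `https://${newHost}${req.url ?? '/'}` });
  } else {
    res.writeHead(404);
  }
  res.end();
}).listen(80);
```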
Check out the essays for the real details. Of course, at this point the forces of the “stakeholders” at ICANN are so powerful that I am tilting at windmills. They will go ahead even though it’s the wrong answer. And once done, it will be as hard to undo as .com is. But the right answer should still be proclaimed.
Submitted by brad on Wed, 2011-03-09 15:19.
In media today, it’s common to talk about three screens: Desktop, mobile and TV. Many people watch TV on the first two now, and tools like Google TV and the old WebTV try to bring interactive, internet style content to the TV. People like to call the desktop the “lean forward” screen where you use a keyboard and have lots of interactivity, while the TV is the “lean back” couch-potato screen. The tablet is also distinguishing itself a bit from the small screen normally found in mobile.
More and more people also find great value in having an always-on screen where they can go to quickly ask questions or do tasks like E-mail.
I forecast we will soon see the development of a “fourth screen” which is a mostly-always-on wall panel meant to be used with almost no interaction at all. It’s not a thing to stare at like the TV (though it could turn into one) nor a thing to do interactive web sessions on. The goal is to have minimal UI and be a little bit psychic about what to show.
One could start by showing stuff that’s always of use. The current weather forecast, for example, and selected unusual headlines. Whether each member of the household has new mail, and if it makes sense from a privacy standpoint, possibly summaries of that mail. Likewise the most recent status from feeds on twitter or Facebook or other streams. One could easily fill a screen with these things so you need a particularly good filter to find what’s relevant. Upcoming calendar events (with warnings) also make sense.
Some things would show only when important. For example, when getting ready to go out, I almost always want to see the traffic map. Or rather, I want to see it if it has traffic jams on it; there is no need to show it when it’s green — if it’s not showing, I know all is good. Likewise I may not need to see the weather if the forecast is sunny, or if it’s raining right now, but if it’s clear now and going to rain later, I want to see that. Many city transit systems have a site that tracks when the next bus or train will come to my stop — I want to see that, and perhaps at morning commute time even get an audio alert if something unusual is up or if I need to leave right now to catch the streetcar. A view from the security camera at the door should show only if somebody is at the door.
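One way to think about this is that each item on the screen is a rule: show me only when I matter. A minimal sketch of that idea, with every name and threshold invented for illustration:

```typescript
// A minimal sketch of the "show only when important" rule engine. Every name
// and threshold here is invented for illustration.
interface Conditions {
  trafficJamOnRoute: boolean;
  rainingNow: boolean;
  rainForecastLater: boolean;
  minutesToNextTransit: number;
  personAtDoor: boolean;
}

type Widget = { name: string; show: (c: Conditions) => boolean };

const widgets: Widget[] = [
  { name: 'traffic-map',   show: c => c.trafficJamOnRoute },
  // Clear now but rain later is the case worth interrupting for.
  { name: 'weather',       show: c => !c.rainingNow && c.rainForecastLater },
  { name: 'transit-alert', show: c => c.minutesToNextTransit <= 5 },
  { name: 'door-camera',   show: c => c.personAtDoor },
];

function visibleWidgets(c: Conditions): string[] {
  return widgets.filter(w => w.show(c)).map(w => w.name);
}
```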
There are so many things I want to see that we will need some UI for the less popular ones. But it should be a simple UI, with no need to find a remote (though if I have a remote — any remote — the screen should be able to use it.) Speech commands would be good for temporarily bringing up other screens and modes. A webcam (and eventually a Kinect-style sensor) for gestural UI would be nice, letting me swipe or wave to get other screens.
Submitted by brad on Fri, 2011-02-18 17:29.
You may have heard of Bus Rapid Transit — a system to give a bus line a private or semi-private right-of-way, along with bus stops that are more akin to stations than bus shelters (with ticket-taking machines and loading platforms for multiple doors.) The idea is to make bus transit competitive with light-rail (LRT) in terms of speed and convenience. Aside from getting caught in slow traffic, buses also are slow to board. BRT is hoped to be vastly less expensive than light rail — which is not hard because LRT (which means light capacity rail, not lightweight rail) has gotten up to $80 to $100M per mile. When BRT runs down the middle of regular roads, it gets signal timing assistance to help it have fewer stops. It’s the “hot new thing” in transit. Some cities even give it bits of underground or elevated ROW (the Boston Silver Line) and others just want to wall off the center of a road to make an express bus corridor. Sometimes BRT gets its own highway lane or shares a special carpool lane.
At the same time just about anybody who has looked at transit and the internet has noticed that as the buses go down the street, they travel with tons of cars carrying only one person and lots of empty seats. Many have wondered, “how could we use those empty private car seats to carry the transit load?” There are a number of ride-sharing and carpooling apps on web sites and on smartphones, but success has been modest. Drivers tend to not want to take the time to declare their route, and if money is offered, it’s usually not enough to counter the inconvenience. Some apps are based on social networks so friends can give rides to friends — great when it works but not something you can easily do on demand.
But one place I’ve seen a lot of success at this is the casual carpooling system found in a number of cities. Here in the Bay Area it’s very popular for crossing the Oakland-SF Bay Bridge, which has a $6 toll to get into SF. Crossing used to be free for 3-person carpools; now it’s $2.50, but the carpools also get a faster lane for access to the highly congested bridge, both going in and out of SF.
Almost all the casual carpool pickup spots coming in are at BART (subway) stations, which are both easy for everybody to get to and let those who can’t get a carpool just take the train. There is some irony in the fact that the carpools mostly carry people who would otherwise have ridden BART, not people who would have driven, even though getting drivers off the road is the official purpose of carpool subsidies. In the reverse direction the carpools are far fewer, with no toll to be saved, but you do get a better onramp.
People drive the casual carpools because they get something big for it: savings of over $1,000/year, and hopefully a shorter line to the bridge. This is the key factor for success in ride sharing. The riders save a similar amount of money in BART tickets, even more if they skip driving.
Let’s consider what would happen if you put in the dedicated lane for BRT, but instead of buses created an internet-mediated carpooling system. Drivers could enter the dedicated lane only if:
- They declared their exit in advance to the app on their phone, and it’s far enough away to be useful to riders.
- They agree to pick up the riders their phone assigns to them.
- They optionally get a background check that they pay for so they can be bonded in some way to do this. (Only the score of the background check is recorded, not the details.)
Riders would declare their own need for a ride, and to what location, on their own phones, or on screens mounted at “stops” (or possibly in nearby businesses like coffee shops.) When a rider is matched to a car, the rider would be informed and get to watch the approach of their ride on the map, as well as see a picture of the car and its plate number. The driver would be signaled and told by voice prompt where to go and whom to pick up. I suggest calling this Carpool-Rapid-Transit, or CRT.
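To make the matching step concrete, here is a toy sketch under big simplifying assumptions (invented types, first-match-wins policy); a real dispatcher would weigh timing, detours, capacity and the driver’s bond score:

```typescript
// Toy sketch of the CRT matching step. Types, names and the first-come-first-
// served policy are invented for illustration; a real system would weigh
// timing, detours, capacity and the driver's bond/background score.
interface Driver { id: string; declaredExits: string[]; seatsFree: number }
interface Rider  { id: string; destination: string }

function matchRider(rider: Rider, drivers: Driver[]): Driver | undefined {
  // Any driver passing the rider's destination with a free seat qualifies.
  const candidates = drivers.filter(
    d => d.seatsFree > 0 && d.declaredExits.includes(rider.destination),
  );
  const driver = candidates[0];   // simplest policy: first match wins
  if (driver) driver.seatsFree -= 1;
  return driver;
}

// Example: a driver exiting at "downtown" is assigned a downtown-bound rider.
const drivers: Driver[] = [
  { id: 'car-42', declaredExits: ['midtown', 'downtown'], seatsFree: 2 },
];
console.log(matchRider({ id: 'rider-7', destination: 'downtown' }, drivers)?.id);
```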