Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.
This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see menu on right) and some are sub blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.
Like most people, I have a lot of different passwords in my brain. While we really should have used a different system from passwords for web authentication, that’s what we are stuck with now. A general good policy is to use the same password on sites you don’t care much about and to use more specific passwords on sites where real harm could be done if somebody knows your password, such as your bank or email.
The problem is that over time you develop many passwords, and sometimes your browser does not remember them for you. So you go back to a site and try to log in, and you end up trying all your old common passwords. The problem: At many sites, if you enter the wrong password too many times, they lock you out, or at least slow you down. That’s not unwise on their part, but a problem for you.
One solution: Sites can remember hashes of your old passwords. If you type in an old password, they can say, “No, that used to be your password but you have a new one now.” And not count that as a failed attempt by a password cracker. This adds a very slight risk, in that it lets a very specific attacker who knows you super well get a few free hits if they have managed to learn your old passwords. But this risk is slight.
Of course they should store a hash of the password, not the actual password. No site should store the actual password. If a site can offer to mail you your old password rather than a link to reset it, it means they are keeping the password around. That’s a security risk for you, and it also means that if you use a common password on such sites, they now know it and can log in as you on every other site where you use that password. (A hash lets them tell whether you have typed the right password by comparing the hash of what you typed against the hash stored when you created the account. A hash is one-way, so they can’t go from the hash back to the actual password.) Alas, it’s hard to tell when creating an account whether a site stores the password or just a hash of it, and only a small minority of sites do this right.
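For the curious, here is a minimal sketch in Python of what the "remember old password hashes" policy could look like, using the standard library's PBKDF2. This is illustrative only: the `Account` class and method names are my own, a real site would use a per-password salt and a vetted authentication framework, and the lockout logic is reduced to a bare counter.

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # A slow, salted one-way hash; never store the plain password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class Account:
    def __init__(self, password: str):
        # Simplification: one salt reused across password changes,
        # so old and new hashes can be compared with one computation.
        self.salt = os.urandom(16)
        self.current = hash_password(password, self.salt)
        self.old_hashes = []          # hashes of retired passwords
        self.failed_attempts = 0      # counts toward lockout

    def change_password(self, new_password: str):
        self.old_hashes.append(self.current)
        self.current = hash_password(new_password, self.salt)

    def check_login(self, attempt: str) -> str:
        h = hash_password(attempt, self.salt)
        if h == self.current:
            self.failed_attempts = 0
            return "ok"
        if h in self.old_hashes:
            # Don't count this toward lockout: it is almost certainly
            # the real user trying a stale password, not a cracker.
            return "old-password"
        self.failed_attempts += 1
        return "wrong"
```

The key line is the `old-password` branch: the site can tell the user "that used to be your password" without ever having stored the password itself, and without spending one of their allowed failed attempts.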
This is just one of many things wrong with passwords. The only positive is that you can keep a password entirely in your memory, and thus go to a random computer and log in with nothing but your brain. That is also part of what is wrong with them, in that others can do that too, and the remote computer can quite easily be compromised and recording the password. The most secure systems use the combination of something in your memory and information in a device. Even today, though, people are wary of solutions that require them to carry a device. Pretty soon that will change, and not having your device will be so rare as to not be an issue.
I’m doing a former-cold-war tour this month and talking about robocars.
This Friday, May 11, I will be giving the 2301st lecture for the Philosophical Society of Washington with my new, Prezi-enabled robocars talk. This takes place around 8pm at the John Wesley Powell Auditorium. This lecture is free.
A week later it’s off to Moscow to enjoy the wonders of Russia.
There will be a short talk locally in between at a private charity event on May 14.
I found that this recent article from the editor of the MIT Tech Review on why apps for publishers are a bad idea touched on a number of key issues I have been observing since I first got into internet publishing in the 80s. I recommend the article, but if you insist, the short summary is that publishers of newspapers and magazines flocked to the idea of doing iPad apps because they could finally make something they sort of recognized as similar to a traditional publication: something they controlled and laid out, that was a combined unit. So they spent lots of money, ran into nightmares (having to design for both landscape and portrait on the tablet, as well as possibly on the phones or even Android), and didn’t end up selling many subscriptions.
Since the dawn of publishing there has been a battle between design and content. This is not a battle that has or should have a single winner. Design is important to enjoyment of content, and products with better design are more loved by consumers and represent some of the biggest success stories. Creators of the content — the text in this case — point out that it is the text where you find the true value, the thing people are actually coming for. And on the technology side, the value of having a wide variety of platforms for content — from 30” desktop displays to laptops to tablets to phones, from colour video displays to static e-ink — is essential to a thriving marketplace and to innovation. Yet design remains so important that people will favour the iPhone just because they are all the same size, and most Android apps still can’t be used on Google TV.
This is also the war between things like PDF, which attempts to bring all the elements of paper-based design onto the computer, and the purer forms of SGML, including both original and modern HTML. Between WYSIWYG and formatting languages, between semantic markup and design markup. This battle is quite old, and still going on. For many designers, layout is all they do, and the idea that a program should lay out text and other elements to fit a wide variety of display sizes and properties is anathema. To technologists, the idea that layout should be fixed is almost as much of an anathema.
Also included in this battle are the forces of centralization (everything on the web or in the cloud) and the distributed world (custom code on your personal device) and their cousins online and offline reading. A full treatise on all elements of this battle would take a book for it is far from simple.
I sit mostly with the technologists, eager to divide design from content. I still write all my documents in text formatting languages with visible markup and use WYSIWYG text editors only rarely. An ideal system that does both is still hard to find. Yet I can’t deny the value and success of good design, and I believe the best path is to compromise in this battle. We need compromises in design and layout, and we need compromises between the cloud and the dedicated application. End-user control leads to some amount of chaos. It’s chaos that is feared by designers and publishers and software creators, but it is also the chaos that gives us most of our good innovations, which come from the edge.
Let’s consider all the battles I perceive for the soul of how computing, networks and media work:
The design vs. semantics battle (outlined above)
The cloud vs. personal device
Mobile, small and limited in input vs. tethered, large screen and rich in input
Central control vs. the distributed bazaar, with so many aspects, such as:
The destination (facebook) vs. the portal (search engine)
The designed, uniform, curated experience (Apple) vs. the semi-curated (Android) vs. the entirely open (free software)
The social vs. the individual (and social comment threads vs. private blogs and sites)
The serial (email/blogs/RSS/USENET) vs. the browsed (web/wikis) vs. the sampled (facebook/twitter)
The reader-friendly (fancy sites, well filtered feeds) vs. writer friendly (social/wiki)
In most of these battles both sides have virtues, and I don’t know what the outcomes will be, but the original MITTR article contained some lessons for understanding them.
I have not intended for this blog to become totally about robocars but the news continues to flow at a pace more rapid than most expected.
Nevada has issued its first licence for an autonomous car — to Google, of course. This is a testing licence with a special red plate with an infinity symbol on it. It’s a cool looking licence but what’s really cool is that even in the 2000s when I would give talks on this technology and get called a ridiculous optimist, I never expected that we would see an official licenced robocar in the USA in the spring of 2012 — even if only for testing.
This is a picture of a car with a California plate. The new plate has licence number 001; you can see a picture here.
The Nevada law enabled both the testing of vehicles in the state and their eventual operation by regular owners. For testing, the vehicles need to have two people in them, as has been normal Google policy. They must do 10,000 miles first off of Nevada roads — either on test tracks, or in the case of the early vehicles, in other states that don’t have a 10,000 mile requirement. German auto and tire supplier Continental has said it’s been racking up the 10,000 miles and wants to apply, and press reports say other applicants are in the wings. As far as I know this is the first officially licenced car in the world, though several other research cars have gotten special one-off permits to allow them to be tested on the roads in places like Germany and China.
More information has come from the Google team (to which I am a consultant) at the Society of Automotive Engineers conference in Detroit. In a speech there, covered in the Detroit Free Press and many others, Anthony Levandowski outlined how Google has been talking to all significant car manufacturers about how they might work together to produce cars with Google’s technology. Google is not looking to become a car manufacturer, but does want to see a real car on the roads — and not next decade.
At the same time, talks with insurance companies about how to provide insurance for self-driving cars are also going on. Insurance companies pay the cost of all accidents, either directly through policies bought by the driver, or indirectly through insurance sold to manufacturers, and of course all these policies and cars are really paid for by car owner/drivers. As long as accidents are lowered, and the cost per accident remains the same, it’s a win.
At the same time J.D. Power and Associates released a study on self-driving car markets. This survey shows around a third of buyers would like to get self-driving functionality in their car, and about 20% would pay $3,000 for it. While advanced laser-based scanners cost much more than that today, I am confident that Moore’s Law and higher volumes can bring things down to that price. These numbers are quite high for such a radical new technology. Such technologies normally only require a small volume of early adopters to get them going. The various basic autopilots announced by car manufacturers, which require you to still keep your attention on the road, will sell for well under $3,000.
Sebastian Thrun, leader of the Google X Lab, recently appeared on Charlie Rose where he spoke about the car, about Glass, and mostly about Udacity, his personal online education project. Sebastian also publicly posted that he took one of the Google self-driving Lexus cars up to Lake Tahoe this weekend. I do think those long vacation home drives will be a big driver of people to pay serious money for a self-driving car. Saving time on the average 30 minute commute is one thing, but the 4 hour drive to Lake Tahoe is a real change, especially if you can use the time to interact with your family or get in serious reading or video watching. Of course, right now, Sebastian was keeping his eyes on the road in case he needed to intervene, since this is still a prototype.
Finally, NHTSA has released a report saying that robocars could eliminate up to 80% of crashes. While they won’t get to that number right away, I think they can even do better in time. David Strickland, the head of NHTSA, has stated he has very high hopes for the technology, which is tremendous news, because it means that one of my biggest fears in my early days of forecasting this technology — too much government opposition — seems less likely.
Some accidents are caused by mechanical failures (like tire blowouts or bad brakes), freak weather and other situations a self-driving car can’t do much about. We may never get to zero. But this should still be the biggest lifesaver in the developed world until somebody cures some of the biggest diseases.
While Mercedes has been reported as promising a traffic-jam autopilot in the 2013 S class due later this year, I was surprised to learn that Honda briefly made claims that their 2006 “Accord ADAS” in the UK was a self-driving car.
However this car is, as the name suggests, an ADAS car with Honda’s lane-keeping system which will nudge the car back into the lane if you drift out of it. Such lane keeping systems have indeed been around for a while. This car notices if you keep your hands off the wheel for more than a short time, and sounds an alarm. In order to “self-drive” the demonstrator keeps his hands close to the wheel and touches it every so often to avoid the alarm. You get the impression that he and others have been using the car in this fashion.
It is no idle alarm. The LKAS nudge is not quite powerful enough to steer the car through any kind of real turn, and the camera finding lane markers of course occasionally fails to find them. This, again, is common in fancy ADAS cars. What is interesting is that Honda allowed this to be pitched as an attempt at self-driving. They have not done this recently, though lane-keep ADAS systems have continued to be available since then from Honda and other vendors.
Honda has been generally not too active in announcements of self-driving cars. They have shown concept cars that listed self-driving as one of the features, but these were concept cars, not actual implementations. Toyota and Nissan have both made various announcements. The smaller Japanese companies (Mazda, Mitsubishi and Subaru/Fuji) also have no public projects.
On a second note, I will be speaking Wednesday morning at the MLOVE Conference in Monterey on self-driving cars. Then I will be heading over to the Asilomar Microcomputer Workshop — a 35-year-old conference I’ve been going to for decades, which happens to be in the same place at the same time.
In the Cadillac video below, they explain the system as a combination of ACC, lane-keeping and GPS. This is similar to the other announced plans from many other car companies, including Mercedes, BMW, VW/Audi and others. The use of GPS suggests the car may also use map information, which is not known to be used by the other announced products, but is heavily used by Google and the various eyes-free projects.
It is pure speculation, but perhaps they are building maps of where the lane markers are reliable and where they have faded out, so that they can refuse to super-cruise when approaching those zones. They might also use the GPS to ensure you super-cruise only on the highway or in other limited areas.
In the video, which shows a demo at about the 1:10 mark, they are driving on a test track, and always next to a blue line along the lane markers. Obviously a real product could not depend on special lane striping if it wants to be broadly usable, but this may assist them in testing their system with confidence. (ie. compare what their lane-finder detects to what an independent system that tracks the blue line detects.)
GM has had various self-driving projects, including the futuristic EN-V and the sponsorship of BOSS in the Darpa Urban Challenge. The Cadillac brand is well positioned. Self-driving is initially going to be a luxury feature, but companies that sell sporty performance cars don’t want to detract from their image as selling a fun driving experience. A pure luxury brand like Cadillac does not have as much of that problem as BMW and Mercedes have. At the same time, the video insists that they don’t want to take away from driving.
It’s been interesting to see how TV shows from the 60s and 70s are being made available in HDTV formats. I’ve watched a few of Classic Star Trek, where they not only rescanned the old film at better resolution, but also created new computer graphics to replace the old 60s-era opticals. (Oddly, because the relative budget for these graphics is small, some of the graphics look a bit cheesy in a different way, even though much higher in technical quality.)
The earliest TV was shot live. My mother was a TV star in the 50s and 60s, but this was before videotape was cheap. Her shows were all done live, and the only recording was a Kinescope — a film shot off the TV monitor. These kinneys are low quality and often blown out. The higher budget shows were all shot and edited on film, and can all be turned into HD. Then broadcast quality videotape got cheap enough that cheaper shows, and then even expensive shows, began being shot on it. This period will be known in the future as a strange resolution “dark ages” when the quality of the recordings dropped. No doubt they will find today’s HD recordings low-res as well, and many productions are now being shot on “4K” cameras which have about 8 megapixels.
But I predict the future holds a surprise for us. We can’t do it yet, but I imagine software will arise that can take old, low quality videos and turn them into something better. It will do this by actually modeling the scenes that were shot, to create higher-resolution images and models of all the things which appear in the scene. For this to work, everything in the scene must move relative to the camera: either the object itself moves (as people do) or the camera must pan over it. In some cases having multiple camera views may help.
When an object moves relative to a video camera, it is possible to capture a static image of it at sub-pixel resolution. That’s because the multiple frames can be combined to generate more information than is visible in any one frame. A video taken with a low-res camera that slowly pans over an object (in both dimensions) can produce a hi-res still. In addition, for most TV shows a variety of production stills are also taken at high resolution, and from a variety of angles, for publicity and also for continuity. If these exist, it makes the situation even easier.
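The principle can be illustrated with a toy one-dimensional “shift and add” reconstruction. Real multi-frame super-resolution must estimate the sub-pixel motion and deconvolve the camera’s optics; in this sketch the shifts are assumed to be known exactly and expressed in whole high-res units, and the function is my own illustration rather than any existing tool.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Naive multi-frame super-resolution ("shift and add").

    frames: list of low-res 1-D signals
    shifts: known shift of each frame, in high-res grid units
    factor: upsampling factor
    """
    n_hi = len(frames[0]) * factor
    acc = np.zeros(n_hi)
    count = np.zeros(n_hi)
    for frame, shift in zip(frames, shifts):
        for i, value in enumerate(frame):
            # Each low-res sample lands on the high-res grid
            # at its shifted position.
            j = i * factor + shift
            acc[j] += value
            count[j] += 1
    # Average overlapping samples; interpolate over any
    # high-res positions no frame ever covered.
    filled = count > 0
    grid = np.arange(n_hi)
    return np.interp(grid, grid[filled], acc[filled] / count[filled])
```

With four frames each shifted by one high-res step, the four coarse samplings interleave and the fine signal is recovered, even though no single frame contains it; that is the “more information than any one frame” at work.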
By now, you’ve probably heard of the proposal from the White House to abolish April Fool’s Day as a national holiday starting in 2015. Some in the comedy community are upset at the end of an old tradition and a day devoted to what we love.
But it’s time to face facts. It’s just not working any more. When I was a kid, April 1st was mostly a day of physical pranks or very short gags. You would replace the sugar with salt or put a white powder in an envelope. But the internet changed it and made every gag global.
The key to a good gag was the person believing in the gag and then suddenly remembering what day it was. If you were lucky they didn’t clue in and you could exclaim “April Fool” for much hilarity.
It was common in days past for people to forget what day it was. One of my best pranks came decades ago, when I posted in Science Fiction forums on April 1 that Fred Saberhagen’s “Berserker” novels were a rip-off of the fine original Battlestar Galactica series. Over 70 different people posted rants about how stupid I was, and a serious fraction of them pointed out that the Saberhagen books long predated Galactica, and said things like “why don’t you check the dates on what you read?”
Now, nobody is surprised. Google has 13 different gags up today, including one on the front page. Every major web site has a gag, many have long traditions. Perhaps somebody is briefly surprised by the first one, but generally everybody knows what day it is and nobody is fooled.
Some have proposed that the national Fool’s day be moved to a random day each year, with not much promotion done about what the date is. People who were funny (or thought they were funny) would make sure they knew the date. I am not sure that’s enough — it would help make the first gag a surprise but soon the tolerance would build up.
A bit better is the proposal from the National Comedy & Gag Association to have a different day in each state, as proclaimed by the Governor, or even in every city. This would allow surprise, because when you read jokes from other geographic regions, you might see only half a dozen on any given day. You would then have to research the location of the joke and check whether that location is having its local Fool’s day that day.
Can anything restore the sanctity of this holiday? It may be that this is one thing the internet has destroyed.
I recently updated my book recommendation box to list the very best recent SF to read from the last few years. This is SF that meets my goals for great SF: I seek somewhat “hard” SF that speaks about important and real ideas, while being entertaining writing at the same time.
The Quantum Thief by Hannu Rajaniemi (2011)
This astounding first novel rates as best of 2011 for me. Except it came out in 2010, but in limited release in the UK, so most people did not see it until 2011. An amazingly constructed post-singularity world that deserves all the superlatives. The next book is eagerly awaited. Particularly remarkable is that Rajaniemi is a Finn, so I presume English is not his first language. It is disappointing that it did not receive a Hugo nomination.
Super Sad True Love Story by Gary Shteyngart (2010)
This novel was paid surprisingly little attention by the SF community, but in fact it’s the best SF novel of 2010. A wonderful dystopian view of a failing USA where only dollars backed by the Yuan are valuable and the coveted jobs are in retail and media. A dark view of whuffie-like reputation where everybody’s credit score is displayed everywhere they go, and at every gathering everybody is rated on fuckability (and you see where you stand.) The anti-hero works for an anti-aging company that is a marvelous parody, but the topics are deep and serious. Not even nominated for the Hugo, which is a terrible mistake.
The City and the City by China Miéville (2009)
The best of 2009 (tied for the Hugo award, too.) The City and the City at first may not seem like SF because the cities are so implausible, but it’s really a fun experiment in social or political science to imagine two towns co-existing like this, partly overlaid in space while the residents are trained from birth to pay no notice to the other city. This is probably the weakest on this list, and indeed the co-winner that year (The Windup Girl) was almost anti-SF, as the science in it was fully bogus. But CatC grew on me as I came to see it as alternate-social worldbuilding.
Anathem by Neal Stephenson (2008)
It came 2nd for the Hugo, but even the winner, Neil Gaiman, declared it should have won. Read my full review.
Rainbows End by Vernor Vinge (2006)
The Hugo Winner for 2006 is also my pick for the best of the decade. If you like your SF full of wonderful new ideas, in this case related to the near future rather than the more abstract distant ones seen in earlier Vinge triumphs, this is the book for you. The protagonist has recently been cured of Alzheimer’s but that doesn’t mean many of his memories weren’t destroyed. He tries to fit into a world where everybody wears augmented reality lenses and clothes, education and play are radically different and a conspiracy is trying to develop a drug that makes you more accepting of suggestions. Note that 2006 also included the excellent Blindsight by Peter Watts available free here.
Other great reads
As noted above check out Embassytown (nominated for the Hugo in 2012) and other Miéville works, and Blindsight by Peter Watts.
If you like Zombies, read Feed by Mira Grant — or rather read it for its treatment of a future, blogger-centered media world. It and its sequel were/are Hugo nominated. Several by Charlie Stross rate highly, such as Halting State, which is probably the best SF novel of 2007 — though the alternate history and Hugo winner The Yiddish Policemen’s Union is a better overall novel. And if you’re from the 80s like me you will want to read the recent Ready Player One, a novel about a world where the now richest man in the world created a globe-spanning MMORPG, and then willed it to whoever could solve a challenge in it. To win, you needed to know all the obscure 70s and 80s culture references that were dear to the deceased programmer.
Going back in the decade, 2004 was also a very strong year, with River of Gods being worthy of a best-of-decade list, and The Algebraist and Iron Sunrise (particularly for its wonderful reMastered cult of the unborn god) are also very strong. 2006 had the very fun Old Man’s War as a fine debut novel, and Accelerando is superb (indeed unmatched until Rainbows End) for its ideas but lacking in its characters — Stross gets better at this later.
Today Google released a new 3 minute video highlighting advanced self-driving car use. Here I embed the video, discussion below includes some minor spoilers on surprises in the video. I’m pleased to see this released as I had a minor & peripheral role in the planning of it, but the team has done a great job on this project.
This video includes active operation of the vehicle on not just ordinary streets, but also private parking lots, for door-to-door transportation. You can click on it to see it in HD directly on YouTube.
For some time, the US Postal Service has allowed people to generate barcoded postage. You can do that on the expensive forms of mail such as priority mail and express mail, but if you want to do it on ordinary mail, like 1st class mail or parcel post, you need an account with a postage meter style provider, and these accounts typically include a monthly charge of $10/month or more. For an office, that’s no big deal, and cheaper than the postage meters that most offices used to buy — and the pricing model is based on them to some extent, even though now there is no hardware needed. But for an ordinary household, $120/year is far more than they are going to spend on postage.
There is one major exception I know of — if you buy something via PayPal, they allow you to print a regular postage shipping label with electronic postage. This is nice and convenient, but no good for sending ordinary letters and other small items.
I think the USPS is shooting itself in the foot by not letting people just buy postage online with no monthly fee. The old stamp system is OK for regular letters, and indeed they finally changed things so that old first class stamps still work after price raises, but for anything else you have to keep lots of stamps in supply and you often waste postage, or make a trip to a mailing office. This discourages people from using the post office, and will only hasten its demise. Make it trivial to mail things and people will mail more.
It could be a web-printed mailing label like the one you can use for priority mail, but most software vendors would quickly support such a system. If people wanted, they could even buy “stamps” which were collections of electronic postage in various denominations that could be used by programs, so there is no need to handle transactions. Address label printers would all quickly do postage too.
Of course the official suppliers like Endicia and stamps.com would fight this completely. They love being official suppliers and charging large fees. They have more lobbying power than ordinary mailers. So the post office is going to quietly slip away into that good night, instead of taking advantage of the fact that it’s the one delivery company that comes to my door every day (for both pick-up and delivery) and all the efficiencies that provides.
Sometimes when I travel I see a great idea that hasn’t yet spread everywhere. A parking garage I parked at in Tel Aviv had LEDs visible on the roof above every stall: red if the stall was full, green if it was empty. So it was quick to find an empty stall. This probably makes the garage more efficient because people don’t have to circle hunting for a spot, and this justifies the cost. (The main cost of these is probably wiring the power for them.)
I’ve seen studies claiming that in busy areas, up to 30% of the traffic is cars circling looking for parking. Mostly they are looking for free parking or convenient on-street parking, since parking garages, though expensive, can usually be found and entered quickly. Indeed, while on-street parking is often much more convenient, in many cases this is an artifact of parking being subsidized (because it’s free, or free to people who live in an area) or cheaper than commercial parking markets. But we don’t seem ready to fix that, though many cities put restrictions on street and metered parking, limiting the number of hours so that it is in theory only for visitors rather than all-day parkers.
There are many companies trying to see if they can improve parking using mobile devices and the internet. There are companies with sensors that manage parking spaces, companies that let you find spaces on a mobile device and even enter a garage with your mobile device. In some cases you can even extend your parking (if you prepaid) over the phone. Cities have been moving away from traditional meters to things like block meters (where you get a ticket and then put it on your dash) or fancy enforcement vehicles with licence plate cameras that spot not only if you are in a spot too long, but if you move within the busy zone to another spot.
As a user of parking, I would like to know I’ve got a good spot lined up before I get to my destination, and just pull right into it. I want a competitive market but I don’t want to waste time and gas hunting. There are companies trying to address this, though mostly in commercial lots. It’s mostly pretty basic right now — it’s considered fancy to even have sites like parkopedia or bestparking with a database of the parking in a city with the prices so you can comparison shop the parking lots.
So now for some rambling on what might be done on-street.
You may not know the name of Continental, but they are a major supplier of components to the big automakers. A story in the Detroit Free Press details their latest project in autonomous driving. This is a VW Passat using radar Automatic-Cruise-Control combined with lane-keeping, similar to projects announced by Mercedes and VW/Audi itself. The story has a video showing the screen of the car displaying its lane-keeping. The car also has side radars to track vehicles or barriers to the left and right, according to the story. It’s aimed at stop-and-go traffic and empty highway. If it’s like the other products it requires constant human supervision, as it is not safe to not look at the road in case the lane markers vanish or other unexpected problems occur.
They claim they have done 6500 miles in Michigan, and that soon they will have the 10,000 needed for a testing licence in Nevada. The new Nevada law allows developers of robocars to test on Nevada highways once they have shown 10,000 miles on a test track or in another state, and under special testing rules. (The Google cars have over 200,000 miles in California and Nevada.)
The Nevada regulations specifically exempt vehicles which require full-time human supervision, so in theory they don’t need a Nevada testing licence if this is such a vehicle. If it is planned to operate without such supervision, it needs the licence and is more advanced than the other systems of this type.
An interesting note about the photo, credited to Conti — if this car actually does qualify as an autonomous car in Nevada, then that picture of the car robo-driving in Las Vegas presumably was taken before the regulations came into effect.
I’m back from our fun “Singularity Week” in Tel Aviv, where we did a 2 day and 1 day Singularity University program. We judged a contest for two scholarships by Israelis for SU, and I spoke to groups like Garage Geeks, Israeli Defcon, GizaVC’s monthly gathering and even went into the west bank to address the Palestinian IT Society and announce a scholarship contest for SU.
Of course I did more photography, though the weather did not cooperate. Still, you will see six new panoramas on my Israel Panorama Page and my Additional Israeli panoramas page. My favourite is the shot of the Western Wall during a brief period of sun in a rainstorm.
In Ramallah, the telecom minister for the Palestinian Authority asked us, jokingly, “how can this technology end the occupation?” But I wanted to come up with a serious answer. Everybody who goes to the Middle East tries to come up with a solution, or at least some sort of understanding. Israelis get a bit sick of it, annoyed that outsiders just don’t understand the incredible depth and nuance of the problem. Outsiders imagine the Israelis and Palestinians are so deep in their conflict that they are like fish who no longer see the water.
In spite of those warnings, here’s my humble proposal for how to use new media technology to help.
Take classrooms of Israelis and classrooms of Palestinians and give them a mandatory school assignment: to be paired with an online buddy from the “other side.” Students would be paired by a matching algorithm that considers things like their backgrounds, their language skills, and the languages and subjects they want to learn. The other student, with whom they would interact over online media and video-conferencing (like Skype or Google Hangouts), would become a study partner, and the students would collaborate on projects suitable to them. They might also help one another learn a language, such as English, Arabic or Hebrew. Students would be encouraged to add their counterpart to their social networking circles.
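To make the pairing idea concrete, here is a minimal sketch of one possible matching algorithm. Everything here is an assumption for illustration: the attribute names (`subjects`, `speaks`, `wants`), the scoring formula, and the greedy matching strategy are all hypothetical, not a real system.

```python
# Hypothetical sketch: pair students from two classrooms by a simple
# compatibility score (shared subjects plus languages they can teach
# each other), then match greedily from the best-scoring pairs down.
from itertools import product

def score(a, b):
    """Higher is better: shared subjects + mutually teachable languages."""
    shared = len(set(a["subjects"]) & set(b["subjects"]))
    teachable = len(set(a["speaks"]) & set(b["wants"])) + \
                len(set(b["speaks"]) & set(a["wants"]))
    return shared + teachable

def pair_students(side_a, side_b):
    """Greedy matching: repeatedly take the highest-scoring unmatched pair."""
    candidates = sorted(product(side_a, side_b),
                        key=lambda p: score(*p), reverse=True)
    matched, pairs = set(), []
    for a, b in candidates:
        if a["name"] not in matched and b["name"] not in matched:
            pairs.append((a["name"], b["name"]))
            matched.update([a["name"], b["name"]])
    return pairs
```

A real deployment would want a fairer matching method (greedy matching can leave poor pairs at the bottom), but even this simple version captures the idea of pairing on complementary languages and shared interests.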
Both students would also be challenged to write an essay attempting to see the world from the point of view of the other. They would not be asked to agree with it, simply to be able to write from that point of view, and their counterpart would have to agree at the end that the essay mostly does reflect their point of view. Students would be graded on this.
It would be important not to have this be a “forced friendship.” The students would be told that they were not expected to forget their preconceptions, nor to agree with everything their counterpart says. In fact, they would be encouraged to avoid conflict, and not to immediately contradict statements they think are false. The goal would not be to convince their counterpart of things, but to understand and to help them understand. In particular, projects should be set up where the students naturally work together, viewing the teachers as the common enemy.
At the end of the year, a meeting would be arranged. For example, West Bank students would be thrilled at a chance to visit the beach or an amusement park. A meeting on neutral ground at the West Bank border might make sense too, though parents would be paranoid about safety, and many would veto trips by their children into the West Bank.
Would this bring peace? Hardly, on its own. But it would improve things if every student at least knew somebody from outside their world, and had tried to understand that person’s viewpoint without necessarily agreeing with it. Some of the relationships would last, and the social networks would grow. Soon each student would have at least one person in their network from outside their formerly insular world. This would start with some schools, but ideally it would become something for every student to do, and it could even be expanded to include online pen-pals from other countries. With some students it would fail, particularly older ones whose views are already set, and for younger ones, finding a common language might be difficult. Few Israelis learn Arabic, more Palestinians learn Hebrew, and all eventually want to learn English. Somebody would have to provide computers and networking to the poorer students, but the cost of this seems small compared to the benefit.
A recent article on bicycles and pedestrians in the robocar world appears at the Greater Washington web site, which has taken an interest in robocar topics. In particular they are concerned about the vision of a reservation-based intersection, which does not use traffic signals. These designs from U of Texas got a lot of press in the last few weeks after a presentation at AAAS, but they’ve been around for years and I have a number of links to them. What’s new is that the coming of robocars makes them seem more practical.
In a reservation-based intersection, the computer handling the intersection hands out slots to cross it. The slots are moving boxes that you have reserved, and you cross inside them. The computer hands out the boxes so they never hit one another. The simulated result would at first scare people to death, though over time they might come to trust it. However, it requires that every car on the road have automatic operation, since deviating from your reserved box does indeed mean serious risk; human judgement just would not cut it here. As such, intersections like this are a long, long way away.
Closer, I think, is the concept of reservation-based roads. These are road segments which hand out long-term slots, such as “You can drive this block between 8:30 and 9am.” The road only hands out as many slots as it can handle, but does not try to schedule the cars down to the square-foot-second. In such a system, as you approach that block on your trip, you would refine and correct the initial reservation, so that by the time you are a minute away, your window is just a few minutes long. If roads can do this, they can assure, well in advance, that they never take on more cars than they can handle, which reduces the odds that traffic will collapse due to congestion. The biggest cause of congestion is basic excess of demand over supply — accidents are the #2 cause.
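The coarse-slot scheme above can be sketched in a few lines. This is an illustrative toy, not any real protocol: the window labels, the per-window capacity, and the `reserve`/`refine` operations are all assumptions made up for the example.

```python
# Minimal sketch of a reservation-based road segment: each time window
# holds at most `capacity` cars, and a car can later narrow its coarse
# reservation to a tighter window as it approaches.
from collections import defaultdict

class RoadSegment:
    def __init__(self, cars_per_window):
        self.capacity = cars_per_window
        self.reservations = defaultdict(set)   # window label -> set of car ids

    def reserve(self, car_id, window):
        """Grant a slot in the given window unless the segment is full."""
        if len(self.reservations[window]) >= self.capacity:
            return False
        self.reservations[window].add(car_id)
        return True

    def refine(self, car_id, old_window, new_window):
        """Move a car's coarse reservation to a narrower window, if room."""
        if car_id in self.reservations[old_window] and \
                self.reserve(car_id, new_window):
            self.reservations[old_window].discard(car_id)
            return True
        return False
```

The key property is visible even in the toy: once a window is full, further requests are refused, so demand can never exceed the segment's declared capacity.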
Such a system can also handle human-driven cars. Those cars are a bit less predictable and need wider reservation windows. They will also eventually need more space on the road, since robocars will start packing themselves closer together once they are common enough to do so. Half-width robocars will commonly pair up in a lane with other half-width vehicles.
So what about the bicycles? It will be daunting for them. If there is a bike lane, that’s great of course. At “bike rush hour” we could even make sure “parked” robocars get out of the way to create a bike lane if that’s what we want. (We may want another car lane even more.) Otherwise, a virtual bike lane could be created where bikes have to ride with the traffic.
Bikes do present a safety issue, to be sure. In the worst case, a cyclist can fall off their bike and stop immediately, lying in the road. A vehicle following a bike has to leave enough space to assure it can stop before that point, including reaction time, and reaction time should be better for robocars than for humans. Humans don’t leave enough space right now. We leave even less space behind cars because cars can’t actually stop super fast, you brake along with them, and if you hit one at slow speed it’s “tolerable” — nobody will be seriously hurt. Hitting a cyclist or pedestrian at slow speed can mean death.
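The following-distance point comes straight from the standard stopping-distance formula: the distance covered during the reaction time, plus the braking distance v²/2a. The speed, reaction times and deceleration below are illustrative assumptions, not measured figures.

```python
# Back-of-envelope stopping distance behind a cyclist: reaction distance
# plus braking distance (v^2 / 2a). All numbers are illustrative.
def stopping_distance(speed_mps, reaction_s, decel_mps2):
    """Total distance to stop: distance covered while reacting, then braking."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

# At roughly 30 mph (13.4 m/s), with ~7 m/s^2 of braking available,
# a human reacting in ~1.5 s needs far more margin than a machine at ~0.2 s.
human = stopping_distance(13.4, reaction_s=1.5, decel_mps2=7.0)   # ~32.9 m
robot = stopping_distance(13.4, reaction_s=0.2, decel_mps2=7.0)   # ~15.5 m
```

Under these assumed numbers, the robocar's faster reaction cuts the required gap roughly in half, which is the sense in which robocars can follow a cyclist more safely than humans do.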
(Head-on collisions are a different matter and they can cause great mayhem. I believe that moving mostly to one-way streets is the best solution to the problem of head-ons, and with robocars, the inconvenience of one-way streets can be greatly reduced.)
Robocars should end up much better at spotting cyclists than humans are, because robocar vision is 360 degrees and in 3-D. There are no blind spots in a robocar system, and it’s always paying attention in all directions. The only negative in spotting them is their small size. A bike that appears out of nowhere from behind an obstruction is always at risk, with both robocars and human drivers. Robocars will work very hard not to hit cyclists, and in fact on the future street that’s 100% robocar, a cyclist should feel pretty safe, and could even abuse the system, weaving back and forth and causing jolts for the passengers around the bike.
On the plus side, robocars might enable two things. The first would be the creation of dedicated lanes, paths and even elevated guideways for use by both bicycles and narrow lightweight robocar trikes. I anticipate these lightweight vehicles will become very common, as they are the most efficient vehicle for short urban trips. Because they are light and small, it’s vastly cheaper to build dedicated pathways and elevated guideways for them. These guideways could be made open to bikes if there are passing zones, since the robocars would sustain higher speeds. (We have not yet convinced many US cities to dedicate a lot of space and money to bike-only paths, which would obviously be better for bikes.) Robocar-only lanes offer a cheap way to increase road capacity and give ultralight robocar users a faster, zero-congestion trip in the busiest areas, and thus make a lot of sense for cities. The bang/buck is as high as it gets in transportation development, and it encourages green transportation, as these trikes use less energy per person than transit systems do.
Another interesting development might be the bike-bot. As I envision it, this is a very small robot that’s able to clamp onto a bicycle and move the bike from place to place, using the bicycle’s wheels as well as its own. This could offer a world of “bikes on demand.” No matter where you are, you could summon up a bicycle in a short time, and drop it anywhere. (At your destination, you would insert the bike into a bike-bot that sent itself there ahead of your arrival, and the bike-bot would take the bike to its next rider.) This could make bicycle use very convenient, and would be good, efficient exercise for all who need it.
I also suspect that we’ll see ultralight robocars that feature pedals. With the pedals, the rider would have the option of exercising, and their energy would also go into powering the vehicle. The commute is a good time to exercise and watch videos or read. Not as much fun as recreational cycling, but more pleasant in other ways than cycle-commuting.
In the more distant future, when all cars are robocars, we will begin to see the conflict between the cars and the bikes and pedestrians described in the article cited above. The author is right that putting pedestrians on elevated bridges is not a good answer, and forcing bikes off valuable road is not good either. On an idealized robocar road, which has no parked cars on the side and just many lanes of one-way traffic, the presence of the cyclist does use up a lot more road capacity per person than the cars do. We’re a long way from that idealized capacity, but should we come to depend on it, we might see pressure to push the bikes away, or to charge them for the square-foot-seconds of road they use. That will be a political decision, and we may decide, many decades from now, that to encourage cycling it’s worth subsidizing it a bit.
In our effort to reduce the corruption in politics, one of the main thrusts in campaign finance regulation has been for transparency. Donations to candidates must be declared publicly. We want to see who is funding a candidate. This applies even to $100 donations.
While the value of such transparency seems clear — though how effective it’s been remains less clear — there are some things that have bothered me about it.
It’s quite a violation of privacy. We demand a secret ballot, but supporting a candidate gets us into a database and a lot of spam.
Some people are so bothered by this invasion of privacy that they actually refrain from making donations, even small ones, to avoid it.
What if we reversed that thinking? What if we demanded that donations to candidates be anonymous?
A special agency would be created. All donations would flow into that agency, along with which candidate they are meant for.
Only the agency would know who the money went to. The information would be kept securely until auditing confirmed the agency was distributing the money correctly, and then it would be destroyed.
Money would be given to candidates in a smoothed process with a randomized formula every few weeks, to avoid linking donations with dates. This might mean delays in getting some money to candidates.
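The smoothed, randomized disbursement could look something like the sketch below. This is purely illustrative: the escrow model, the 30-70% payout fraction and the per-period cadence are all made-up assumptions, chosen only to show how randomizing the payout decouples donation dates from disbursement dates.

```python
# Illustrative sketch of smoothed, randomized disbursement: donations
# accumulate in escrow, and each period a random fraction of each
# candidate's balance is paid out, hiding when any donation arrived.
import random

class Escrow:
    def __init__(self, seed=None):
        self.balances = {}               # candidate -> undisbursed funds
        self.rng = random.Random(seed)   # seeded only for reproducibility here

    def donate(self, candidate, amount):
        self.balances[candidate] = self.balances.get(candidate, 0.0) + amount

    def disburse(self):
        """Pay each candidate a random 30-70% of their escrowed balance."""
        payouts = {}
        for candidate, balance in self.balances.items():
            paid = round(balance * self.rng.uniform(0.3, 0.7), 2)
            self.balances[candidate] -= paid
            payouts[candidate] = paid
        return payouts
```

Because each period's payout is a random slice of an accumulated pool, an observer watching a candidate's receipts cannot work backward to when, or how much, any individual donor gave.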
While anybody could say that they donated, to offer, solicit, show or receive proof of donation would be a crime. An official method of hiding donations in corporate P&Ls would need to be established.
In general, all donations in any given period (a month or quarter?) must be given as a lump sum, with a list of how much to give each candidate. So even if you’re sure a donor would never give anything but party X, you don’t know which candidates in party X.
Now it would not be impossible to hide things entirely. If the Koch brothers say they gave a big donation, and you believe them, it’s fairly safe to say it wasn’t to Obama. At least for now, this will buy them more access to candidates on their side. But this gets harder over time. And the common corporate strategy of donating to both sides of a race to assure access no matter who wins becomes vastly less valuable. While you might convince somebody you are a regular donor and will pull your donation if you don’t get what you want, it becomes very hard for you to prove.
One of the useful attributes of electronic paper (such as E-Ink) is that it doesn’t take any power to retain an image, it only takes power to change the image. This is good for long-lasting E-readers, and digital signs are one of the other key applications of electronic paper, though today they are sold with a focus on the retail market.
Earlier, I wrote about concepts for a fourth screen which is an always-on wall computer that has effectively no user interface — its purpose is to show you stuff that is probably of interest to you based on time of day and who is looking at the screen. That proposal requires that the display be located where there is power, but there are many locations where wiring in permanent power is not a readily available option.
The typical e-book reader has all the hardware needed to act as a very low-power digital wall display. Such a display would have electronic paper and wifi. It would wake up only rarely to briefly check over the wifi (or better still, Bluetooth) whether there is new data to display, in which case it would download and display it. During these updates, it might also check for a new updating schedule.
You can do better than wifi, which usually requires a process of associating with an access point, getting an IP address, and then making queries. Bluetooth can connect with lower power. Even better would be a chip which is able to listen constantly at very low power for a special radio pulse (“wake on pulse”) from a powered transmitter, and then power on the rest of the system for data transfer. The panel could be put anywhere, and then a pulse generator would be put somewhere nearby that has power and is close enough to wake up the panel. (It might be something that plugs into a wall outlet and even does networking over the power lines.) This would allow the valuable ability to push information to the panel.
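In either the polling or wake-on-pulse design, the panel's main loop is the same duty cycle: sleep almost always, wake briefly, redraw only on change. Here is a sketch of that loop with the radio and e-paper calls stubbed out; `check_server` and `draw` are hypothetical callbacks standing in for real hardware.

```python
# Sketch of the panel's duty-cycled update loop. The e-paper property
# that matters: once drawn, the image costs nothing to keep, so we only
# call draw() when the content actually changes.
import time

def run_panel(check_server, draw, poll_interval_s, cycles):
    """Sleep most of the time; wake briefly to poll for new content."""
    shown = None
    for _ in range(cycles):
        content = check_server()          # brief radio wake-up
        if content is not None and content != shown:
            draw(content)                 # redraw only on change
            shown = content               # e-paper holds it unpowered
        time.sleep(poll_interval_s)       # radio and CPU back to sleep
    return shown
```

With a wake-on-pulse radio, the `time.sleep` poll would be replaced by waiting on the pulse, cutting the idle power further since the panel never wakes unless there is actually something to push.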
The panel’s battery would of course die in time, so there would need to be a way to swap the battery, or failing that, a means to charge it via a temporary extension cord or a battery-powered charger, or by taking the panel off the wall.
An immediate market for these would be the doors of meeting rooms, so that they can show the schedule for the meeting room. Many hotels and convention centers have screens to do this now, but due to the need for power and other integration, these tend to be quite expensive, while ebook readers are now in the $100 range.
But they would also be useful around the home for fourth screen applications, displaying useful info. They could also be put near fridges or stoves to display recipes and family information. Obviously a powered LCD display will be able to do more, but without the power constraint, more people might use these. They do need to be lit by external light, of course, but they are also visible in bright sun in a way that LCDs are not. And a product like this might well start eating into the retail digital signage market — anybody know what the price points are these days in that market?
The state of Nevada today approved regulations for self-driving cars in the state. Last year, Nevada passed a law outlining the path to these regulations, and their DMV has been working in consultation with Google, car makers and other parties to write them down. Today they were approved, allowing testing, certification and — someday — operation of vehicles in the state. Other laws are in consideration in other states inspired by the Nevada move. This is, frankly, much sooner than I anticipated.
In other news, a junker car race known as “24 hours of LeMons” (completely unrelated to Le Mans) has announced that self-driving cars may enter and are exempt from the normal requirement that cars cost no more than $500. The “X cedingly bad idea prize” of a million nickels (not quite as good as X prize purses of $10 million) probably won’t get many takers at first. This race has a sense of humour, but I’m not sure many folks would risk their expensive autonomous car on that track, or would feel it safe enough to drive among crazy amateur racing drivers. I suspect they don’t really mean it and just wanted to issue a press release, but it will be fun when robocar technology is common enough that garage tinkerers on low budgets can enter races like this.