This is a challenge to blog readers: come up with (or find examples of in practice) good systems to allocate students to parallel sessions based on their preferences. I’ve just concluded one round of this, and the bidding system I built worked OK, but it is not perfect.
The problem: Around 80 students. On 10 days over 4 weeks, they will be split into 3-5 different parallel sessions. Many sessions have a cap on the number of students, and more students will have them as a 1st choice than can fit. Some sessions can take many students and won’t fill up. The students can express their preferences as rankings, or with numeric values.
This is known in the literature as the allocation problem, and there are various approaches, though none I found seemed to fit just right, either by being easy to code or by having existing running code available. But I am keen on pointers.
Maximize student satisfaction/minimize disappointment. Giving a student their 1st choice is good. Giving 3rd or 4th choices is bad.
The system must be easy for the students to understand and use.
Fairness. This has many meanings, but ideally mismatches that can’t be avoided should be distributed. If somebody gets a 4th choice one day, they perhaps should have a better shot at a 1st choice on another day.
It’s nice if there’s a means of applying penalties to students who violate rules, sneak into sessions, etc. Academic violations could result in a lower chance of getting your 1st choice.
It should be flexible. Sessions may have to be changed, or may not be fully finalized until a week before they run.
It is nice to handle quirks, like duplicated sessions a student takes only once, but where the student might prefer one instance over another. There may also be prerequisites, so only students who took one session can take the sequel.
Things change and manual tweaking can be advised.
Rank sessions in order, 1st come, 1st served
This was used in the prior year. Much like a traditional sign-up sheet, students indicated their choices in order. If more students had a session as their 1st choice than would fit, the ones who filled out their form first got in. This gave the fastest responders priority over all 10 days, so it was changed to rotate each week to distribute who was first in line.
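For concreteness, the rotate-the-queue scheme described above can be sketched as a serial dictatorship whose priority order shifts each day. This is my own minimal illustration (names and data shapes invented), not the system actually used:

```python
def allocate(day, students, prefs, capacity):
    """Serial dictatorship with rotating priority (a sketch of the
    sign-up approach described above; all names are illustrative).

    students: list of student ids, in base priority order
    prefs:    {student: [sessions ranked best-first]}
    capacity: {session: seats available}
    day:      rotating the start index changes who goes first
    """
    seats = dict(capacity)  # work on a copy
    # Rotate the priority order so a different block of students
    # leads the queue each day.
    start = day % len(students)
    order = students[start:] + students[:start]
    assignment = {}
    for s in order:
        # Give each student their highest-ranked session with a seat left.
        for session in prefs[s]:
            if seats.get(session, 0) > 0:
                seats[session] -= 1
                assignment[s] = session
                break
    return assignment
```

For example, if all three students rank a one-seat session "X" first with roomy "Y" as backup, student "a" wins X on day 0 and student "b" wins it on day 1, spreading out the disappointment.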
A wrap-up of robocar news from the past couple of weeks:
Nevada governor Brian Sandoval rides in Google Car
After Nevada’s recent legislation directing their DOT to explore legal operations for robocars in the state, the governor “took the wheel” of a Google car. Very positive impressions from the governor and DMV head.
A new student robocar team has sprung up in India. They’re still early, but their goal of driving in the crazy Indian traffic is a daunting one. Robocars have many advantages at low speed, where the 360-degree vision of LIDARs lets them see more than a human can. Harder is modeling the behaviour of other vehicles and playing games of chicken.
More mainstream press articles
Mainstream press articles on the robocar future and the intermediate technologies are growing in number. Here’s SmartMoney on near-term technologies and a Slate piece that, like almost all mainstream press pieces, asks whether people are really willing to give up the freedom of driving. Perhaps I’m too immersed, but from my immersed perspective I have simply stopped wondering about this. There will be a few who think like the Dodge ad, but huge numbers of people keep asking me when they can get one.
IEEE conference at Stanford paints alternating views with optimism vs. long roadmaps
Last Saturday a small IEEE conference at Stanford covered car automation technologies, including a morning on autonomous vehicles with mixed views. Steven Shladover, for example, has a decades-long history in important projects like cars guided by embedded road magnets, ITS, cooperative cruise control and platooning, but he is highly skeptical of autonomous cars that drive among regular cars, insisting instead that dedicated lanes are the answer. He believes this will start by building dedicated lanes for express buses (BRT) — which is something there is political will to do in many cities — and then automating the buses in those lanes. Once this is done, cars can enter the lanes if they communicate properly with the other vehicles in the lane and the lane itself.
This infrastructure approach is simpler from a technical standpoint, but the building of new infrastructure is such a hard problem and point of slow progress that my bet, as readers know, is on robocars on ordinary streets. Without the BRT component, I view proposals for new robot-only lanes to be dead in the water. Still, it’s worth paying attention when somebody with lots of experience disagrees so fundamentally with your views.
Volkswagen, while having recently promoted their Temporary Auto Pilot, displayed a roadmap that was much slower, suggesting that having a car that could pick you up at the airport or park itself on streets was something we might see in 2028.
Another lesson from the conference was the extreme difficulty of introducing radical innovation through big automakers. Cars are perhaps the most complex product sold, as well as the most expensive consumer product for most. As a result the industry has created huge amounts of “process” to how it plans and innovates, and that process is not ready to accept much in the way of disruptive technology. As I wrote earlier about the radio as the potential place for innovation in cars, car makers are now considering the central console where the radio and other controls are found the “golden stack” and they want to be the provider of it. Especially because the stuff they sell there sells for a huge margin; people often pay $2000 for an in-car GPS that’s worse than what they get free in their phone or for $250 in the aftermarket.
German team gets permission for their robocar tests on city streets
The AutoNOMOS team at Freie Universität Berlin reports they have been approved to test on city streets. This testing will be similar to the testing Google has reported doing in California, with a safety driver and copilot in the car to monitor and take control in any situation that presents a safety risk. According to the New York Times, Google didn’t seek specific permission, but state officials did agree, when asked by the Times, with the interpretation that a vehicle with a licensed driver responsible for vehicle operations was legal.
Porsche trying to make a very smart cruise control
While not up to Volkswagen’s Temporary Auto Pilot, which combines ACC with lane-following, Porsche is developing a learning cruise control that will come to understand road curves and speeds and drive better as it learns.
Lots of exciting news, even in the slow summer season. Disclaimer note: The Google car project is a consulting client of mine.
I often see people say they would like to see solar panels on electric cars, inspired by the solar-electric cars in the challenge races, and by the idea that the solar panel will provide some recharging for the car while it is running and without need to plug it in.
It turns out this isn’t a tremendously good idea for a variety of reasons:
You’re probably not going to get more than a couple hundred watts of PV peak power on a car with typical cells. Even properly mounted on a roof in a sunny place like California, each peak watt delivers an average of about 5 watt-hours per day, so 200 watts gives you about 1 kWh. That’s good for around 4 to 6 miles on today’s electric cars. Not a huge range boost.
While thin-film panels don’t weigh a lot, the power they provide during actual driving would normally be only a minor boost. My math suggests that, for the power they deliver while the car is operating, they weigh more than the equivalent battery.
Normally you want panels tilted to the angle of the sun; on a car they will instead be mounted flat, cutting about 30% of their output.
Cars are often in the shade, even parked indoors. Unless you work to pick your parking to have sun all day, you’ll only get a fraction of the power.
If you do leave your car in the sun, in many places that means it will get quite hot, and you’ll burn up some of the solar energy cooling it down. (Indeed, the solar panels sometimes found on today’s hybrids and EVs don’t charge the battery; they just run a cooling fan.)
The worst one: If your battery is not somewhat discharged, it doesn’t have any place to put the solar energy, and so it is just thrown away. But due to range anxiety, people prefer their electric cars be kept full. It takes careful planning to use that energy.
A car is a very bumpy place, so you need more rugged panels than what you might put on a roof.
It is possible to get more than 200 W on a car — some of the solar challenge cars, which exist to be nothing but panels, have gotten around a kW by using high-priced, high-efficiency panels. But it’s still generally much better to just put the panels on a roof where they will realize their full potential, feed the grid, and charge from the grid.
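The back-of-envelope numbers above can be checked directly. The peak wattage and the per-mile consumption figures below are rough assumptions of mine, not measurements:

```python
# Back-of-envelope check of the solar-car figures discussed above.
peak_watts = 200                  # plausible PV peak on a car with typical cells
wh_per_peak_watt_per_day = 5      # sunny-climate daily average, per the text

daily_wh = peak_watts * wh_per_peak_watt_per_day
print(daily_wh, "Wh/day")         # 1000 Wh, i.e. about 1 kWh

# Range boost at roughly today's EV consumption vs. a more efficient EV.
for wh_per_mile in (250, 170):
    print(round(daily_wh / wh_per_mile, 1), "miles/day")

# Mounting flat instead of tilted toward the sun cuts about 30%.
print(round(daily_wh * (1 - 0.30)), "Wh/day if mounted flat")
```

At 250 Wh/mile and 170 Wh/mile the 1 kWh daily harvest works out to 4 and about 6 miles of range, matching the "4 to 6 miles" figure, and dropping to roughly 700 Wh once the flat-mount penalty is applied.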
However, on Friday I was teaching a class on the future of Robocars to my students at Singularity University, and in the exercises some students wondered if they might do something for solar-powered cars. (I was impressed, since the students, having had only a short time to think about the issue, had to work to bring up something new.)
Robocars might solve some of the problems above, and thus possibly make sense as a place to put panels.
A robocar parks itself and can move. So one with a solar panel can move around to make sure it’s always in the sun, and that the sun is striking it from the right angle. It can’t move too far or too often without wasting some of the power, but it can do something.
When the batteries get so full that they are not making proper use of the solar energy, a robocar can find a charging station, not to charge but rather to sell excess power back to the grid and other cars. (This presumes charging stations are set up this way.)
Robocars could dock with other robocars that are more discharged and offer them the extra solar power, no charging stations involved — though fancy robotics are needed on the charging interface, or human beings who can do the connections.
If a robocar has an actuator that can tilt the panels, it can do even better. While an ordinary car could have this, an ordinary car would not have the ability to rotate in the plane of the ground to track the sun without another actuator.
It’s still not great, but it might improve things. Generally it still may be better to have the panels on rooftops and get the most from them. However, when we start thinking about super lightweight cars, cars that travel for under 100 watt-hours/mile, as well as higher efficiency panels, we might get some value if the panels are light.
It’s also expensive to install panels on top of existing facilities. It turns out that while panels are dropping below $1/watt next year thanks to cheap Chinese capital and manufacturing, the cost of installation is still over $2/watt. The cost of installation on newly manufactured buildings — or cars — can be cheaper because it’s designed in from the start. The car already has a complex electrical system, while houses need to add one if they go solar.
People really are in love with the idea of a solar powered car. It’s not really possible to go green this way right now, but the future might bring something interesting.
I discovered this year that something I’ve seen a zillion times, the standard map of Canada, features a giant, brain-eating zombie. I’m naming the zombie “Hudson” because that’s the Bay that makes up most of him. He’s a plump undead with stubby legs, a big blank eye (Prince Charles Island) and a slack jaw, and it looks like Newfoundland is in trouble.
The way the human mind likes to find figures and faces in natural patterns is called pareidolia, but what surprises me is that in spite of seeing this map so often, I never saw Hudson until recently, and based on web searches, neither has anybody else.
But now I find it difficult to look at the regular map, without highlights, and not see the zombie.
He is so big he may be a cosmological zombie. They eat “branes, branes.”
The latest JD Power survey on car satisfaction has a very new complaint that is now the second most annoying item to new car owners: problems with the voice recognition system in their hands-free interface. This is not too surprising, since voice recognition, especially in cars, is often dreadful. It also reveals that most new tech has lots of UI problems — not every product is the iPod, lauded from the start for its UI.
But one interesting realization in the study is that users have become frustrated at having too many devices with too many UIs. Their car (which now has a touchpad and lots of computer features) uses a different UI from their phone and computer and tablet and whatever. Even if the car has a superb UI, the problem is that it is different, something new to learn and remember.
One might fix this by having the same platform, be it iOS or Android, on several of the devices, but that’s a tall order. Car vendors do not want to make a phone on one platform and tick off people used to the other platform.
The answer lies in something the car makers don’t like: Don’t put much of their own smarts in the car at all, and expect the user to slot their own mobile phone or tablet into the car. This might be done with something like Nokia’s “Terminal Mode” where the car’s screen and buttons can be taken over by the phone, or by not having a screen in the car at all, just a standard mounting place.
Some time ago I wrote that cars should stop coming with included radios, as was the case 30 years ago, and let the slot in the dashboard where the radio and electronics go become a center for innovation — in particular, innovation at the speed of consumer and mobile devices, not at the speed of car companies. But there are too many pressures against this happening. Car companies get to charge a lot for fancy radio and electronics systems in their cars, and they like this. And they like the control over the whole experience. But as they get more complaints they may realize that it’s not the right thing for them to be building, especially when the car (and the in-dash system) lasts 10 to 15 years, while most consumer electronic devices are obsolete in 1-2 years.
There aren’t that many makes of cars, nor so many mobile platforms, so making custom apps for the car and the mobile platform isn’t that hard. In fact, I would expect you would see lots of competing aftermarket ones if they opened up the market to it. And open source ones too, built by fans of the particular cars.
An update on the backlog of robocar related news caused by my recent travel and projects:
Many people have noticed the new law recently passed in Nevada which directs the Dept. of Transportation to create guidelines for the introduction of self-driving cars on Nevada roads. Here is the text of the law. Because Google, whom I consult for on robocars, helped instigate this law, I will refrain from comment, other than to repeat what I’ve said before: I predict that most transportation innovation will take place in robocars because they will be built from the ground up and bought by early adopters. The government need merely get out of the way and do very basic facilitation. This is very different from things like PRT and new transit lines, which require the government’s active participation and funding.
You’ll find lots of commentary on the story in major news media.
A new paper on trusted traveler programs from RAND Corp goes into some detailed math analysis of various approaches to a trusted traveler program. In such a program, you pre-screen some people, and those who pass go into a trusted line where they receive a lesser security check. The resources saved in the lesser check are applied to give all other passengers a better security check. This was the eventual goal of the failed CLEAR card — though while it operated it just got you to the front of the line, it didn’t reduce your security check.
The analysis shows that with a “spherical horse” there are situations where the TT program could reduce the number of terrorists making it through security with some weapon, though it concludes the benefit is often minor, and sometimes negative. I say spherical horse because they have to idealize the security checks in their model, just declaring that an approach has an X% chance of catching a weapon, and that this chance increases when you spend more money and decreases when you spend less, though it has diminishing returns since you can’t get better than 100% no matter what you spend.
The authors know this assumption is risky. It turns out there is a form of security check which does match this model: random intense checking. There, the percentage of weapons caught is pretty closely tied to the frequency of the random check. The TTs would just get a lower probability of a random check. However, very few people seem to be proposing this model. The real approaches you see involve things like the TTs not having to take their shoes off, or somehow bypassing or reducing one of the specific elements of the security process compared to the public. I believe these approaches negate the positive results in the RAND study.
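To see how the "sometimes negative" result can arise, here is a toy version of the random-intense-check model. This is my own sketch with invented numbers, not the RAND model itself:

```python
def catch_prob(budget, n_trusted, n_regular, trusted_rate, p_attacker_trusted):
    """Toy model of random intense checking with a trusted-traveler line.

    A fixed budget of intense checks is split between the two lines; with
    purely random checking, the chance of catching a weapon equals the
    check rate in the attacker's line. All parameters are illustrative.

    budget:             total intense checks available
    n_trusted:          passengers in the trusted line
    n_regular:          passengers in the regular line
    trusted_rate:       check rate given to trusted travelers
    p_attacker_trusted: chance the attacker passed the pre-screen
    """
    checks_on_trusted = n_trusted * trusted_rate
    # Whatever the trusted line doesn't use goes to the regular line.
    regular_rate = min(1.0, (budget - checks_on_trusted) / n_regular)
    return (p_attacker_trusted * trusted_rate
            + (1 - p_attacker_trusted) * regular_rate)
```

With a budget of 100 intense checks for 1,000 passengers, no TT program catches the attacker 10% of the time. Screening 200 trusted travelers at a 2% rate raises that to 12% if attackers never pass the pre-screen, but drops it to 7% if they pass it half the time — minor benefit in the good case, negative in the bad one.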
This is important because while the paper puts a focus on whether TT programs can get better security for the same dollar, the reality is I think a big motive for the TT approach is not more security, but placation of the wealthy and the frequent flyer. We all hate security and the TSA, and the airlines want to give better service and even the TSA wants to be hated a bit less. When a grandmother or 10 year old girl gets a security pat down, it is politically bad, even though it is the right security procedure. Letting important passengers get a less intrusive search has value to the airlines and the powerful, and not doing intrusive searches that seem stupid to the public has political value to the TSA as well.
We already have such a program, and it’s not just the bypass of the nudatrons (X-ray scanners) that has been won by members of Congress and airline pilots. It’s called private air travel. People with their own planes can board without any security at all, for themselves or their guests. They could fly their planes into buildings if they wished, though most are not as big as the airliners from 9/11. Fortunately, the chance that the captains of industry who fly these planes would do this is tiny, so they fly without the TSA. The bypass for pilots seems to make a lot of sense at first blush — why search a pilot for a weapon she might use to take control of the plane? The reality is that giving a pass to pilots means the bad guy’s problem changes from getting a weapon through the X-ray to creating fake pilot ID. It seems the latter might actually be easier than the former.
This blog has been silent the last month because I’ve been on an amazing trip to Botswana and a few other places. There will be full reports and lots of pictures later, but today’s idea comes from experiments in shooting HD video using my Canon 5D Mark II. As many people know, while the 5D is an SLR designed for stills, it also shoots better HD video than all but the most expensive pro video cameras, so I did a bit of experimenting.
The internal mic in the camera is not very good, and picks up not just wind but every little noise on the camera, including the noise of the image stabilizer found in many longer lenses. I brought a higher-quality mic that mounts on the camera, but it wasn’t always mounted because it gets a little in the way of both regular shooting and putting the camera away. When I used it, I got decent audio, but I also got audio of my companion and our guide rustling or shooting stills with their own cameras. To shoot a real video with audio I had to have everybody be silent. This is why much of the sound you hear in nature documentaries is actually added later, and very often just created by Foley artists. I also forgot to turn on my external mic, which requires a small amount of power, a few times. That was just me being stupid — as the small battery lasts for 300 hours I could have just left it on the whole trip. (Another fault I had with the mic, the Sennheiser MKE 400, was that the foam wind sleeve kept coming off, and after a few times I finally lost it.)
Dodge has released a few interesting commercials for its Charger muscle car, somewhat prematurely pushing it as the antithesis of a robocar. Most amusing is the second ad which features an ugly car with a literal robot in the driver’s seat (something also seen in the Total Recall and I, Robot movies.) The first ad just has visuals of the car but actually mentions the Google car as one of the signs of increasing robot control. For some reason, the car, rather than the people behind it, is named the “leader of the human resistance.”
It’s easy to understand the sentiment behind these ads, particularly when you are trying to market a car as a powerful “man’s car” oriented to the thrill of driving. The people who want the car to drive itself are not like you, you want an exciting drive and this is the car for you, it says. (Other ads decry an online test drive, and cars that get lots of “boring” miles per gallon.)
The ad does pose an interesting question. When I talk, I often get people who say that they have no interest in a robocar (and that Americans won’t have interest in them) because they love to drive and would not give it up. I often ask back, “so do you love to commute?” It’s also clear from the example of New York City that Americans will certainly give up driving if it’s the right choice for their locale. People who grew up in L.A. don’t try to keep their car if they move to Manhattan, they do what makes sense for their new area.
Driving is fun, of course, particularly on an interesting road with a powerful car. Indeed, many find driving a stickshift even more fun in such circumstances, though they are almost gone from U.S. cars. (I’ve mostly owned stickshift cars though when I bought my most recent I ended up with an automatic where you can manually change the gears. But I find I don’t use the manual mode.) Being a passenger on windy roads is not nearly so much fun, and even makes many people a bit queasy, though this almost never happens to the driver even with the same moves.
Obviously I suspect the Dodge ad is wrong when it says that “robots will never take our cars.” But human driven cars will also exist for a long time, and not just in the muscle car market. Many people will enjoy — or even need — a car they can take control of when the road gets “interesting.” But in our ordinary driving, the road itself is rarely interesting. We may well take special trips where the software drives us to the fun road and we take over after that, though with a better safety system. On the other hand, when it comes to scenic drives, people will want to go slowly and be passengers, getting a chance to look out the windows and enjoy the view rather than concentrate on the road. We may see “tourist cars” in popular tourist spots which are either convertibles or have nearly transparent tops — reminding us perhaps of the bubble roof cars from the Jetsons — for those whose focus is on the view.
There will be a sector of the market that wholly buys into Dodge’s tongue-in-cheek message. I’m pretty confident in predicting that the opposite segment that embraces the technology will be more than large enough for it to find all the early adopters it needs. As people get used to the idea, it will then go mainstream, even if it never captures everybody.
Of course, I’m almost certain the Dodge Charger, like all other cars, is full of processors with tons of code. The fuel mixing system that gives it its power is computerized in a typical car. One technology “leading the resistance” against another.
This does mean a lot of changes for the automobile industry, as I wrote in my article on car design changes. Today a car’s price is remarkably correlated with its horsepower, which is part of the reason Dodge wants to advertise this way. Even when luxury is the real product, you will still find extra horsepower. This may change as people want comfort in their ordinary car, and only want horsepower in the vehicle they rent for the weekend.
It’s very common to use mobile phones for driving activities today. Many people even put in cell phone holders in their cars when they want to use the phones as navigation systems as well as make calls over a bluetooth. There’s even evidence that dashboard mounting reduces the distracted driving phenomenon associated with phones in cars.
Nokia and others are pushing one alternative for the cars that have dashboard screens. This is called “Terminal Mode” and is a protocol so the phone can make use of the display, buttons and touchscreens in the car. Putting the smarts in the phone and making the dash be the dumb peripheral is the right idea, since people upgrade phones frequently and cars not nearly so much. The terminal mode interface can be wireless so the phone does not have to be plugged in, though of course most people like to recharge phones while driving.
Terminal mode will be great if it comes, but it would be good to also push for a standard port on dashboards for mounting mobile phones. Today, most mobile phone holders either stick to the windshield with a suction cup, or clamp onto the vents of the air conditioner. A small port or perhaps flip out lever arm would be handy if standardized on dashboards. The lever arm would offer a standard interface for connecting a specific holder for the specific device. In addition, the port would offer USB wiring so that the holder could offer it to the phone. This would offer power at the very least but could also do data for terminal mode and some interfacing with other elements of the car, including the stereo system, or the onboard-diagnostics bus. Access to other screens in the back (for playing video) and to superior antennas might make sense. While many phones use their USB port to be a peripheral to a PC, some have “USB to go” which allows a device to be either master or peripheral, allowing more interesting functions.
Even with terminal mode, there could be value in having two screens, and more buttons, though of course apps would have to be developed to understand that. However, one simple thing is that a phone could run two apps at once on two screens (or even two apps at once on the larger screen of the car) which would actually be pretty handy.
While I believe airlines could sell the empty middle for somewhere in the range of 30-40% of a regular ticket, this still has issues. In particular, are they really going to bump a poor standby passenger who had a cancelled flight and make them stay another night so that people can get a more comfortable seat?
One idea is to allow the sale of empty middles by Dutch auction. In effect this would say, “If there are going to be empty middles on this plane, those who bid the most will get to sit next to them.” If this can be done, it’s a goldmine of extra revenue for the airline. What they sell costs them nothing — they are just selling the distribution of passengers on the plane. If the plane fills up, however, they sell it all and nobody is charged.
The Dutch auction approach would let each passenger make an offer. If there are 5 empty middles, then the 10 people who sit next to them win, but they all pay the 10th-highest bid price. If only 9 passengers bid, the 10th-highest price is zero, and everybody pays zero — which is what happens today, except it’s semi-random. While this may seem like a loss for the airline, many game theory results suggest that such auctions often bring the best outcome, as they make both sides happy, and people bid more, knowing they will pay a fair price if they win.
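The clearing-price logic above can be sketched in a few lines. This is my own illustration (function and variable names invented), and it ignores seat adjacency:

```python
def middle_seat_auction(bids, empty_middles):
    """Uniform-price auction for empty middle seats, as described above.

    Each empty middle benefits the two passengers beside it, so there
    are 2 * empty_middles winning slots; every winner pays the lowest
    winning bid. With fewer bids than slots, the price falls to zero,
    which is what happens today for free.
    """
    slots = 2 * empty_middles
    ranked = sorted(bids, reverse=True)
    winners = ranked[:slots]
    price = ranked[slots - 1] if len(ranked) >= slots else 0
    return winners, price
```

So with 2 empty middles (4 slots) and bids of 50, 40, 30, 20 and 10, the top four bidders win and all pay 20; if only 3 passengers bid for those 4 slots, everyone pays zero.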
(On the other hand, airlines are masters at having two people pay vastly different prices for exactly the same thing and have managed to avoid too much resentment over it.)
There is one huge problem to solve: How do you arrange that matched bidders are sitting together to share the empty middle? Each empty middle benefits two passengers.
First of all, the TED talk given by Sebastian Thrun, leader of the Google self-driving car team (disclaimer: they are a consulting client) is up on the TED web site. This is one of the short TED talks, so he does not get to go into a lot of depth, but notable is one of the first public showings of video of the Google car in action on ordinary city streets. (The first was at PodCarCity, but video was not made available on the web.)
At TED the team also set up a demonstration course on the roof of a parking lot and allowed some attendees to ride and shoot videos, many of which are up on the web. While the car performs well zooming through a slalom course, and people have a lot of fun, the real accomplishment is the video you see during the talk.
Another “City of the future” video has appeared featuring robocars prominently. This Shanghai 2030 video plays out a number of interesting robocar aspects, though their immense elevated road network reminds me more of retro futurism. A few things I think will be different:
The people in the car sit side-by-side. I think face-to-face is much more useful. It’s more pleasant for conversation, and it allows for a narrower car which has huge advantages in road footprint and drag. Some people can’t stand facing backwards, and so there will still be side-by-side cars if you have two people like that, but I think a large fraction of cars will move to face-to-face, either narrow (for 2) or wide (for 3 or more.)
The video shows cool displays projected onto the windscreen. This “heads up” sort of display makes sense if you have to keep your eyes on the road while using the screen, but in these cars, the people don’t. On the other hand it’s true that some people get motion sick looking down while riding, but you can also put an opaque screen in the middle of the window in a robocar.
It’s National Robotics Week with lots of robot related events. In the Bay Area on Thursday, an all-day robotics demo day for kids and adults will take place at Stanford’s robotic car lab, so people will get a chance to see Junior and other Stanford robocars there.
The trend continues — last year U.S. road fatalities dropped again to 32,788. That’s a steady decline from over 43,000 five years ago. And this is in spite of total vehicle miles going up. As a result, the death rate per 100 million miles is now 1.09, the lowest it has been in 60 years.
That’s very good news, though many forces fight for the credit. The leading contender seems to be simply that cars are getting safer in crashes, with better crumple zones and air bags, and more people wearing seatbelts. Medicine has also gotten better. Some credit will also go to better cars with safety systems like anti-lock brakes, crash warnings and lane-departure warnings — precursors to robocar technology — but it would be wrong to assume these are a big component. It’s also worth noting that this happens in spite of the rise of people talking and texting while driving, though the Transportation Secretary gives some credit to the recent laws banning this. But that doesn’t explain why the drop began in 2005.
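As a sanity check, the two figures above together imply the total vehicle miles driven (a quick derivation of mine, not an official statistic):

```python
# Total vehicle miles implied by the fatality count and per-mile rate above.
fatalities = 32_788
rate_per_100m_miles = 1.09

total_miles = fatalities / rate_per_100m_miles * 100_000_000
print(f"about {total_miles / 1e12:.2f} trillion vehicle miles")
```

That works out to roughly 3 trillion miles, which is the right order for annual U.S. driving and shows why a small change in the rate moves the absolute death toll so much.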
It’s also odd that while fatalities drop almost everywhere, they’re actually up in New England by 18% and by 4% around the midwestern Great Lakes, and generally up around the north-east.
I never combined the two, however. In the citizen-examiner approach, when you apply for a patent you are also put into a pool of available experts in your field to assist examiners with other patents in the field. You need to do this several times (and get anonymously graded by your peers as having done a decent job) in order to get the patent you applied for.
While this helps the patent examination in so many ways, by providing an automatically scaling pool of skilled labour, I had not addressed how to deal with the novelty test.
I propose that when filing a patent, first the applicant must file a clearly written statement of the problem being solved by the patent. This would be public, and citizen examiners would be asked to consider it and see whether any obvious solutions come to mind, as people skilled in the art. In addition, they would grade the problem statement for clarity.
When the actual filing is disclosed, a second review would be done both by the examiner and the citizen examiners as to whether the claimed invention really does solve the problem, and whether it had been a clear statement of the problem and not an attempt to obfuscate. I already plan for the citizen examiners to grade the patent itself on how clearly it teaches the invention. Patents which do not have cohesive problem statements and clear teaching of the invention would be returned in an office action for revision.
The idea behind the problem statement is a test both for obviousness and novelty of the problem. In many cases, experts in the field will come up with proposed solutions to the problem quickly. If they come up with the invention-about-to-be-disclosed, then it’s clear that it was obvious to one skilled in the art. If nobody comes close to the invention, it is evidence that it is not obvious, though there would still be general judgement of that, as well as prior art searching by examiners, citizen examiners and the public.
Today, patent lawyers earn their keep in part by writing patents in non-clear ways, to make them hard to find and understand. That is against the goal of the patent system, which is to reward those who disclose their inventions, teach how to build them and leave them after 17 years as a legacy to the world. While any one examiner may not make a good decision, a panel of experts in a field can provide some solid evidence on whether the problem is hard and the invention is novel.
Getting such proposals into patent reform is hard. Big patent holders want to make it easy to build up their patent portfolios. Many would fight meaningful reform like this. But perhaps there is a way to get it kickstarted. It might be interesting to see a web site where new patents are put forward and examined by ordinary citizens who care. Examiners could of course look at that, but they would not be obligated to. There are so many patents that a lot would pass by without attention. There are sites that report on new patents, but what we perhaps need is a site like “reddit” or “digg” for patents which takes the whole patent inflow and lets people vote up patents of interest for examination and comment by others. The most interesting ones would get more attention and more people searching for prior art and commenting. If a little money was involved they might even get prizes, though that would take a wealthy patron willing to spend money for patent reform.
To sum up the proposed patent process:
Applicant files/publishes “statement of problem.” Also declares the discipline/areas of expertise.
The public, and a set of citizen examiners chosen from the pool in that subject area write comments on the problem and propose solutions over the course of a few weeks.
The patent filing is studied by the examiner. She picks some suitable citizen examiners without apparent conflict of interest, as well as one likely competitor, if available. Chosen examiners agree or beg off; if they beg off, alternates are selected.
Examiner and citizen assistants consider the patent, how well it is written and do searches for prior art. The “adversarial” examiner does only prior art search.
The patent is considered in the light of prior art. Novelty and how well it addresses the pre-stated problem are judged, as well as clarity. Obfuscated patents, as judged by the examiner based on views of the assistants, are rejected in office actions. The patents can be re-filed but the problem statement can’t.
If a patent is found to be clear, novel and well tied to the problem, and non-obvious, including that nobody who examined the problem came up with too similar a solution, the patent can be granted.
The examiner and other citizen examiners (including some who did not work on this particular patent) grade the work of the citizen examiners, to assure they were thorough, diligent and honest. Those who were earn a credit towards their obligation.
Citizen examiners are almost unlimited, in that we can ask each one to do multiple jobs to get their patent, within reason. Small inventors can get less duty than large ones, and anybody (but particularly large companies) can have another qualified expert do the work if the main inventor is too important. But I imagine the job as being about 2-3 days of work, researching, reading and commenting, and 5x of that is pretty tolerable for somebody wanting a patent.
As such we could also, more slowly, put citizen examiners on to re-examining other patents that are challenged. We would not revoke patents that met the rules of their day, but if further examination shows they had prior art or documented obviousness, that should be considered.
If you’re going to have a meeting with people in a meeting room and one or more people calling in remotely, I recommend trying to have a remote multi-party video call, or at the very least a high-fidelity audio call, and avoid the traditional use of a phone conference bridge to a speakerphone on the meeting room table. The reality is the remote people never feel part of the meeting, and no matter how expensive the speakerphone, the audio just doesn’t cut it. There are several tools that can do a multi-party video call, including Oovoo, Sightspeed, Vsee and others, but for now I recommend Skype because it’s high quality, cheap, encrypted and already ubiquitous.
While you can just set up the meeting room with Skype on a typical laptop, it’s worth a bit of extra effort to make things run more smoothly in the meeting room, and to get good audio and video. Here are some steps to take, in rough order of importance.
You should upgrade to the latest Skype. Use “Help/Check for upgrades” in Skype or download from their web site.
Create or designate a “conference master” account. (Skype no longer needs a Premium account for this but calls are limited to 4 hours/day and 100hrs/month.) I also recommend you have some money in the Skype account for outbound calling, see below.
The conference master should learn the UI of multi-party calling. They must be on Windows or a Mac. (Sadly, for now, only Windows is recommended.) The UI is slightly different, annoyingly. Read Skype’s instructions for windows or Mac. They also have some how-to videos. The hard reality is that the Windows version is more advanced. Don’t learn the UI during the conference — in particular make sure you know how to deal with late callers or re-adding bounced people because it can happen.
The conference master should have a decently high-powered PC, especially if having 4 or more remotes.
Notify all participants of the name of the conference master. Have them add the conference master to their contact list in advance of the conference. Confirm them as buddies. Alternately, if you know their Skype names, add them and get them to confirm.
Create, in advance, a call group for the conference.
Here are the typical problems that we see if the meeting room just uses a laptop on the table for the video call:
The camera is low down on the table, and laptop quality. It often captures backlights and looks up at people. Half the people are blocked from view by other people or stuff on the table.
The microphone is at the far end of the table, and it’s a cheap laptop mic that picks up sound of its own fan, keyboard and possibly projector. When it sets levels based on the people at that end of the table, it makes the people at the other end hard to hear.
You need the sound up loud to hear the remote folks, but then any incoming calls or other computer noises are so loud as to startle people.
People haven’t tried the interface before, so they fumble and have problems dealing with call setup and adding new callers or returning callers. This frustrates the others in the room, who just want to get on with the meeting.
Some folks have to come in by telephone, but you can’t really run a speakerphone and a computer conference talking speaker-to-microphone very well.
Having a group videoconference, or participating by video in a group meeting (where several people are in a meeting room, and one or more others are coming in via video) is quite useful. It’s much better than the traditional audio conference call on a fancy speakerphone. The audio is much better and the video makes a big difference to how engaged the remote parties are in the meeting.
There are many tools, but right now I recommend Skype, which can handle around 5 remote parties if you buy a one-day or monthly premium subscription. In theory it does 10 but they recommend 5, which means the meeting room and 4 others. Only one party (the meeting room account, typically) needs to have the premium subscription. The instructions for the meeting room are slightly more complex — this is a guide for the remote parties calling in. I also recommend Google Hangout, which handles 10 smoothly.
The advice below is definitely ordered. Even if you just do the first few it helps a bunch.
Upgrade to the latest Skype, at least version 5 is needed
Know the conference master’s account and have it on your contacts list
Get a headset
Get a headset
Mute your audio when not speaking, and definitely if you ignored the headset bit
Have a nice webcam and avoid having the light come from behind you
Use Windows over Mac, and your machine with the most CPU power
Make sure you can see the chat window so you can do IMs without disrupting the meeting
There’s a bunch of stuff here. It’s worth doing because you will be much more engaged in the meeting. You will know who is speaking and see what’s going on. Your voice will be clear and loud. You’ll be able to interrupt and engage in dynamic conversation. You’ll be in the meeting and not just an audience. You need to do the extra work because the people who physically went don’t want to put up with too much to make it easier for those phoning it in.
Upgrade to the latest Skype
The multi party video works only with version 5 for Mac or Windows. If you have a lower version, or you are on Linux (curse you, Skype) you will only come in by audio. That’s still better than coming in by a phone bridge. If you have Skype just go to the Help menu and tell it to check for upgrades (File menu on the Mac.) Hate to say it, but if you have a choice, use a Windows computer. Skype develops first on Windows and the other versions always lag behind. Some useful features are only on Windows.
So before the meeting, be sure to upgrade, and get to know the new UI if you have not seen it before — Skype changed their UI a bunch from 4 to 5.
Become a “contact” with the conference master.
Make sure you are buddies (contacts) with the premium account that will be the master for the conference. That doesn’t have to be the meeting room, but it usually is. (Optionally you can add other participants to your list.) You will normally get an E-mail with the ID, or perhaps a contact invite. You can also search on Skype for most users.
Get a headset and get good audio. Really.
Skype does a very good job of speakerphone and echo cancellation in two-way calls. But it’s still much better if you have a headset, or failing that, headphones and a mic. The meeting room has no choice but to use speakerphone mode, which is an even bigger reason for you to get the headphones.
When you have a headset, or at least headphones and a clip-on mic or directional table mic near your mouth:
The sound doesn’t go out your speakers and right into the mic. That means Skype does not have to echo-cancel so much. When it echo cancels it makes it harder to talk while somebody else is talking. With the headset you can be more two-way, and that gives you more presence at the meeting.
Your mouth is close to the mic, so the mic adjusts its level down, and all background noise in your environment is thus not amplified nearly as much.
If you use the mic in your laptop, it really hears keyclicks, mouse clicks and even the fan too well. In fact, you dare not type without muting your mic first.
Do not use a bluetooth headset — they limit you to phone quality if you use the microphone. Hi-fi bluetooth headphones plus an independent mic will work fine.
You might want to test your audio by calling somebody, or calling the “Skype Test Call” address that goes into every Skype contact list by default.
Mute your sound if you go away, or type, or are just listening for a while
The high quality audio of computer calls is really valuable. It helps everybody understand everybody, and makes it much clearer who is speaking. This comes with an ironic curse — it picks up all sorts of background sounds that regular telephones don’t transmit. You would be amazed what it picks up. Mouse clicks. Keyboard clicks. Grunts. Eating. People in the next room. Planes flying by. (It does less of this if you use a headset and manual volume setting.)
If you are going to be sitting back and listening, mute your own microphone while doing this. If you leave your computer definitely mute. If you leave to take a phone call, it’s even more important. I’ve been in calls where the person leaves their PC and we hear them eating, or on a phone call or talking to somebody else where they are, having forgotten to mute. And there can be no way to tell them to fix it because they took their headphones off. Skype has a microphone icon you can click on to mute your mic. It’s red when muted.
If you ignore all this advice and are using the microphone built into your laptop you must not type or move your computer around without muting first. Frankly it’s good to mute to type even when you have that headset, but mandatory if you don’t.
Extra credit if you have a headset: Go into the audio properties and set a manual level for your mic at your normal speaking voice. Then it won’t try to turn up the gain when you are not talking.
Next, consider your lighting
Nothing improves the quality of a webcam image more than decent lighting. Try to set things up so there isn’t a bright light or window in the background behind you, and ideally have a light shining on you from behind and above your monitor. This is worth more than the fanciest webcam. Be wary with laptops, since the webcam pointing up at you often catches ceiling lights.
A nice webcam does not hurt
While the webcam in your laptop will work, and do OK with good lighting, you can do a lot better. The laptop cam is usually low on the desk and looks up your nose. Higher end webcams do much better in bad lighting situations. The Logitech quickcams that Skype rates as “HQ” really are better than the others. You might want to get one if you are doing video calls frequently.
By the way, when the call starts, be sure to make it a video call, or if you are called, “accept with video.” Or you can click on the video button to start your video up.
Possibly turn off your video at certain times
Great as the multi party video is, the more people who use it, the more CPU and bandwidth everybody needs. So if you are just sitting back and not being super active, consider clicking on the “My Video” button to turn off your own video during those periods. Of course if you are going to do some extensive speaking be sure to turn it on again — it’s relatively fast and easy to turn on and off. In practice, unless everybody has fast machines, you don’t want to go above much more than 5 videos, so some people should remain invisible (but still getting HQ audio and seeing the meeting room.)
Optional: Cute video tricks
In Windows, you can turn on the “Dynamic View” and Skype will make the person (or people) who is speaking larger on your window. Handy if you have a big call which makes the individual videos small. Full screen mode (but leave chat visible) is also a good idea unless you want to surf and read e-mail during the meeting. Be warned — we can see you doing that. And your keyboard clicks come through so you may want to mute.
Instead of dynamic view (which jumps around) you can also just click on which video you want to be big. In many cases the best idea is to just click on the meeting room video, which you want to be big because there are many people, and the single-head videos are fine staying small.
Not sending video? Be sure to set a picture in your Skype profile. Others will see this picture highlighted when you talk and know it’s you talking. Even if you are sending video this is a good idea as video sometimes fails.
When problems occur — have chat open
You may get disconnected. The latest Skype tries automatic callback if it was not an explicit disconnect. If you call back the conference master, they have to be careful that they accept your call into the conference, because it’s unfortunately easy for them to just accept it like call-waiting, and put the whole conference room on hold. (This is a bad design, I think.)
Be sure to display Skype’s chat window and be ready for chats and IMs about problems. That way conference problems can be fixed without disturbing the whole meeting. But be sure to mute before you type. The chat window usually goes away in full screen mode, unfortunately, but if you hear little bleeps you don’t understand, it could be you are getting chat.
Hard truth is, some problems in Skype are best solved by stopping and restarting video, or sometimes having a person leave and re-enter the call. Or sometimes even restarting the whole call.
If you are on an ordinary phone
People on phones can join the call. The call manager will tell you one of these methods:
The call manager will have a Skype-in number. Just call it.
The call manager may have created a traditional conference dial-in number. Call that and do the rigmarole.
It is often easiest if the conference manager calls you — if so, make sure they have your numbers. Landlines are better, of course, and vastly less expensive than mobiles outside North America.
In the Meeting room
The situation in the meeting room is different. There you must use speakers with the volume up, and a microphone. Try to put them on the table, particularly the microphone. A quality webcam is much more important here, and the webcam should be up high, at the height of a standing person looking down at the table, so it can see everybody. If you use a laptop on the table the view is dreadful and people block those sitting further down the table. Consider getting USB speakers so you can have two speakers (internal and USB) and configure Skype to send call audio out the USB speakers (which you set loud) but have all other sounds (including Skype call tones) go out the internal audio and speakers which you set down low. Otherwise with the volume way up any PC sounds will drive people nuts.
Back at the start of this blog, in 2004, I described a product I wanted to see, which I called the Paperless Home Scanner. Of late, several companies have been making products like this (not necessarily because of this blog of course) and so I finally picked one up to see how things pan out.
Because I’m cheap, I was able to pick up an Asian-made scanner sold under many brand names for only $38 on eBay. This scanner, sometimes called the Handyscan or PS-4100 or similar numbers, can also be found on Amazon for much more.
The product I described is a portable sheetfed scanner which runs on batteries and does not need to be connected to a computer because it just writes to a flash card. This particular scanner isn’t that because it’s a hand scanner you swipe over your documents. For many years I have used a Visioneer Strobe, which is a slow sheetfed unit that has to be connected to a Windows computer. I found that having to turn the computer on and loading the right software and selecting the directory to scan was a burden. (You don’t strictly have to do that but strangely you seem motivated to do so.) The older scanner was not very fast, and suffered a variety of problems, being unable to scan thermal paper receipts (they are so thin it gets confused) and having problems with even slight skew on the documents.
I was interested in the hand-scanner approach because I presumed there had been vast improvements using the laser surface scanning found in mice. I figured a new scanner could do very good registration even if you were uneven in your wanding. Here are some of my observations:
While it does a better job of making an undistorted scan than older hand scanners, it is still far from perfect, and any twists or catches can distort the scan, though not that much. Enough that you wouldn’t use it to print a copy, but fine for records archiving.
It’s exactly 8.5” wide. Since it’s hard to be exactly straight on any scan, that’s an annoyance as you will often drift slightly from a page. A scanner for letter paper should really be about 9” wide. I’ll gladly pay the extra for that.
Even today with Moore’s law it’s too slow scanning colour. Often the red light comes on that you are scanning too fast in colour. In B&W it is rare but still can happen. Frankly, by this time we should be able to make things fast and sensitive enough to allow scanning as fast as anybody is likely to do it.
While it is nice and small (and thus good for travel), for use in the home I would prefer it be a bit wider so I can get it onto the paper and scan the whole page with no risk of catching on the paper. And yes, there is always a risk of it catching.
It also catches on bends and folds in the paper, and so ideally you are holding the paper with one hand somehow and swiping with the other, but of course that is not really easy to do if scanning the whole page.
This particular scanner resets every time it turns off. And it resets to colour-300dpi. I wish it just remembered my settings.
In spite of what it said, it does not appear to have a true monochrome (bitmap) setting, such as bitmap-600dpi or even 300dpi. Grayscale turns out to be fine, and even what you want, for records archiving; why throw away information in this era of cheap storage? On the other hand, if a bitmap mode allowed scanning super fast it might be worth it. A trick might be to start in grayscale to get the levels, and then switch to bitmap with a threshold.
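The grayscale-then-threshold trick is simple in principle (a toy sketch, not this scanner’s firmware; the fixed threshold stands in for the levels pass a real device would do first):

```python
def to_bitmap(gray_pixels, threshold=128):
    """Convert 8-bit grayscale pixel values to a 1-bit black/white bitmap.

    A real implementation would first pick the threshold from the
    grayscale histogram (e.g. Otsu's method) rather than hard-coding it.
    """
    return [0 if p < threshold else 1 for p in gray_pixels]

# One row of pixels: dark ink on a light page.
row = [250, 245, 30, 25, 240, 255]
print(to_bitmap(row))  # [1, 1, 0, 0, 1, 1]
```

Once thresholded, a page is one bit per pixel instead of eight, which is why a bitmap mode could in principle run much faster over the same USB or flash bandwidth.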
One huge difference with swipe scanners is they don’t know where the edges of the paper are. You can scan on a black background and have software crop and straighten, but feeding scanners do that for you because they know where those edges are. Again, having a bit of the background there is fine for archiving bills etc.
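The crop step for a swipe scan on a black background is just finding the bounding box of the light pixels (a minimal sketch on a toy 2-D image; real software would also de-skew and allow for noise):

```python
def crop_to_page(image, dark=40):
    """Return the bounding box (top, left, bottom, right) of pixels
    brighter than `dark`, i.e. the paper against a black backing sheet."""
    rows = [r for r, row in enumerate(image) if any(p > dark for p in row)]
    cols = [c for c in range(len(image[0]))
            if any(row[c] > dark for row in image)]
    return rows[0], cols[0], rows[-1], cols[-1]

# 0 = black backing sheet, 255 = white paper
img = [
    [0,   0,   0, 0],
    [0, 255, 255, 0],
    [0, 255, 255, 0],
    [0,   0,   0, 0],
]
print(crop_to_page(img))  # (1, 1, 2, 2)
```

A feeding scanner gets this for free from its paper sensors; swipe scanners have to recover it from the image, which is why a contrasting background helps.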
Overall, I do now realize that not having a view of what I scanned is more of a burden than I thought. Particularly if you are thinking of disposing of the document after scanning. Did you get a good scan or not? Though it would add a lot to the cost and size, I now wonder if a very small display screen might be in order.
Instead of a display screen, one alternative might be bluetooth, and send the scan image to your smartphone or computer directly. Not required, so you can still scan at-will, but if you have your device with you, you can get a review screen and perhaps some more advanced UI.
Indeed, the bluetooth approach would save you the trouble of having to transfer the files, or of having a flash card. (A modest number of megs of internal flash would probably do the job of storing until you can get near the computer.)
While it does plug into USB (to read the flash card) that would be a pain if you wanted to scan to screen. Bluetooth is better.
Around the world, revolution has been brewing, and new governments are arising. So often, though, attempts to bring democracy to nations not used to it fail. I don’t know how to solve that problem, but I think it might be possible to make these transitions a bit easier, with a bit of modern experience and technology.
What these aspiring new governments and nations could use is a ready-made, and eventually time tested set of principles, procedures, services and people to take the steps to freedom. One that comes with a history, and with the respect of the world, as well as the ability to win the support of the people. I am not the first to suggest this, and there have been projects to prepare draft constitutions for new countries. George Soros has funded one, and one of its constitutions is being considered in Egypt, or so I have heard.
Eventually, I hope that a basic interim constitution could be created which not only is well crafted, but wins the advance support of the global community. This is to say that major nations, or bodies like the U.N. say, “If you follow these principles, really follow them, then your new government will get the recognition of the world as the legitimate new government”. This is particularly important with a revolution, or a civil war as we are seeing in Libya. Big nations are coming to the aid of those under attack. But we don’t know what sort of government they will create.
Today we assume that a people should self-determine their own constitution, to match their own culture. That is a valid goal, and a constitution must have the support of the vast majority of a people. But the people must also interact with the world, and the government must gain recognition. There are many lessons to be learned from the outside world, including lessons about what not to put in a constitution, even though it matches the local culture. Most new nations still find themselves wracked with sectarian, tribal and geographic divisions, and in this situation, impartial advice and even pressure can be valuable down the road.
I believe that each new country needs first an immediate, temporary, minimalist constitution. This constitution would define a transitional government, and put strong time limits on how long it can exist. This constitution would establish the process for creation of the permanent constitution, but also put limits on what can’t go in it without a major supermajority vote. Right after a revolution, a new nation may have a huge, but temporary sense of unity and devotion to principle. That devotion will fade as various factions arise and pressure is applied.
The temporary constitution should be minimalist, as should be the government. It should have strong principles of transparency and accountability, because in turbulent times there is often rampant corruption and theft.
It should also, ideally, bring in principles and bodies of law almost word-for-word from other countries. While this is temporary, it provides an immediate body of precedent, and a large body of experts already trained in that nation’s law. It isn’t that simple of course, since some laws are not meant to be enforced if it is known they are temporary, otherwise people will exploit the expiration.
Possibly the temporary constitution would define an executive with broader power than the permanent one. There may not be the bureaucracy in place to do anything else. It could be that those who serve at the high levels of the transitional government will be barred from standing in elections for some number of years, to assure they really are just there to serve in the transition, and not become new autocrats. This may also be a useful way to make use of the services of the middle echelons of the old regime, who may be the only ones who know how to keep some things running.
Imported, sometimes remote, jurists
If there is some standardization to the system of laws, the new country can import the services of impartial foreign jurists. Some will volunteer and come. Some will come for pay, even though the payment might be deferred until the new country is on its feet. And some might serve remotely, over videoconferencing. Modern telepresence tools might encourage volunteers (or deferred payment workers) to take some time to help a new country get on its feet, providing justice, auditing and oversight.
In 2004, I described a system that would allow secure voting over an insecure internet and PC. Of late, I have been pondering the question of how to build a “turn-key democracy kit” — a suite of tools and services that could be used by a newly born democracy to smoothly create a new state. We’ve seen a surprising number of new states and revolutions in the last few years, and I expect we’ll see more.
One likely goal after any revolution is to quickly hold some sort of meaningful election so that it’s clear the new regime has popular support and is not just another autocracy replacing the old one. You don’t have time to elect a full government (and may not want to due to passions) but at some point you need some sort of government that is accountable to the people to oversee the transition to a stable democracy.
This may create a need for a quick, cheap, simple and reliable election. Even though I am generally quite opposed to the use of voting machines, particularly voting machines which only record results in digital form, there are a number of advantages to digital voting over cell phones and PCs in a new country, at least in a country that has a digital or mobile phone infrastructure established enough so that everybody, even if they don’t have a phone, knows someone who has one.
In a new country, fresh out of autocracy, powerful forces will oppose the election. They will often try to prevent it or block voters.
A common technique is intimidation, scaring people away from voting with threats of violence around polling places.
The attacks against digital voting systems tend to require both sophistication and advanced planning.
For a revolutionary election, the digital voting systems may well be brought in and operated by disinterested foreign parties, backed by the U.N. or other agencies.
An electronic system is also immune to problems like boxes of ballots disappearing or being stuffed or altered.
It may be judged that the risks of corruption of a digital or partially digital election may be less than the risks of a traditional polling place election in a volatile area. It may also be hard to build and operate trustable polling places in remote locations, and do it quickly.
The big issue I see is maintaining secret ballot. It is difficult to protect secret ballot with remote voting, and much easier in polling-station voting. If secret ballot is not adequately protected, forces could use intimidation to make sure people vote the right way, or in some cases to buy votes. I am not sure I have a really good solution to this and welcome input; this is an idea in the making.
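One well-known building block, sketched here as an illustration rather than a full design, is a hash receipt: the voter commits to their ballot with a secret random nonce, and the published list of accepted receipts lets each voter check their ballot was counted without the list revealing any votes. Note that it cuts both ways: by revealing the nonce, a voter can also prove to a coercer how they voted, which is exactly the secret-ballot problem described above.

```python
import hashlib
import secrets

def cast(choice):
    """Voter side: commit to a ballot with a random secret nonce.
    The voter keeps the receipt and nonce private."""
    nonce = secrets.token_hex(16)
    receipt = hashlib.sha256(f"{choice}:{nonce}".encode()).hexdigest()
    return receipt, nonce

receipt, nonce = cast("candidate-A")
bulletin_board = {receipt}   # published list of accepted receipts

# Later, the voter checks that their ballot was counted:
assert receipt in bulletin_board
```

Real end-to-end verifiable voting schemes add a great deal on top of this (mixing, threshold decryption, coercion resistance), and it is the coercion-resistance part that remains the hard, open piece of the puzzle here.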
The images from Japan are shocking and depressing, and what seemed at first an example of the difference between a 1st and 3rd world earthquake has produced a 5 figure death toll. But the nerd and engineer in me has to wonder about some of the things I’ve seen.
While there has been some remarkable footage, some of it in HD, I was surprised at how underdocumented things were, considering Japan’s reputation as the most camera-carrying nation of the world — and the place where all the best cameras come from. I had expected this would be the “Youtube disaster” where sites like YouTube would fill with direct observer HD videos from every town, but most of what was uploaded there in the first few days was stuff copied from the TV (in fact, due to DRM, often camcorders pointed at TVs.) Of course, the TV networks were getting videos from private individuals, but we saw the same dramatic videos over and over again, particularly the one from the destroyed village of Miyako where the water swept boats and cars over the seawall and under a bridge.
Yes, there was a lot of individual reporting, but I expected a ton, an unprecedented amount, and I expected to see it online first, not on the news first.
Cell phone shutdown
Japan is also one of the world’s most connected countries, with phones for all. Not a lot has emerged about the loss of cell phone service. Some reports suggest some areas of the network were switched into texting-only mode for civilians, to leave capacity for emergency workers. Other reports say that landlines were often up when cell lines were down. The world still awaits Klein Gilhousen’s plan, which I reported on in 2005, to allow cell phones to text peer to peer.
Nuclear plant worst case
The public is now fully aware of many of the issues with nuclear reactors which require active stabilization using external electricity. A lot had to happen to get to the pump shutdown:
The reactors themselves were auto-shutdown after the earthquake. Wise, though in theory the subsequent problems would not have happened if one reactor had remained up and powering the plant.
The quake or tsunami shut off the external power. A week later it’s still not up. It seems that restoring it should have been a top priority for TEPCO. Was the line so destroyed or did they not prioritize this?
The backup generators were damaged by the tsunami, all 19 of them. I have to admit, most people would think having 19 backup generators is a very nice amount of backup. But this teaches that if you have lots of backups, you have to think about what might affect all of them. 1 backup generator or 100, they all would have failed if unable to withstand the wave.
The batteries supposedly lasted for 8 hours. This does not seem unreasonable. But they either did not realize that they had to get something else going in the 8 hours, or expected other power. Their procedure manuals should have had a “what to do if you have only 8 hours of battery left” contingency, but I can believe they didn’t because it seemed so unlikely.
That said, I believe the best backup plan has a fallback that involves emergency-level external resources. In particular, I have heard of no talk of sending a ship with a few hundred meters of cable to the docks there, one of which appears to be under 100m from reactor #1 and presumably the internal power grid. Many ships have big generators onboard or can deliver them.
Failing that, a plan for helicopter delivery of a generator and fuel in case all other channels are out.
Apparently they did bring in a backup generator by truck, but it was incompatible, and they are still without power.
It’s a hard question to consider whether they should have restarted a reactor while on batteries. There would not be enough time for a full post-quake, post-tsunami inspection of the reactor. On the other hand, they clearly didn’t realize just how bad it was to lose all power, and/or probably presumed they would get power before too long.
Everybody has now figured out the problem with spent fuel storage without containment in a zone where the chamber might crack and drain. Had nobody worried about that before? Most reactors don’t store all their spent fuel this way, but some do, and I have to presume work is underway to address this.
Japanese skill in robotics is world-leading. I’ve seen examples of some of that going on, but I’m surprised that they haven’t moved just about every type of robot that might be useful in the nuclear situation to near the nuclear plant. If they should ever have a situation where they must evacuate the plant again, as they did on Wednesday, it could be useful to have robots there, even if only to act as remote cameras to see what is happening in the reactors or control rooms.
There are also remote manipulator robots, and I am surprised no media organization has managed to get some sort of camera robot in the plant to report. Of course, keeping the robot powered is an issue. Few robots are actually able to hook themselves up to power easily, but a number of the telepresence robots can do that.
Many of the “work in danger zone” robots have been built for military applications, and the Japanese don’t have that military need so perhaps they are not so common in Japan. But they do have stair climbers, telepresence and basic manipulators. Even if the robots can do very little it would make the public feel better to know that something is there.
The Chernobyl cleanup was in part done by remote control bulldozers that the Russians made.
Future of Nuclear Power
The reactor failure is causing much public examination of nuclear power. This disaster does show just how bad the older designs are, and it makes us question why companies were still running them when it has been known for decades that those designs were a poor idea. Obviously investors will not be keen on saying, “Oh, we made mistakes back then, let’s write off the billions.”
There is also an argument that a technology can’t develop without going through a phase where it is less well understood and designs are not as safe as can be. Would we have developed newer, safer designs if nobody had been able to build the older ones?
I have been seeing tons of ads on CNN by the coal, gas and oil industries about how wonderful their technology is, in spite of the fact that these technologies have caused quite a large number of deaths, tons of pollution, and now the fear of greenhouse gases.
According to a quote I found from one European agency, the world’s nuclear plants had generated 64.6 trillion kWh in the period up to 2009, or about 6.4 x 10^16 watt-hours. A watt-hour from coal produces about a gram of CO2. A watt-hour from the coal and gas plants at the US average produces less than that; call it 0.7 grams/watt-hour more than nuclear (there is some CO2 output from the full lifecycle of the nuclear industry). Correcting from the original, where I had used euro-billion = 10^12, which can’t be right.
That’s about 4 * 10^16 grams of CO2 not put into the air by the nuclear industry. I’m looking for figures to see what that means, but one that I found says that the whole atmosphere of the planet has 2.7 x 10^18 grams of CO2 in it.
The number I would like to see is what difference those 10^16 grams of CO2 have made to the total PPM of CO2 in the atmosphere, which is to say, how much did those nuclear plants retard global warming according to accepted climate models. Anybody have info?
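As a sanity check on the arithmetic above, here is a minimal back-of-envelope sketch in Python. The generation and emissions figures are the post’s own; the atmospheric mass (~5.1 x 10^21 g), the molar masses, and the ~45% airborne fraction are round numbers I am assuming for illustration, and this ignores climate-model feedbacks entirely:

```python
# Back-of-envelope check of the CO2 figures above.
# Inputs from the post: 64.6 trillion kWh of nuclear generation,
# ~0.7 g CO2/Wh avoided versus the fossil mix.
# My assumptions: atmosphere mass, molar masses, airborne fraction.

nuclear_wh = 64.6e12 * 1000            # 64.6 trillion kWh -> watt-hours
g_per_wh_saved = 0.7                   # grams CO2 avoided per Wh
co2_avoided_g = nuclear_wh * g_per_wh_saved   # roughly 4.5e16 g

ATM_MASS_G = 5.1e21                    # assumed mass of Earth's atmosphere, g
M_AIR, M_CO2 = 28.97, 44.01            # mean molar masses, g/mol

# Parts-per-million by volume if every avoided gram had stayed airborne:
ppmv_upper = co2_avoided_g / ATM_MASS_G * (M_AIR / M_CO2) * 1e6

# Only about half of emitted CO2 remains in the atmosphere
# (airborne fraction ~0.45), so the realized difference is smaller:
ppmv_realized = ppmv_upper * 0.45

print(round(co2_avoided_g / 1e16, 1))  # ~4.5 (x 10^16 g)
print(round(ppmv_upper, 1))            # ~5.8 ppmv upper bound
print(round(ppmv_realized, 1))         # ~2.6 ppmv
```

So the naive answer is a few ppmv against the then-current ~390 ppm, but a real answer to the question would need a carbon-cycle model, not this kind of static ratio.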
To solve the world’s energy needs, while we eventually would like to develop economical solar plants, biofuels that don’t use cropland, geothermal, fusion and other sources, right now it seems that there is no choice but to build lots more nuclear if we want to stop burning so much coal. Other choices are coming but are not assured yet. If this disaster scares the public away from newer reactor designs which go to a safe state without active support or human intervention, I think that would be a mistake.
I hope that Japan is able to recover as quickly as possible, and that more of the missing are found alive. Someday something like this is going to happen here in the Bay Area — though probably not a 9.0, but possibly an 8 — and it won’t be pretty.
It’s St. Paddy’s day, but I can celebrate a little harder this time. Two days ago, I got my notice of entry into Ireland’s Foreign Birth Registry, declaring me an Irish citizen. I’m able to do that because I have 3 Irish grandparents (2 born in Ireland). Irish law declares that anybody born to somebody born in Ireland is automatically Irish. That made my father, whose parents were both born there, an Irish citizen even though he never got a passport. Because my father was an Irish citizen (though not born on the island), I also have the right to claim it, though I had to do the paperwork; it is not automatic. If I had children after this, they could also claim it, but any I had before this registration could not.
I decided to do this for a few reasons. First, it will allow me to live, work and travel freely in Ireland or anywhere else in the E.U. The passport control lines for Canadians are not usually that long, but it’s nicer to not be quizzed. But in the last few years, I have encountered several situations where it would have been very useful to have a 2nd passport:
On a trip to Russia, I discovered there was a visa war between Canada and Russia, and Russia was making Canadians wait 21 days for a visa while the rest of the world waited 6 days or less. I had to change a flight over that and barely made my conference. It would have been handy to use an Irish passport then. (Update: Possibly not. Russia and others require you to use the passport which allows residence, and you must apply where you live. So my Irish documents are no good at the San Francisco consulate, as I don’t live there on the Irish passport.)
Getting stamps in your passport for Israel or its border stations means some other countries won’t let you in. Israelis will stamp a piece of paper for you but resent it, and you can lose it. A 2nd passport is a nice solution. (For frequent visitors, I believe Canada and the USA both offer a 2nd passport valid only for travel to Israel.)
Described earlier, last year I lost my passport in Berlin. While I got tremendous service in passport replacement, this was only because my mother was in hospital. Otherwise I would have been stuck, unable to travel. With 2 passports, you can keep them in two places, carry one and leave one in the hotel safe etc. While Canada does have an emergency temporary passport, some countries only offer you a travel document to get you home, and you must cancel any other travel on your trip.
On entry to Zimbabwe, I found they charged Canadians $75 per entry, while most other nations paid $30 for one entry and $45 for two. Canada charges Zimbabweans $75, so they reciprocate. Stupid External Affairs; I bet far more Canadians go to Zimbabwe than the other way around.
On entry to Zambia, it was $50 to transit for most countries but free/no-visa for the Irish. I got my passport 1 week after this, sigh. Ireland has a visa abolition deal.
Argentina charges a $150 “reciprocity fee” to US and Canadian passports, good for 10 years. Free for Irish, though. Yay!
All great reasons to have two passports. I don’t have one yet, though. (Update: I got it in June.) Even though I presume that the vast majority of those who complete the Irish foreign birth registration immediately want a passport, it doesn’t work that way. After a 21-month wait, I have my FBR certificate, which I now must mail back to the same consulate that sent it, along with several of the same documents I used in getting the FBR, like my original birth certificate. It would make huge sense to do them together, but it doesn’t.