Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.
This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best-of-blog section. It also has various "topic" and "tag" sections (see menu on right), and some are sub-blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.
Submitted by brad on Mon, 2009-10-26 14:42.
Saturday saw the dedication of a new autonomous vehicle research center at Stanford, sponsored by Volkswagen. VW provided the hardware for Stanley and Junior, which finished first and second in the second and third DARPA Grand Challenges, and Junior was on display at the event, driving through the parking lot and along the Stanford streets, then parking itself before a cheering crowd.
Junior continues to be a testing platform with its nice array of sensors and computers, though the driving it did on Saturday was largely done with the Velodyne LIDAR that spins on top of it, and an internal map of the geometry of the streets at Stanford.
New and interesting was a demonstration of the “Valet Parking” mode of a new test vehicle, for now just called Junior 3. What’s interesting about J3 is that it is almost entirely stock. All that is added are two lower-cost LIDAR sensors on the rear fenders. It also has a camera at the rear-view mirror (which is stock in cars with night-assist mode) and a few radar sensors used in the fixed-distance cruise control system. J3 is otherwise a Passat. Well, the trunk is filled with computers, but there is no reason what it does could not be done with a hidden embedded computer.
What it does is valet park itself. This is an earlier-than-expected implementation of one of the steps I outlined in the roadmap to Robocars as robo-valet parking. J3 relies on the fact that the “valet” lot is empty of everything but cars and pillars. Its sensors are not good enough to deal well with random civilians, so this technology would only work in an enclosed lot, with employees entering only if needed. To use it, the driver brings the car to an entrance marked by four spots on the ground that the car can see. Then the driver leaves and the car takes over. In this case, it has a map of the garage in its computer, but it could also download that on arrival at a parking lot. Using the map, and just the odometer, it is able to cruise the lanes of the parking lot, looking for an empty spot, which it sees using the radar. (Big metal cars of course show clearly on the radar.) It then drives into the spot.
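The spot-finding step could be sketched like this. This is purely illustrative; the stall coordinates and radar returns are invented, and Junior 3's actual software is of course far more involved:

```python
# Hedged sketch (not Stanford's code): stalls come from the downloaded map,
# and a stall is judged occupied if any radar return lands inside its
# rectangle. Coordinates below are made-up map units.

def stall_is_occupied(stall, radar_returns):
    """stall = (xmin, ymin, xmax, ymax) in map coordinates."""
    xmin, ymin, xmax, ymax = stall
    return any(xmin <= x <= xmax and ymin <= y <= ymax
               for x, y in radar_returns)

def first_empty_stall(stalls, radar_returns):
    for i, stall in enumerate(stalls):
        if not stall_is_occupied(stall, radar_returns):
            return i          # index of the first free stall
    return None

stalls = [(0, 0, 2.5, 5), (3, 0, 5.5, 5), (6, 0, 8.5, 5)]
returns = [(1.2, 2.0), (7.1, 3.3)]   # two parked cars reflecting radar
print(first_empty_stall(stalls, returns))  # -> 1
```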
Submitted by brad on Fri, 2009-10-23 17:54.
I’ve written a lot about how to do better power connectors for all our devices, and the quest for universal DC and AC power plugs that negotiate the power delivered with a digital protocol.
While I’ve mostly been interested in some way of standardizing power plugs (at least within a given current range, and possibly even beyond) today I was thinking we might want to go further, and make it possible for almost every connector we use to also deliver or receive power.
I came to this realization plugging my laptop into a projector which we generally do with a VGA or DVI cable these days. While there are some rare battery powered ones, almost all projectors are high power devices with plenty of power available. Yet I need to plug my laptop into its own power supply while I am doing the video. Why not allow the projector to send power to me down the video cable? Indeed, why not allow any desktop display to power a laptop plugged into it?
As you may know, a Power-over-Ethernet (PoE) standard exists to provide up to 13 watts over an ordinary Ethernet connector, and is commonly used to power switches, wireless access points and VoIP phones.
In all the systems I have described, all but the simplest devices would connect, and one or both would provide an initial very-low-current +5 VDC feed, enough to power only the power-negotiation chip. The two ends would then negotiate the real power offering: what voltage, how many amps, how many watt-hours are needed or available, and, for special connectors, which wires to send the power on.
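The negotiation step could be sketched as a simple offer-selection exchange. No such standard existed at the time of writing, so everything here (class names, wattages) is invented for illustration:

```python
# Hedged sketch of power negotiation: the source lists what it can supply,
# the sink states what it needs, and they settle on the cheapest offer that
# covers the need (or the biggest available, which the sink must budget for).
from dataclasses import dataclass

@dataclass
class PowerOffer:
    volts: float
    max_amps: float
    @property
    def watts(self):
        return self.volts * self.max_amps

def negotiate(source_offers, sink_need_watts):
    viable = [o for o in source_offers if o.watts >= sink_need_watts]
    if viable:
        return min(viable, key=lambda o: o.watts)   # smallest sufficient offer
    return max(source_offers, key=lambda o: o.watts)  # best effort

offers = [PowerOffer(5, 0.5), PowerOffer(20, 3.0), PowerOffer(48, 5.0)]
chosen = negotiate(offers, sink_need_watts=45)
print(chosen.volts)   # -> 20.0  (the 60 W offer covers the 45 W need)
```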
An important part of the negotiation would be to understand the needs of devices and their batteries. In many cases, a power source may only offer enough power to run a device but not charge its battery. Many laptops normally run on only 10 watts, and less with the screen off, but their power supplies will be much larger in order to deal with the laptop under full load and the charging of a fully discharged battery. A device’s charging system will have to know not to charge the battery at all in low-power situations, or to offer it just minimal power for very slow charging. An Ethernet cable offering 13 watts might well tell the laptop that it will need to go to its own battery if the CPU goes into high-usage mode. A laptop drawing an average of 13 watts (not including battery charging) could run forever, with the battery providing for peaks and absorbing valleys.
Now a VGA or DVI cable, though it has thin wires, has many of them, and at 48 volts could actually deliver plenty of power to a laptop. And thus no need to power the laptop when on a projector or monitor. Indeed, one could imagine a laptop that uses this as its primary power jack, with the power plug having a VGA male and female on it to power the laptop.
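As a sanity check on that claim, with assumed numbers (eight usable conductors at half an amp each; these are guesses for illustration, not values from the VGA spec):

```python
# Back-of-envelope check: how much power could a video cable at 48 V carry?
# Conductor count and safe current per thin wire are assumptions.

def deliverable_watts(volts, conductors, amps_per_conductor):
    # Half the conductors carry current out, half carry it back.
    supply_wires = conductors // 2
    return volts * supply_wires * amps_per_conductor

print(deliverable_watts(48, conductors=8, amps_per_conductor=0.5))  # -> 96.0
```

Even with these conservative guesses, 96 watts is far more than a typical laptop's average draw.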
I think it is important that these protocols go both directions. There will be times when the situation is reversed, when it would be very nice to be able to power low power displays over the video cable and avoid having to plug them in. With the negotiation system, the components could report when this will work and when it won’t. (If the display can do a low power mode it can display a message about needing more juice.) Tiny portable projectors could also get their power this way if a laptop will offer it.
Of course, this approach can apply everywhere, not just video cables and ethernet cables, though they are prime candidates. USB of course is already power+data, though it has an official master/slave hierarchy and thus does not go both directions. It’s not out of the question to even see a power protocol on headphone cables, RF cables, speaker cables and more. (Though there is an argument that for headphones and microphones there should just be a switch to USB and its cousins.)
Laptops have tried to amalgamate their cables before, through the use of docking stations. The problem was that these stations were all custom to the laptop, and often quite expensive. As a result, many prefer the simple USB docking station, which can provide USB, wired ethernet, keyboard, mouse, and even slowish video through one wire — all standardized and usable with any laptop. However, it doesn’t provide power because of the way USB works. Today our video cables are the highest-bandwidth connector on most devices, and as such they can’t easily be replaced by lower-bandwidth ones, so throwing power through them makes sense, and throwing a USB data bus through them for everything else might well make a lot of sense too. This would bring us back to having just a single connector to plug in. (It creates a security problem, however, as you should not let a randomly plugged-in device act as an input such as a keyboard or drive, since such a device could take over your computer if somebody has hacked it to do so.)
Submitted by brad on Wed, 2009-10-21 17:31.
Some time ago I proposed the “School of Fish Test” as a sort of Turing test for robocars. In addition to being a test for the cars, it is also intended to be a way to demonstrate to the public when the vehicles have reached a certain level of safety. (In the test, a swarm of robocars moves on a track, and a skeptic in a sports car is unable to hit one no matter what they do, like a diver trying to touch fish while swimming through a school.)
I was interested to read this month that Nissan has built test cars based on fish-derived algorithms as part of a series of experiments based on observing how animals swarm. (I presume this is coincidental, and the Nissan team did not know of my proposed test.)
The Nissan work (building on earlier work on bees) is based upon a swarm of robots which cooperate. The biggest test involves combining cooperating robots, non-cooperating robots and (mostly non-cooperating) human drivers, cyclists and pedestrians. Since the first robocars on the road will be alone, it is necessary to develop fully safe systems that do not depend on any cooperation with other cars. It can of course be useful to communicate with other cars, determine how much you trust them, and then cooperate with them, but this is something that can only be exploited later rather than sooner. In particular, while many people propose to me that building convoys of cars which draft one another is a good initial application of robotics (and indeed you can already get cars with cruise control that follows at a fixed distance) the problem is not just one of critical mass. A safety failure among cooperating cars runs the risk of causing a multi-car collision, with possible multiple injured parties, and this is a risk that should not be taken in early deployments of the technology.
My talk at the Singularity Summit on robocars was quite well received. Many were glad to see a talk on more near-future modest AI after a number of talks on full human level AI, while others wanted only the latter. A few questions raised some interesting issues:
- One person asked about the insurance and car repair industries. I got a big laugh by saying, “fuck ‘em.” While I am not actually that mean spirited about it, and I understand why some would react negatively to trends which will obsolete their industries, we can’t really be that backwards-looking.
- Another wondered whether, after children discover that the nice cars will never hit them, they will then travel to less safe roads without having learned proper safety instincts. This is a valid point, though I have already worried about the disruption to passengers whose cars have to swerve around kids who play in the streets when it is not so clearly dangerous. Certain types of jaywalking that interfere with traffic will need to be discouraged or punished, though safe jaywalking, when no car is near, should be allowed and even encouraged.
- One woman asked if we might become dissociated from our environments if we spend our time in cars reading or chatting, never looking out. This is already true in a taxicab city like New York, though only limos offer face-to-face chat. I think the ability to read or work instead of focusing on the road is mostly a feature and not a bug, but she does have a point. Still, we get even more divorced from the environment on things like subways.
As expected, the New York audience, unlike other U.S. audiences, saw no problem with giving up driving. Everywhere else I go, people swear that Americans love their cars and love driving and will never give it up. While some do feel that way, it’s obviously not a permanent condition.
Some other (non-transportation) observations from Singularity Summit are forthcoming.
BTW, I will be giving a Robocar talk next Wednesday, Oct 28 at Stanford University for the ME302 - Future of the Automobile class. (This is open to the general Stanford community, affiliates of their CARS institute, and a small number of the public. You can email firstname.lastname@example.org if you would like to go.)
Submitted by brad on Thu, 2009-10-15 12:01.
I’m impressed with a great interactive map of the U.S. power grid produced by NPR. It lets you see the location of existing and proposed grid lines, and all power plants, plus the distribution of power generation in each state.
On this chart you can see which states use coal most heavily — West Virginia at 98%, Utah, Wyoming, North Dakota, Indiana at 95% and New Mexico at 85%. You can see that California uses very little coal but 47% natural gas, that the NW uses mostly Hydro from places like Grand Coulee and much more. I recommend clicking on the link.
They also have charts of where solar and other renewable plants are (almost nowhere) and the solar radiation values.
Seeing it all together makes something clear that I wrote about earlier. If you want to put up solar panels, the best thing to do is to put them somewhere with good sun and lots of coal burning power plants. That’s places like New Mexico and Utah. Putting up a solar panel in California will give it pretty good sunlight — but will only offset natural gas. A solar panel in the midwest will offset coal but won’t get as much sun. In the Northeast it gets even less sun and offsets less coal.
Much better than putting up solar panels anywhere, however, is actually using the money to encourage real conservation in the coal-heavy areas like West Virginia, Wyoming, North Dakota or Indiana.
While, as I have written, solar panels are a terrible means of greening the power grid from a cost standpoint, people still want to put them up. If that’s going to happen, what would be great would be a way for those with money and a desire to green the grid to make that money work in the places it will do the best. This is a difficult challenge. People sadly are more interested in feeling they are doing the right thing rather than actually doing it, and they feel good when they see solar panels on their roof, and see their meter going backwards. It makes up for the pain of the giant cheque they wrote, without actually ever recovering the money. Writing that cheque so somebody else’s meter can go backwards (even if you get the savings) just isn’t satisfying to people.
It would make even more sense to put solar-thermal plants (at least at today’s prices,) wind or geothermal in these coal-heavy areas.
It might be interesting to propose a system where rich greens can pay to put solar panels on the roofs of houses where it will do the most good. The homeowner would still pay for power, but at a lower price than they paid before. This money would mostly go to the person who financed the solar panels. The system would include an internet-connected control computer, so the person doing the financing could still watch the meter go backwards, at least virtually, and track power generated and income earned. The only problem is, the return would be sucky, so it’s hard to make this satisfying. To help, the display would also show tons of coal that were not burned, and compare it to what would have happened if they had put the panels on their own roof.
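The comparison display could be computed along these lines. The sun-hour and emission numbers below are illustrative placeholders, not real grid data:

```python
# Hedged sketch of the "coal not burned" comparison: the same 3 kW array on
# the owner's own (gas-heavy) roof vs. financed in a sunny coal-heavy state.
# Sun hours and lbs-CO2-per-kWh figures are invented for illustration.

def annual_kwh(kw_rated, sun_hours_per_day):
    return kw_rated * sun_hours_per_day * 365

def co2_tons_offset(kwh, lbs_co2_per_kwh):
    return kwh * lbs_co2_per_kwh / 2000.0   # short tons

own_roof   = co2_tons_offset(annual_kwh(3, 4.5), 0.9)  # offsets natural gas
coal_state = co2_tons_offset(annual_kwh(3, 5.5), 2.1)  # offsets coal
print(round(own_roof, 1), round(coal_state, 1))  # -> 2.2 6.3
```

Under these made-up but plausible assumptions, the financed panel does roughly three times the good.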
Of course, another counter to this is that California and a few other places have very high tiered electrical rates which may not exist in the coal states. Because of that — essentially a financial incentive set up by the regulators to encourage power conservation — it may be much more cost-effective to have the panels in low-coal California than in high-coal areas, even if it’s not the greenest thing.
An even better plan would be to find a way for “rich greens” (people willing to spend some money to encourage clean power) to finance conservation in coal-heavy areas. To do this, the cooperation of the power companies would be required. For example, one of the best ways to do this would be to replace old fridges with new ones. (Replacing fridges costs $100 per MWh removed from the grid, compared to $250 for solar panels.)
- The rich green would provide money to help buy the new fridge.
- An inspector comes to see the old fridge and confirms it is really in use as the main fridge. Old receipts may be demanded, though these may be rare. A device is connected to assure it is not unplugged, other than in a local power failure.
- A few months later — to also assure the old fridge was really the one in use — the new fridge would be delivered by a truck that hauls the old one away. Inspectors confirm things and the buyer gets a rebate on their new fridge thanks to the rich green.
- The new, energy-efficient fridge has built in power monitoring and wireless internet. It reports power usage to the power company.
- The new fridge owner pays the power company 80% of what they used to pay for power for the old fridge. That is, they pay more than their actual new power usage.
- The excess money goes to the rich green who funded the rebate on the fridge, until the rebate plus a decent rate of return is paid back.
To the person with the old fridge, they get a nice new fridge at a discount price — possibly even close to free. Their power bill on the fridge goes down 20%. The rest of the savings (about 30% of the power, typically) goes to the power company and then to the person who financed the rebate.
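With made-up numbers (an old fridge costing $20 a month in power, a new one using half as much), the monthly split described above works out like this:

```python
# Worked sketch of the fridge-rebate cash flow: the owner pays 80% of the
# old bill, the new fridge's real usage is covered, and the excess repays
# the rich green who financed the rebate. Dollar figures are illustrative.

def monthly_split(old_bill, new_usage_cost, owner_fraction=0.80):
    owner_pays = old_bill * owner_fraction       # 80% of the old bill
    to_financier = owner_pays - new_usage_cost   # excess above real usage
    owner_saves = old_bill - owner_pays          # the owner's 20% discount
    return owner_pays, owner_saves, to_financier

pays, saves, repay = monthly_split(old_bill=20.0, new_usage_cost=10.0)
print(pays, saves, repay)   # -> 16.0 4.0 6.0
```

So the owner's fridge bill drops by $4, and $6 a month flows back toward the financier until the rebate plus a return is repaid.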
A number of the steps above are there to minimize fraud. For example, you don’t want people deliberately digging out an ancient fridge and putting it in place to get a false rebate. You also don’t want them taking the old fridge and moving it into the garage as a spare, which would actually make things worse. The latter is easy to assure by having the delivery company haul away the old one. The former is a bit tricky. The above plan at least demands that the old fridge be in place in their kitchen for a couple of months, and there be no obvious signs that it was just put in place. The metering plan demands wireless internet in the home, and the ability to configure the appliance to use it. That’s getting easier to demand, even of poor people with old fridges. Unless the program is wildly popular, this requirement would not be hard to meet.
Instead of wireless internet, the fridge could also just communicate the usage figures to a device the meter-reader carries when she visits the home to read the regular meter. Usage figures for the old fridge would be based on numbers for the model, not the individual unit.
It’s a bit harder to apply this to light bulbs, which are the biggest conservation win. Yes, you could send out crews to replace incandescent bulbs with CFLs, but it is not cost effective to meter them and know how much power they actually saved. For CFLs, the program would have to be simpler with no money going back to the person funding the rebates.
All of this depends on a program which is popular enough to make the power monitoring chips and systems in enough quantity that they don’t add much to the cost of the fridge at all.
Submitted by brad on Wed, 2009-10-14 14:30.
This press release describes a European research project on various intelligent vehicle technologies which will take place next year. As I outline in the roadmap, a number of pre-robocar technologies are making their way into regular cars, so they can be sold as safer and more convenient. This project will actively collect data to learn about and improve the systems.
Today’s systems, and their sensors, are fairly simple of course, and researchers will learn a lot from this. It matches my prediction for how a robocar test suite will be developed: by gathering millions and later billions of miles of sample data, including all accidents and anomalous events, with better and better sensors over time.
Initial reaction to these systems (which will have early flaws) may colour user opinion of them. For example, some adaptive cruise controls reportedly are too eager to decide there is a stopped car and will suddenly stop a vehicle. One of the challenges of automatic vehicle design will be finding ways to keep it safe without it being too conservative because real drivers are not very conservative. (They are also not very safe, but this defines the standards people expect.)
Submitted by brad on Mon, 2009-10-12 15:06.
Just back from a weeklong tour including speaking at Singularity Summit, teaching classes at Cushing Academy and a big Thanksgiving dinner (well, Thanksgiving is actually today but we had it earlier) and drive through fabulous fall colour in Muskoka.
This time United Airlines managed to misplace my luggage in both directions. (A reminder of why I don’t like to check luggage.) The first time they had an “excuse” in that we checked it only about 10 minutes before the baggage-check deadline and the TSA took extra time on it. On the way back it missed a 90-minute connection — no excuse for that.
However, again, my rule for judging companies is how they handle their mistakes as well as how often they make them. And, in JFK, when we went to baggage claim, they actually had somebody call our name and tell us the bag was not on the flight, so we went directly to file the missing luggage report. However, on the return flight, connecting in Denver to San Jose, we got the more “normal” experience — wait a long time at the baggage claim until you realize no more bags are coming and you’re among the last people waiting, and then go file a lost luggage report.
This made me realize — with modern bag tracking systems, the airline knows your bag is not on the plane at the time they close the cargo hold door, well before takeoff. They need to know that as this is part of the passenger-to-bag matching system they tried to build after the Pan Am 103 Lockerbie bombing. So the following things should be done:
- If they know my mobile number (and they do, because they text me delays and gate changes) they should text me that my luggage did not make the plane.
- The text should contain a URL where I can fill out my lost luggage report or track where my luggage actually is.
- Failing this, they should have a screen at the gate when you arrive with messages for passengers, including lost luggage reports. Or just have the gate agent print it and put it on the board if a screen costs too much.
- Failing this, they should have a screen at the baggage claim with notes for passengers about lost luggage so you don’t sit and wait.
- Failing this, an employee can go to the baggage claim and page the names of passengers, which is what they did in JFK.
- Like some airlines do, they should put a box with “Last Bag, Flight nnn” written on it on the luggage conveyor belt when the last bag has gone through, so people know not to wait in vain.
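The core of the first two items is simple set arithmetic on data the airline already has at door close. A sketch, with invented tag numbers, phone numbers and URL:

```python
# Hedged sketch: compare the checked-bag tags against the tags scanned into
# the hold; any difference triggers a text with a report/tracking link.
# All identifiers here are made up for illustration.

def bags_left_behind(checked_tags, loaded_tags):
    return sorted(set(checked_tags) - set(loaded_tags))

def notifications(checked_tags, loaded_tags, phone_by_tag):
    msgs = []
    for tag in bags_left_behind(checked_tags, loaded_tags):
        msgs.append((phone_by_tag[tag],
                     f"Bag {tag} missed your flight. File a report or "
                     f"track it at example.com/bag/{tag}"))
    return msgs

checked = ["UA1001", "UA1002", "UA1003"]
loaded = ["UA1001", "UA1003"]          # scanned into the hold
phones = {t: "+1-555-0100" for t in checked}
for phone, msg in notifications(checked, loaded, phones):
    print(phone, msg)
```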
I might very well learn my luggage is not on board before the plane closes the door. In that case I might even elect not to take the flight, though I can see that the airline might not want people to do this, as they are usually about to close the door, if they have not already closed it.
Letting me fill out the form on the web saves the airline time and saves me time. I can probably do it right on the plane after it lands and cell phone use is allowed. I don’t even have to go to baggage claim. Make it mobile browser friendly of course.
Submitted by brad on Wed, 2009-09-30 14:47.
I have several sheetfed scanners. They are great in many ways — though not nearly as automatic as they could be — but they are expensive and have their limitations when it comes to real-world documents, which are often not in pristine shape.
I still believe in sheetfed scanners for the home, in fact one of my first blog posts here was about the paperless home, and some products are now on the market similar to this design, though none have the concept I really wanted — a battery powered scanner which simply scans to flash cards, and you take the flash card to a computer later for processing.
My multi-page document scanners will do a whole document, but they sometimes mis-feed. My single-page sheetfed scanner isn’t as fast or fancy but it’s still faster than using a flatbed because the act of putting the paper in the scanner is the act of scanning. There is no “open the top, remove old document, put in new one, lower top, push scan button” process.
Here’s a design that might be cheap and just what a house needs to get rid of its documents. It begins with a table with an arm coming out from one side, holding a tripod screw for a digital camera. Also running up the arm is a USB cable to the camera. On the arm as well, at enough of an angle to avoid glare and reflections, are lights, either white LED or CCFL tubes.
In the bed of the table is a capacitive sensor able to tell if your hand is near the table, as well as a simple photosensor to tell if there is a document on the table. All of this plugs into a laptop for control.
You slap a document on the table. As soon as you draw your hand away, the light flashes and the camera takes a picture. Then you replace or flip the document and it happens again. No need to push a button; the removal of your hand with a document in place triggers the photo. A button will be present to say “take it again” or “erase that,” but you should not need to push it much. The light should be bright enough that the camera can shoot fairly stopped down, allowing a sharp image with good depth of field. The light might be on all the time in the single-sided version.
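The trigger logic is a tiny state machine. A sketch with the hardware (capacitive sensor, photosensor, camera) reduced to boolean inputs; all names are invented:

```python
# Hedged sketch of the hand-withdrawal trigger: arm when a document is
# placed with a hand over it, shoot when the hand withdraws.

def step(armed, doc_present, hand_near):
    """One sensor poll. Returns (new_armed_state, shoot_now)."""
    if doc_present and hand_near:
        return True, False           # a page is being placed: arm
    if armed and doc_present and not hand_near:
        return False, True           # hand withdrawn: take the picture
    return armed, False              # nothing to do

# Simulate: empty table, page placed (hand over it), hand withdrawn, idle.
shots, armed = 0, False
for doc, hand in [(False, False), (True, True), (True, False), (True, False)]:
    armed, shoot = step(armed, doc, hand)
    shots += shoot
print(shots)  # -> 1  (exactly one photo per place-and-withdraw cycle)
```

A real loop would poll the sensors every few tens of milliseconds and call the camera over USB on `shoot`.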
The camera can’t be just any camera, alas, but many older cameras in the 6 MP range would get about 300 dpi colour from a typical letter-sized page, which is quite fine. Key is that the camera has a macro mode (or can otherwise focus close) and can be made to shoot over USB. An infrared LED could also be used to trigger many consumer cameras. Another plus is manual focus: it would be nice if the camera could simply be locked in focus at the right distance, as that means much faster shooting for typical consumer digital cameras. Ideally all of this (macro mode, manual focus) could be set over USB and thus be under the control of the computer.
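A back-of-envelope version of that resolution figure, assuming a 3:2 sensor filling the frame with the page (which a real setup won't quite manage), lands a bit under 300 dpi but in the same ballpark:

```python
# Rough check of the 6 MP -> letter-page resolution claim.
import math

def dpi_for_page(megapixels, page_w_in=8.5, page_h_in=11.0, aspect=3/2):
    pixels = megapixels * 1e6
    w_px = math.sqrt(pixels * aspect)    # long side, e.g. 3000 px at 6 MP
    h_px = pixels / w_px                 # short side, e.g. 2000 px
    return w_px / page_h_in, h_px / page_w_in  # long side spans the 11" edge

long_dpi, short_dpi = dpi_for_page(6)
print(round(long_dpi), round(short_dpi))   # -> 273 235
```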
Of course, 3-D objects can also be shot in this way, though they might get glare from the lights if they have surfaces at the wrong angles. A fancier box would put the lights behind cloth diffusers, making things bulkier, though it can all pack down pretty small. In fact, since the arm can be designed to be easily removed, the whole thing can pack down into a very small box. A sheet of plexi would be available to flatten crumpled papers, though with good depth of field, this might not strictly be necessary.
One nice option might be a table filled with holes and a small suction pump. This would hold paper flat to the table. It would also make it easy to determine when paper is on the table. It would not help stacks of paper much but could be turned off, of course.
A fancier and bulkier version would have legs and support a second camera below the table, which would now be a transparent piece of plexiglass. Double-sided shots could then be taken, though in this case the lights on the other side would have to be turned off when shooting, and a darkened room or a shade around the bottom and part of the top would be a good idea, to avoid bleed through the page. Suction might not be such a good idea here. The software should figure out whether the other side is blank and discard or highly compress that image. Of course the software must also crop images to size, and straighten rectangular items.
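One simple way the software might make the blank-side call is pixel variance: a blank side is nearly uniform paper-white, a printed side is not. The threshold below is a guess, not a tuned value:

```python
# Hedged sketch of blank-side detection via grayscale standard deviation.
from statistics import pstdev

def looks_blank(gray_pixels, threshold=8.0):
    """gray_pixels: flat list of 0-255 values sampled from the captured side."""
    return pstdev(gray_pixels) < threshold

blank_side = [246, 248, 247, 249, 245, 248] * 100    # uniform paper-white
printed_side = [250, 30, 245, 60, 255, 20] * 100     # text on white
print(looks_blank(blank_side), looks_blank(printed_side))  # -> True False
```

A production version would also ignore the page margins and shadows near the edges before computing the statistic.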
There are other options besides the capacitive hand sensor. These include a button, of course, a simple voice-command detector, and clever use of the preview video mode that many digital cameras now offer over USB (i.e., the computer can look through the camera and see when the document is in place and the hand is removed). This approach would also allow gesture commands, little hand signals to indicate if the document is single-sided, or B&W, or needs other special treatment.
The goal, however, is a table where you can just slap pages down, move your hand away slightly and then slap down another. For stacks of documents one could even put down the whole stack and take pages off one at a time, though this would surely bump the stack a bit, requiring some cleverness in straightening and cropping. Many people would find they could do this as fast as some of the faster professional document scanners, and with no errors on imperfect pages. The scans would not be as good as true scanner output, but good enough for many purposes.
In fact, digital camera photography’s speed (and ability to handle 3-D objects) led both Google Books and the Internet Archive to use it for their book scanning projects. This was of course primarily because they were unwilling to destroy books. Google came up with the idea of using a laser rangefinder to map the shape of the curved book page to correct any distortions in it. While this could be done here it is probably overkill.
One nice bonus here is that it’s very easy to design this to handle large documents, and even to be adjustable to handle both small and large documents. Normally scanners wide enough for large items are very expensive.
Submitted by brad on Mon, 2009-09-28 12:43.
A serious proportion of the computer users I know these days have gone multi-monitor. While I strongly recommend to everybody the 30” monitor (Dell 3007WFP and cousins, or Apple’s) which I have, at $1000 it’s not the most cost-effective way to get a lot of screen real estate. Today 24” 1080p monitors are down to $200, and flat panels don’t take up so much space, so it makes a lot of sense to have two monitors or more.
Except there’s a big gap between them. And while there are a few monitors that advertise thin bezels, even these have at least half an inch, so two monitors together will still have an inch of (usually black) bezel between them.
I’m quite interested in building a panoramic photo wall with this new generation of cheap panels, but the 1” bars will be annoying, though tolerable from a distance. But does it have to be that way?
There are 1/4” bezel monitors made for the video wall industry, but it’s all very high end, and in fact it’s hard to find these monitors for sale on the regular market from what I have seen. If they are available, they no doubt cost 2–3x as much, as “specialty” market products do. I really think it’s time to push multi-monitor as more than a specialty market.
I accept that you need to have something strong supporting and protecting the edge of your delicate LCD panel. But we all know from laptops it doesn’t have to be that wide. So what might we see?
- Design the edges of the monitor to interlock, and have the supporting substrate further back on the left and further forward on the right. Thus let the two panels get closer together. Alternately let one monitor go behind the other and try to keep the distance to a minimum.
- Design monitors that can be connected together by removing the bezel and protection/mounting hardware and carefully inserting a joiner unit which protects the edges of both panels but gets them as close together as it can, and firmly joins the two backs for strength. May not work as well for 2x2 grids without special joiners.
- Just sell a monitor that has 2, 3 or 4 panels in it, mounted as close as possible. I think people would buy these, allowing them to be priced even better than two monitors. Offer rows of 1, 2 or 3 and a 2x2 grid. I will admit that a row of 4, which is what I want, is not likely to be as big a market.
- Sell components to let VARs easily build such multi-panel monitors.
When it comes to multi-panel, I don’t know how close you could get the panels, but I suspect it could be quite close. So what do you put in the gap? Well, it could be a black strip or a neutral strip. It could also be a translucent one that deliberately covers one or two pixels on each side, and thus shines and blends their colours. It might be interesting to see how much you could reduce the visual effect of the gap. The eye has no problem looking through grid windows at a scene and not seeing the bars, so it may be that bars remain the right answer.
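Whatever covers the seams, the image can at least skip the pixels hidden under each one, so that lines run straight across panels the way a scene does behind window bars. The offset arithmetic, assuming 24” 1080p panels at roughly 92 ppi and a 1” seam (illustrative numbers):

```python
# Hedged sketch: source-image x offset for each panel in a row of a video
# wall, skipping the pixels that would fall under each seam.

def panel_x_offset(panel_index, panel_px=1920, panel_ppi=92, gap_in=1.0):
    gap_px = round(panel_ppi * gap_in)      # pixels hidden per seam
    return panel_index * (panel_px + gap_px)

print([panel_x_offset(i) for i in range(4)])  # -> [0, 2012, 4024, 6036]
```

So a four-panel row of my hoped-for photo wall would crop its panorama from a source about 6036 + 1920 pixels wide, with 92-pixel strips discarded at each seam.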
It might even be possible to cover the gap with a small thin LCD display strip. Such a strip, designed to have a very sharp edge, would probably go slightly in front of the panels, and appear as a bump in the screen — but a bump with pixels. From a distance this might look like a video wall with very obscured seams.
For big video walls, projection is still a popular choice, though such walls must be very deep. With projection you barely need a bezel at all; in fact you can overlap projectors and use special software to blend them into a completely seamless display. However, projectors need expensive bulbs that burn out fairly quickly in constant use, among other downsides. LCD panel walls have enough upsides that people would tolerate the gaps if they can be made small using the techniques above.
Anybody know how the Barco wall at the Comcast center is done? Even in the video from people’s camcorders, it looks very impressive.
If you see LCD panels larger than 24” with thin bezels (3/8 inch or less) at a good price (under $250) and with a good quality panel (one that doesn’t change colour as you move your head up and down), let me know. The Samsung 2443 looked good until I learned that it, and many others in this size, have serious viewing-angle problems.
Submitted by brad on Fri, 2009-09-25 00:33.
Tonight I watched the debut of FlashForward, which is based on the novel of the same name by Rob Sawyer, an SF writer from my hometown whom I have known for many years. However, “based on” is the correct phrase because the TV show features Hollywood’s standard inability to write a decent time travel story. Oddly, just last week I watched the fairly old movie “Deja Vu” with Denzel Washington, which is also a time travel story.
Hollywood absolutely loves time travel. It’s hard to find a Hollywood F/SF TV show that hasn’t fallen to the temptation to have a time travel episode. Battlestar Galactica’s producer avowed he would never have time travel, and he didn’t, but he did have a god who delivered prophecies of the future which is a very close cousin of that. Time travel stories seem easy, and they are fun. They are often used to explore alternate possibilities for characters, which writers and viewers love to see.
But it’s very hard to do it consistently. In fact, it’s almost never done consistently, except perhaps in shows devoted to time travel (where it gets more thought) and not often even then. Time travel stories must deal with the question of whether a trip to the past (by people or information) changes the future, how it changes it, who it changes it for, and how “fast” it changes it. I have an article in the works on a taxonomy of time travel fiction, but some rough categories from it are:
- Calvinist: Everything is predestined; nothing changes. When you go back into the past it turns out you always did, and it results in the same present you came from.
- Alternate world: Going into the past creates a new reality, and the old reality vanishes (at varying speeds) or becomes a different, co-existing fork. Sometimes only the TT (time traveler) is aware of this, sometimes not even she is.
- Be careful not to change the past: If you change it, you might erase yourself. If you break it, you may get a chance to fix it in some limited amount of time.
- Go ahead and change the past: You won’t get erased, but your world might be erased when you return to it.
- Try to change the past and you can’t: Some magic force keeps pushing things back the way they are meant to be. You kill Hitler and somebody else rises to do the same thing.
Inherent in several of these is the idea of a second time dimension, in which there is a “before” the past was changed and an “after” the past was changed. In this second time dimension, it takes time (or rather time-2) for the changes to propagate. This is mainly there to give protagonists a chance to undo changes. We see Marty McFly slowly fade away until he gets his parents back together, and then instantly he’s OK again.
In a time travel story, it is likely we will see cause follow effect, reversing normal causality. However, many writers take this as an excuse to throw all logic out the window. And almost all Hollywood SF inconsistently mixes up the various modes I describe above in one way or another.
Spoilers below for the first episode of FlashForward, and later for Deja Vu.
Update note: The fine folks at io9 asked FlashForward’s producers about the flaw I raise but they are not as bothered by it.
Submitted by brad on Wed, 2009-09-23 17:50.
It seems that with more and more of the online transactions I engage in — and sometimes even when I don’t buy anything — I will get a request to participate in a customer satisfaction survey. Not just some of the time in some cases, but with every purchase. I’m also seeing it on web sites — sometimes just for visiting a web site I will get a request to do a survey, either while reading, or upon clicking on a link away from the site.
On the surface this may seem like the company is showing they care. But in reality it is just the marketing group’s thirst for numbers both to actually improve things and to give them something to do. But there’s a problem with doing it all the time, or most of the time.
First, it doesn’t scale. I do a lot of transactions, and in the future I will do even more. I can’t possibly fill out a survey on each, and I certainly don’t want to. As such I find the requests an annoyance, almost spam. And I bet a lot of other people do.
And that actually means that if you ask too often, you will get a self-selected subset of people who either have lots of free time, or who have something pointed to say (i.e., they had a bad experience, or more rarely a very good one). So your survey becomes valueless as data collection the more people you ask to do it, or rather the more refusals you get. Oddly, you will get more useful results asking fewer people.
Sort of. Because if other people keep asking everybody, it creates the same burn-out and even a survey that is only requested from 1 user out of 1000 will still see high rejection and self-selection. There is no answer but for everybody to truly only survey a tiny random subset of the transactions, and offer a real reward (not some bogus coupon) to get participation.
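A minimal simulation sketch of this self-selection effect. All the satisfaction weights and response rates below are invented for illustration; the point is only the direction of the bias:

```python
import random

random.seed(42)

# Hypothetical population: satisfaction scores from 1 (awful) to 5 (great).
# Most transactions are fine; a few are very good or very bad.
population = random.choices([1, 2, 3, 4, 5],
                            weights=[5, 10, 20, 50, 15], k=100_000)

def responds(score, fatigue):
    # Assumed model: people with extreme experiences (1 or 5) respond
    # anyway, while ordinary customers respond less and less as survey
    # fatigue (0 = fresh, 1 = burned out) grows.
    p = 0.30 if score in (1, 5) else 0.20 * (1 - fatigue)
    return random.random() < p

def surveyed_mean(fatigue):
    answers = [s for s in population if responds(s, fatigue)]
    return sum(answers) / len(answers)

true_mean = sum(population) / len(population)
print(f"true mean satisfaction:      {true_mean:.2f}")
print(f"surveyed mean, low fatigue:  {surveyed_mean(0.0):.2f}")
print(f"surveyed mean, high fatigue: {surveyed_mean(0.9):.2f}")
```

With these made-up numbers, the burned-out survey overweights the extremes, so its mean drifts away from the truth even though it still collects thousands of responses.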
I also get phone surveys today from companies I have actually done business with. I ask them, “Do you have this survey on the web?” So far, they always say no, so I say, “I won’t do it on the phone, sorry. If you had it on the web I might have.” I’m lying a bit, in that the probability is still low I would do it, but it’s a lot higher. I can do a web survey in 1/10th the time it takes to get quizzed on the phone, and my time is valuable. Telling me I need to do it on the phone instead of the web says the company doesn’t care about my time, and so I won’t do it and the company loses points.
Sadly, I don’t see companies learning these lessons, unless they hire better stats people to manage their surveys.
Also, I don’t want a reminder from everybody I buy from on eBay to leave feedback. In fact, remind me twice and I’ll leave negative feedback if I’m in a bad mood. I prefer to leave feedback in bulk, so that every transaction isn’t really multiple transactions. Much better if eBay sent me a reminder once a month to leave feedback for those I didn’t report on, and took me right to the bulk feedback page.
Submitted by brad on Tue, 2009-09-22 16:05.
I have put up a gallery of panoramas for Burning Man 2009. This year I went with the new Canon 5D Mark II, which has remarkable low-light shooting capabilities. As such, I generated a number of interesting new night panoramas in addition to the giant ones of the day.
In particular, you will want to check out the panorama of the crowd around the burn, as seen from the Esplanade, and the night scene around the Temple, and a twilight shot.
Below you see a shot of the Gothic Raygun Rocket, not because it is the best of the panoramas, but because it is one of the shortest and thus fits in the blog!
Some of these are still in progress. Check back for more results, particularly in the HDR department. The regular sized photos will also be processed and available in the future.
Finally, I have gone back and rebuilt the web pages for the last 5 years of panoramas at a higher resolution and with better scaling. So you may want to look at them again to see more detail. A few are also up as gigapans including one super high-res 2009 shot in a zoomable viewer.
Submitted by brad on Mon, 2009-09-21 14:40.
Two events I will be at…
Tonight, at 111 Minna Gallery in San Francisco, we at EFF will be hosting a reading by Randall Munroe, creator of the popular nerd comic “xkcd.” There is a regular ticket ($30) and a VIP reception ticket ($100) and just a few are still available. Payments are contributions to the EFF.
In two weeks, on Oct 3-4, I will be speaking on the future of robot cars at the Singularity Summit in New York City. Lots of other good speakers on the podium too.
See you there…
Submitted by brad on Wed, 2009-09-16 13:05.
Two years ago, I discussed solutions for Burning Man Exodus. The problem: get 45,000 people off the playa in 2 days, 95% of them taking a single highway south which goes through a small town with a chokepoint capacity of about 450 cars/hour. Quite often wait times to get onto the road are 4 hours or more, though this year things were smoother (perhaps due to lower attendance) and the number of people with 4-hour waits was lower. In a bad year, we might imagine that 25,000 people wait an average of 3 hours, or 75,000 person-hours, almost 40 man-years of human labour.
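The arithmetic behind that estimate is easy to check (the average vehicle occupancy is my assumption; the other figures are the ones above):

```python
# Chokepoint arithmetic for the exodus.
cars_per_hour = 450        # chokepoint capacity through the small town
people = 45_000
people_per_car = 2.5       # assumed average occupancy, not a real figure

hours_to_clear = people / people_per_car / cars_per_hour
print(f"hours to push everyone through: {hours_to_clear:.0f}")

# Bad-year waiting cost.
waiters, avg_wait = 25_000, 3        # people waiting, hours each
person_hours = waiters * avg_wait
work_years = person_hours / 2000     # roughly 2000 working hours per year
print(f"{person_hours:,} person-hours is about {work_years:.1f} work-years")
```

At 2.5 people per car, just pushing everyone through takes 40 hours of continuous flow, which is why the line is measured in hours rather than minutes.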
Some judge my prior solution, with appointments, as too complex. Let’s try something which is perhaps simpler, at least at its base, though I have also thought up some complexities that may have value.
When you are ready to leave, drive to the gate. There, as was tried in 2000, you would be directed into a waiting lot, shaped with cones at the front. The lot would have perhaps 20 rows of 10 cars (around 150 vehicles, as there are so many trailers and RVs). The lot would have a big number displayed. There you would park. You would then have three options:
- Stay in the lot and party with the other people in the lot, or sit in your car. This is in fact what you would do in the current situation, except there you start your engine and go forward 30 feet every minute. Share leftover food. Give donations to DPW crew. Have a good time.
- Go to the exodus station near the parking lots. Get a padded envelope and write your address on it, and your plate number, DL number and car description. Put your spare set of keys in the envelope. Get on your bike, or walk back to the city, and have a good time there. Listen to Exodus Radio. They will give reports on when your lot is going to move, in particular a 30 minute warning. When you get it, go back, pick up your keys and get ready.
- Volunteer to help with city cleanup. Do that by driving to the premium section of the waiting lot. Park there, wait a bit, and then get on a bus which takes you somewhere to do an hour shift of clean-up. You MOOP-check the playa, clean, take down infrastructure, or spend an hour doing Exodus work which you trained for earlier. At the end of your shift, you are free to take a bus back to the lot, or wait in the city with friends. Listen to Exodus Radio. They will call your premium lot. It will be called well before the regular lot. Ideally, give an hour, gain an hour! Get there and be ready.
When your lot is called, the Exodus worker pulls back the cones and the lanes stream out, non-stop (but still 10mph) off the playa. The road does not have to be lengthened to hold more cars. At the blacktop entrance, an Exodus worker with a temporary traffic light has it set to a green left arrow except when other traffic is coming, when it’s red. You turn without hesitation (people do that on green arrows, but slow down for flag workers.) Off you go.
As noted, it seems a good idea that people who want to leave the lot leave a set of keys. Not their only set — it is foolish to bring only one set to the playa anyway, and if you read the instructions you knew this. This allows exodus workers to easily move vehicles for people who don’t return, and there will be some. Even so the lots should be designed so it’s not hard to get around them. If the first lane has a spare lane to the other direction that works. It does require somebody to hand back the keys. If you read the instructions, they will say to bring photos of yourself (and alternate drivers.) Tape that photo to the key envelope and it makes it very quick and easy for the key wrangler to hand you the right keys. Don’t bring a photo and they must confirm a DL number, which they won’t have time to do if time is short — so get there in plenty of time.
If you don’t get there in time to get your keys, you can wait, or you can pay $10 (or whatever it costs) and the BM Org will mail you the keys after the event. Of course with rental vehicles this is not an option, so be there early.
However, it may also be simpler to not do the key system, and tow people who don’t show up, and charge them a fat fee for that. Or tow those who don’t leave a key. People might leave a fake key, which would result in a tow and an even larger fine, perhaps. As such I am not wedded to the key desk idea and it may be simpler to first see if no-shows are a big problem. No-shows can be punished in lots of ways if they signed a contract before leaving.
There is an issue for people who do volunteer work and then head for the city. They need to have left keys before the volunteer shift, or return to the lot to leave them, or not leave them and risk a tow.
Volunteers would get a leader who directs them what they will be doing. A common task will be doing a playa walk/MOOP sweep. The leader will listen to Exodus Radio and know if things move quickly and the volunteers must return. Normally, however, volunteer shifts would be taken only when the line is very long, much longer than a volunteer shift. People can of course offer to do more than one shift when the line is long but in that case they should bring their own portable radio, just as people who leave the lot should bring one or be near one. The shift leaders could also have a radio on loud enough for all to hear, hopefully the DJ will be doing something fun between exodus lot announcements.
As noted, one of the things people can volunteer for is exodus work itself. The offer of early exodus in exchange for an hour of exodus work assures there can never be a shortage of workers as long as there is a base of workers that does it without that reward. You’re helping the people ahead of you in line get out earlier. However, regular (non-leaving) volunteers are needed for when the line is short and first bunches up, and for when it shortens again.
To do exodus work you would have to attend training in advance, and be certified as able to do it. Probably done in SF, but possibly on-playa. Some other volunteer jobs (such as cleanup crew leader) would require some training and approval.
Staff needed:
- An exodus DJ (in a tower overlooking the line and the lots) with assistant or two who are controlling the whole operation.
- Flag worker controlling the traffic light at the blacktop. Possibly others in Gerlach.
- 2 crews of 1-2 workers directing cars into the lot currently filling up. They also prepare the lot, replacing the exit cones and possibly moving no-show cars to the side. May have a golf cart.
- 1-2 workers diverting cars from the main exit lane merge (the “fallopian tubes”) to the staging lots when needed. A cop would be very handy here.
- 1 worker to remove the cones at the lot being emptied and wave cars out of it. (The Exodus DJ is also telling those people to get going.) When only one lane is left, this person moves to the next lot. Worker probably has a scooter or golf cart.
- 1-2 workers to man the key desk.
The police come in huge numbers and spend a lot of time on victimless crimes. Managing traffic is a great way to make really effective use of their police powers. Police can be there to deal with people who ignore signs, bypass or cut out of lots, or who leave their car without doing a key drop or contract.
How to start the lots
It is an interesting problem how to start the lots close to the city. Initially the volume is low and people exit directly, tending to spread into multiple lanes without much direction, since faster vehicles are eager to pass slow ones. Eventually they will bunch up at the forced merge, and then the bunch-up will spread backwards, traffic-jam style. However, this is taking place three miles from the city, at least 15 minutes’ drive at 10mph. There is a magic amount of back-up at which point you should start holding cars, and then a point at which you should release a lot full of them. Fortunately any short gaps you put in the stream are not wasted, as they are re-smoothed on the blacktop before Gerlach-Empire, which is believed to be the primary choke point. However, it will take experience to learn the exact right times, so the first year will not do as well as later years. Data has been kept on car counts from the past, presumably broken down by hour, which could help.
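One crude way to model the “magic amount of back-up” is a fluid queue: the backlog grows by however much the hourly departures exceed the chokepoint capacity. A sketch, with an entirely hypothetical departure surge:

```python
# Fluid model of the exit backlog. Capacity and lot size are the figures
# used earlier; the hourly departure rates are made up for illustration.
CHOKEPOINT = 450   # cars/hour through the chokepoint
LOT_SIZE = 150     # vehicles per staging lot

def backlog_by_hour(departures):
    # departures: cars/hour arriving at the gate, one value per hour.
    # Returns the queue length (cars) at the end of each hour.
    queue, history = 0, []
    for rate in departures:
        queue = max(0, queue + rate - CHOKEPOINT)
        history.append(queue)
    return history

surge = [200, 400, 700, 900, 800, 600, 400, 300]  # hypothetical Monday
backlog = backlog_by_hour(surge)
print("backlog at each hour:", backlog)
print("staging lots needed at the peak:", max(backlog) // LOT_SIZE + 1)
```

With this surge the backlog only appears in hour three, which is the signal to start filling lots; hold cars any earlier and you are parking vehicles the chokepoint could have absorbed.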
Submitted by brad on Tue, 2009-09-15 17:21.
There is a number that should not be horribly hard to calculate by the actuaries of the health insurance companies. In fact, it’s a number that they have surely already calculated. What would alternate health insurance systems cost?
A lot of confusion in the health debate concerns two views the public has of health insurance. On the one hand, it’s insurance. Which means that of course insurers would not cover things like most pre-existing conditions. Insurance is normally only sold to cover unknown risk in every other field. If your neighbours regularly shoot flaming arrows onto your house, you will not get fire insurance to cover that, except at an extreme price. Viewed purely as insurance, it is silly to ask insurance companies to cover these things. Or to cover known and voluntary expenses, like preventative care, or birth control pills and the like. (Rather, an insurance company should decide to raise your prices if you don’t take preventative care, or allocate funds for the ordinary costs of planned events, because they don’t want to cover choices, just risks.)
However, we also seek social goals for the health insurance system. So we put rules on health insurance companies of all sorts. And now the USA is considering a very broad change — “cover everybody, and don’t ding them for pre-existing conditions.”
From a purely business standpoint, if you don’t have pre-existing conditions, you don’t want your insurance company to cover them. While your company may not be a mutual one, in a free market all should be not too far off the range of such a plan. Everything your company covers that is expensive and not going to happen to you raises your premiums. If you are a healthy-living, healthy person, you want to insure with a company that covers only such people’s unexpected illnesses, as this will give you the lowest premiums by a wide margin.
However, several things are changing the game. First of all, taxes are paying for highly inefficient emergency room care for the uninsured, and society is paying other costs for a sick populace, including the spread of disease. Next, insurance companies have discovered that if the application process is complex enough, then it becomes possible to find a flaw in the application of many patients who make expensive claims, and thus deny them coverage. Generally you don’t want to insure with a company that would do this: while your premiums will be lower, it is too hard to predict if this might happen to you. The more complex the policy rules, the more impossible it is to predict. However, it is hard to discover this in advance when buying a policy, and hard to shop on.
But when an insurance company decides on a set of rules, it does so under the guidance of its actuaries. They tell the officers, “If you avoid covering X, it will save us $Y” and they tell it with high accuracy. It is their job.
As such, these actuaries should already know the cost of a system where a company must take any client at a premium decided by some fairly simple factors (age being the prime one) compared to a system where they can exclude or surcharge people who have higher risks of claims. Indeed, one might argue that while clearly older people have a higher risk of claims, that is not their fault, and even this should not be used. Every factor a company uses to deny or surcharge coverage is something that reduces its costs (and thus its premiums) or they would not bother doing it.
On the other hand, elimination of such factors of discrimination would reduce costs in selling policies and enforcing policies, though not enough to make up for it, or they would already do it, at least in a competitive market. (It’s not, since any company that took all comers at the same price would quickly be out of business as it would get only the rejects of other companies.)
Single payer systems give us some suggestion on what this costs, but since they all cost less than the current U.S. system it is hard to get guidance. They get these savings for various reasons that people argue about, but not all of them translate into the U.S. system.
There is still a conundrum in a “sell to everybody” system. Insurance plans will still compete on how good the care they will buy is. What doctors can you go to? HMO or PPO? What procedures will they pay for, what limits will they have? The problem is this: If I’m really sick, it is very cost effective for me to go out and buy a very premium plan, with the best doctors and the highest limits. Unlike a random person, I know I am going to use them. It’s like letting people increase the coverage on their fire insurance after their house is on fire. If people can change their insurance company after they get sick then high-end policies will not work. This leaves us back at trying to define pre-existing conditions, and for example allowing people only to switch to an equivalent-payout plan for those conditions, while changing the quality of the plan on unknown risks. This means you need to buy high-end insurance when you are young, which most people don’t. And it means companies still have an incentive to declare things as pre-existing conditions to cap their costs. (Though at least it would not be possible for them to deny all coverage to such customers, just limit it.)
Some would argue that this problem is really just a progressive tax — the health plans favoured by the wealthy end up costing 3 times what they normally would while poorer health plans are actually cheaper than they should be. But it should put pressure on all the plans up the chain, as many poor people can’t afford a $5,000/month premium plan no matter that it gives them $50,000/month in benefits, but the very wealthy still can. So they will then switch to the $2,000/month plan the upper-middle class prefer, and go broke paying for it, but stay alive.
Or let’s consider a new insurance plan, the “well person’s insurance” which covers your ordinary medical costs, and emergencies, but has a lifetime cap of $5,000 on chronic or slow-to-treat conditions like cancer, diabetes and heart disease. You can do very well on this coverage, until you get cancer. Then you leave the old policy and sign up for premium coverage that includes it, which can’t be denied in spite of your diagnosis.
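That switch-after-diagnosis problem is textbook adverse selection, and a toy calculation shows how quickly it breaks premium pricing. Every number here is invented for illustration:

```python
# Toy adverse-selection model. If people can upgrade to the premium plan
# after a diagnosis, the premium pool ends up almost entirely sick.
SICK_RATE = 0.05       # assumed fraction diagnosed in a year
CHRONIC_COST = 60_000  # assumed annual cost of chronic care
ROUTINE_COST = 3_000   # assumed routine care cost per member

def break_even_premium(sick_fraction):
    # Annual premium needed to cover a pool in which the given
    # fraction of members need chronic care.
    return ROUTINE_COST + sick_fraction * CHRONIC_COST

# Risk pooled before anyone knows who will get sick:
print(f"pooled annual premium:       ${break_even_premium(SICK_RATE):,.0f}")
# Switching allowed after diagnosis, so the premium pool is ~95% sick:
print(f"post-diagnosis pool premium: ${break_even_premium(0.95):,.0f}")
```

Once the premium plan's pool is mostly people who already know they need it, its break-even premium approaches the cost of the care itself, and "insurance" stops being insurance.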
This may suggest that single-payer may be the only plan which works if you want to cover everybody. But single-payer (under which I lived for 30 years in Canada) is not without its issues. Almost all insurance companies ration care, including single payer ones, but in single payer you don’t get a choice on how much there will be.
However, it would be good if the actuaries would tell us the numbers here. Just what will the various options truly cost and what premiums will they generate? Of course, the actuaries have a self-interest or at least an employer’s interest in reporting these numbers, so it may be hard to get the truth, but the truth is at least out there.
Submitted by brad on Tue, 2009-09-15 14:06.
Yes, any system that is going to engage in some long activity that will freeze it up for more than a few seconds should offer a way to cancel, abort or undo it. You would think designers would know that by now.
My latest peeve is cell phones and other smart devices which are now complex enough to “boot.” In many cases if you want to see if they are on or not, you touch the power button — and if they were not on, they start their 30 to 60 second boot process, which you must then wait through so that you can turn them off again. On some devices there is still a physical power button (and on many laptops you can fake one by holding down the soft power button for 4 seconds) but that’s not a great solution. Sure, at some point the booting device reaches a state where it can’t easily abort the boot because it is writing state, but this usually takes at least several seconds if not much longer to reach, so you should be able to abort right away.
Submitted by brad on Sat, 2009-09-12 15:15.
I just decided to cancel my AAdvantage credit card in favour of a 1% cashback card with no annual fee. Many people have the frequent flyer cards, so let’s consider the math on them. They typically come with a high annual fee (around $80) while other cards have no fee and other rewards.
Let’s say you spend $25,000 per year on the card, which is enough for 25,000 miles or one domestic flight on the typical airline. With a typical cashback card you get 1% back though some cards give 2% or even 4% back on certain classes of purchases. I have an Amex from Costco that gives 3% on gasoline and 2% on travel expenses, but Amex is not as accepted as Visa or MC.
- Your cash cost for the 25K miles is $250 in forgone cashback plus the $80 annual fee = $330
- There are varying taxes and fees on award tickets, as low as $8 but sometimes much higher
- If you are booking less than 3 weeks in advance, fees of $50 to $100 will apply
- Finding available award seats can be quite difficult, the supply is far lower than for cash seats in most cases. There are also blackouts.
- You will not receive miles for your free trip. A typical cross-country return is 5,000 miles, worth $50 at the 1% rate, or $80-$100 at the rate airlines claim
- Most people use miles long after they earn them, and in fact have a large balance. So a time discount should apply. Miles sitting in accounts earn no interest, cash does.
As such the free trip is harder to get and costs $400 to $500. But that is not far from (and sometimes more than) the cash price of a ticket.
But cash is of course a much more flexible thing — you can use it for anything, not just airline tickets. There are a raft of cards out there now which tout “miles on any airline,” and what they really give you is a 1% cashback that is only good on airlines. General 1% cashback is much better.
There is an argument that upgrades do much better. Upgrading with miles can be cheaper than upgrading with cash, since the cash price of business class seats is very high. However, as you learn if you are not a top elite flyer, upgrades are quite hard to get. Others are ahead of you in line. AA also instituted a cash co-pay on upgrades making them more expensive than before when done with miles.
If you spend less than $25K per year on the card, the math gets even worse. At $12.5K per year, you gave up at least $460 to $550 for your free ticket, and when the tickets are available on miles, the cash fare is often lower. If you spend much more a year, the cost may make some sense.
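Pulling the numbers together in one quick sketch (the award fees and lost-miles value are illustrative mid-range picks from the ranges above, not exact figures):

```python
# Effective cost of the "free" 25,000-mile ticket versus a cashback card.
ANNUAL_FEE = 80        # typical miles-card annual fee
SPEND = 25_000         # dollars charged per year for 25,000 miles
CASHBACK_RATE = 0.01   # what a no-fee cashback card would have paid

forgone_cashback = SPEND * CASHBACK_RATE   # cashback given up
base_cost = forgone_cashback + ANNUAL_FEE

award_fees = 30        # taxes/fees on the award ticket (illustrative)
lost_miles_value = 50  # miles not earned on the free trip, at the 1% rate

total = base_cost + award_fees + lost_miles_value
print(f"cash given up for the 'free' ticket: ${total:.0f}")
```

With these mid-range picks the “free” ticket costs about $410 in forgone cash, before counting blackout hassles or any time discount on miles that sit in the account for years.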
A common trick for people who have mileage cards is to pick up group checks at restaurants and have everybody pay you cash. However, the cards that give 3% cashback at restaurants like the Amex are much better for this.
Submitted by brad on Wed, 2009-09-09 11:40.
After every RV trip (I’m back from Burning Man) I think of more things I want RVs to do. This year, as we have for many years, we built a power distribution system with a master generator rather than having each RV run its own noisy, smelly and inefficient generator. However, while this is expensive and a lot of work for a small group, it becomes cheap (though still a lot of work) for a larger group.
There’s been a revolution in small generator design of late thanks to the declining cost of inverters and other power conversion. A modern quality generator feeds the output of its windings to circuits to step up and step down the voltage to produce the required power. The output power is cleaner and more stable, and the generator is spun at different RPMs based on the power load, making it quieter and more efficient. With many models, you can also combine the internal output of two generators to produce a higher power generator.
RVs have come with expensive old-style generators that are quieter than cheap ones, and which produce better power, but today they are moving to inverter generators. With an inverter generator, it’s also possible to draw on the RV batteries for power surges (such as starting an AC or microwave) beyond what the generator can do.
I’m interested in the potential for smarter power, so what I would like to see is a way for a group of RVs with new generation power systems to plug together. In this way, they could all make use of the power in the other vehicles, and in most cases only a fraction of the generators would need to be running to provide power to all. (For example, at night, only one generator could power a whole cluster. In the day, with ACs running, several would need to run, but it would be very unlikely to have to run all, or even 75% of them.)
Submitted by brad on Tue, 2009-08-25 23:39.
RVs all have a fresh water tank. When you rent one, they will often tell you not to drink that water. That’s because the tanks are being filled up in all sorts of random places, out of the control of the rental company, and while it’s probably safe, they don’t want to promise it, nor disinfect the tank every rental.
I recently got a small “pen” which you put in a cup of water and it shines a UV light for 30 seconds to kill any nasties in the water. While I have not tried to test it on infected water, I presume that it works.
So it makes sense to me to install this sort of UV tube in the fresh water tank of RVs. Run it from time to time, particularly after a fill, to be sure the water is clean. Indeed, with an appropriate filter and a second pump, such an RV could happily fill its water tank from clear lakes and streams, allowing longer dry camping, which should have a market. Of course the gray/black water tanks will still get full, but outside showers and drinking water do not fill those tanks.
A urination-only toilet could also be done if near a stream or lake.
Submitted by brad on Mon, 2009-08-17 14:32.
The Worldcon (World Science Fiction Convention) in Montreal was enjoyable. Like all worldcons, which are run by fans rather than professional convention staff, it had its issues, but nothing too drastic. Our worst experience actually came from the Delta hotel, which I’ll describe below.
For the past few decades, Worldcons have been held in convention centers. They attract from 4,000 to 7,000 people and are generally felt to not fit in any ordinary hotel outside Las Vegas. (They don’t go to Las Vegas both because there is no large fan base there to run it, and the Las Vegas Hotels, unlike those in most towns, have no incentive to offer a cut-rate deal on a summer weekend.)
Because they are always held where deals are to be had on hotels and convention space, it is not uncommon for them to get the entire convention center or a large portion of it. This turns out to be a temptation which most cons succumb to, but should not. The Montreal convention was huge and cavernous. It had little of the intimacy a mostly social event should have. Use of the entire convention center meant long walks and robbed the convention of a social center — a single place through which you could expect people to flow, so you would see your friends, join up for hallway conversations and gather people to go for meals.
This is one of those cases where less can be more. You should not take more space than you need. The convention should be as intimate as it can be without becoming crowded. That may mean deliberately not taking function space.
A social center is vital to a good convention. Unfortunately when there are hotels in multiple directions from the convention center so that people use different exits, it is hard for the crowd to figure one out. At the Montreal convention (Anticipation) the closest thing to such a center was near the registration desk, but it never really worked. At other conventions, anywhere on the path to the primary entrance works. Sometimes it is the lobby and bar of the HQ hotel, but this was not the case here.
When the social center will not be obvious, the convention should try to find the best one, and put up a sign saying it is the congregation point. In some convention centers, meeting rooms will be on a different floor from other function space, and so it may be necessary to have two meeting points, one for in-between sessions, and the other for general time.
The social center/meeting point is the one thing it can make sense to use some space on. Expect a good fraction of the con to congregate there in break times. Let them form groups of conversation (there should be sound absorbing walls) but still be able to see and find other people in the space.
A good thing to make a meeting point work is to put up the schedule there, ideally in a dynamic way. This can be computer screens showing the titles of the upcoming sessions, or even human-changed cards. Anticipation used a giant schedule on the wall, which is also OK, though the other methods allow descriptions to go up with the names. Anticipation also did a roundly disliked “pocket” program printed on tabloid-sized paper, with two pages usually needed to cover a whole day; nobody had a pocket it could fit in. In addition, there were many changes to the schedule and the online version was not updated. Again, this is a volunteer effort, so I expect some glitches like this to happen; they are par for the course.
Submitted by brad on Sat, 2009-08-15 14:24.
Today, fewer and fewer photos are printed. We usually see them on screen, and more and more commonly, on a widescreen monitor. 16:9 screens are quite common, as are 16:10. You can hardly find a 4:3 screen any more, though that is the aspect ratio of most P&S cameras. Most SLRs are 3:2, which still doesn’t fill a widescreen monitor.
So there should be a standard tag to put in photos saying, “It’s OK to crop this photo to fill aspect ratio X:Y.” Then display programs could know to do this, instead of putting black bars at the sides. Since nearly all photos exceed the resolution of the screen by a large margin these days, there is no loss of detail in doing this; in fact there is probably a gain.
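The viewer-side logic for such a tag is simple enough to sketch. Here is a minimal Python illustration (the function name and the 16:9 default are my own for illustration; no such standard tag exists yet): given a photo’s pixel dimensions and the screen’s aspect ratio, it computes the largest centered crop box that matches the screen, so the image fills the display with no black bars.

```python
def crop_box_to_fill(width, height, screen_w=16, screen_h=9):
    """Return (left, top, right, bottom) of the largest centered crop
    of a width x height photo whose aspect ratio matches screen_w:screen_h.
    A viewer would apply this only when the photo's "ok to crop" tag is set."""
    target = screen_w / screen_h
    source = width / height
    if source > target:
        # Photo is wider than the screen: trim the left and right edges.
        new_w = round(height * target)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    else:
        # Photo is taller than the screen (e.g. 3:2 on 16:9): trim top and bottom.
        new_h = round(width / target)
        top = (height - new_h) // 2
        return (0, top, width, top + new_h)

# A 3:2 SLR frame (4288 x 2848) shown on a 16:9 monitor loses a thin
# strip off the top and bottom:
print(crop_box_to_fill(4288, 2848))  # → (0, 218, 4288, 2630)
```

Since the crop is centered, a 3:2 frame on a 16:9 screen gives up only about 15% of its height, split between top and bottom, which is why a default of “crop unless flagged” would rarely hurt.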
One could apply this tag (or perhaps its converse, one saying, “please display the entirety of this photo without crop”) in a photo organizer program of course. It could also be applied by cameras. To do this, the camera might display a dim outline of a widescreen aspect ratio, so you can compose the shot to fit in that. Many people might decide to do this as the default, and push a button when they need the whole field of view and want to set a “don’t crop” flag. Of course you can fix this after the fact.
Should sensors just go widescreen? Probably not. The lens produces a circular image, so more square aspect ratios make sense. A widescreen sensor would be too narrow in portrait mode. In fact, there’s an argument that as sensors get cheaper, they should go circular and then the user can decide after the fact if they want landscape, portrait or some particular aspect ratio in either.
The simplest way to start this plan would be to add a “crop top/bottom to fit width” option to photo viewers. And to add a “flag this picture to not do that” command to the photo viewer. A quick run through the slideshow, tagging the few photos that can’t be expanded to fill the screen, would prepare the slideshow for showing to others, or it could be done right during the show.