There are a variety of tools that offer encrypted filesystems for the various OSs. None of them are as easy to use as we would like, and none have reached the goal of “Zero User Interface” (ZUI), which is the only approach that has led to widespread deployment of encryption (e.g. Skype, SSH and SSL).
Many of these tools have a risk of failure if you don’t also encrypt your swap/paging space, because your swap file will contain fragments of memory, including encrypted files and even in some cases decryption keys. There is a lot of other confidential data which can end up in swap — web banking passwords and just about anything else.
It’s not too hard to encrypt your swap on Linux, and the ecryptfs tools package includes a utility to set up encrypted swap (the swap itself is encrypted not with ecryptfs but with dm-crypt, the block-device encryptor; the tool just sets that up for you.)
However, I would propose that swap be encrypted by default, even if the user does nothing. When you boot, the system would generate a random key for that session, and use it to encrypt all writes and reads to the swap space. That key of course would never be swapped out, and furthermore, the kernel could even try to move it around in memory to avoid the attacks the EFF recently demonstrated where the RAM of a computer that’s been turned off for a short time is still frequently readable. (In the future, computers will probably come with special small blocks of RAM in which to store keys which are guaranteed — as much as that’s possible — to be wiped in a power failure, and also hard to access.)
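To illustrate the idea (and only the idea — a real implementation lives in the kernel, in something like dm-crypt), here is a minimal Python sketch using the third-party cryptography package. Everything about it is conceptual: the key exists only in memory for the session, and everything written out is ciphertext.

```python
# Conceptual sketch only: a per-boot random key for swap encryption.
# Real systems do this in the kernel; this just shows the ephemeral-key idea.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

SESSION_KEY = AESGCM.generate_key(bit_length=256)  # random key, never written to disk
aead = AESGCM(SESSION_KEY)

def write_swap_page(page: bytes) -> bytes:
    """Encrypt a page before it goes to the swap device."""
    nonce = os.urandom(12)                   # fresh nonce per write
    return nonce + aead.encrypt(nonce, page, None)

def read_swap_page(blob: bytes) -> bytes:
    """Decrypt a page coming back from the swap device."""
    nonce, ciphertext = blob[:12], blob[12:]
    return aead.decrypt(nonce, ciphertext, None)

# When the machine powers off, SESSION_KEY is gone and the swap contents
# become permanently unreadable -- exactly the property we want.
```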
The automatic encryption of swap does bring up a couple of issues. First of all, it’s not secure with hibernation, where your computer is suspended to disk. Indeed, to make hibernation work, you would have to save the key at the start of the hibernation file. Hibernation would thus eliminate all security on the data — but this is no worse than the situation today, where all swap is insecure. And many people never hibernate.
Some recent searches have revealed unusual activity on twitter, and I wonder where it’s going. Narcissus searches on twitter reveal a variety of accounts tweeting links into my blog and sites, for reasons not clearly apparent.
For example, a week ago, a half dozen identical twitter accounts all tweeted my post about electric cars playing music. All the accounts had pictures of models as their icon, and the exact same set of twitter posts, which seem to be a random collection of blog and news URLs with a bit.ly pointer to the item, all posted via twitterfeed. These accounts seem to follow and be followed by about 500, presumably the same list.
Then more recently I see another set of accounts which all follow about 20 people but are followed by about 200 to 500. They are all posting “from API” and again are just posting links, this time with tinyurl.com. The account names are odd, too.
These also seem to have cute girls as icons. However, strangely, the many followers appear to be real, or at least some of them appear to be. Why are people following a spam robot? Are the followers people who were paid to do it, or are in some twitter-optimization scheme?
What I am curious about is the motive. Are they linking to real sites in the hope of gaining some sort of legitimacy in twitter indexing engines, so that later they can start linking to people who pay for it? (Twitter SEO?) Are they trying to form twitter equivalents of link farms? Are they just hoping that site authors will see the backlinks and look at them for some later purpose? (You would be amazed how many hits on a web server are there just to put a spammer in the “Referer” field, either to get you to look, or to show up in referer logs that some sites post to the web.)
As digital cameras have developed enough resolution to work as scanners, such as in the scanning table proposal I wrote about earlier, some people are also using them to digitize slides. You can purchase what is called a “slide copier” which is just a simple lens and holder which goes in front of the camera to take pictures of slides. These have existed for a long time as they were used to duplicate slides in film days. However, they were not adapted for negatives since you can’t readily duplicate a colour negative this way, because it is a negative and because it has an orange cast from the substrate.
There is at least one slide copier (The Opteka) which offers a negative strip holder, however that requires a bit of manual manipulation and the orange cast reduces the color gamut you will get after processing the image. Digital photography allows imaging of negatives because we can invert and colour adjust the result.
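For the curious, here is a rough numpy sketch of that invert-and-correct step, assuming you can sample the orange film base from an unexposed bit of the frame edge. Real conversion software applies better per-channel curves; all the names and numbers here are just illustrative.

```python
import numpy as np
from PIL import Image

def invert_negative(path: str, mask_sample_box=(0, 0, 40, 40)) -> Image.Image:
    """Very rough colour-negative conversion: divide out the orange mask, then invert."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0

    # Sample the orange film base from an unexposed area (e.g. the rebate between frames).
    x0, y0, x1, y1 = mask_sample_box
    base = img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)

    # Divide each channel by the base colour to neutralise the cast, then invert.
    neutral = np.clip(img / base, 0.0, 1.0)
    positive = 1.0 - neutral
    return Image.fromarray((positive * 255).astype(np.uint8))

# Usage: invert_negative("frame_0001.jpg").save("frame_0001_positive.jpg")
```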
To get the product I want, we don’t have too far to go. First of all, you want a negative strip holder which has wheels in the sprocket holes. Once you have placed your negative strip correctly with one wheel, a second wheel should be able to advance exactly one frame, just like the reel in the camera did when it was shooting. You may need to do some fine adjustments, but it is also satisfactory to have the image cover more than 36mm so that you don’t have to be perfectly accurate, and have the software do some cropping.
Secondly, you would like it so that ideally, after you wind one frame, it triggers the shutter using a remote release. (Remote release is sadly a complex thing, with many different ways for different cameras, including wired cable releases where you just close a contact but need a proprietary connector, infrared remote controls and USB shooting. Sadly, this complexity might end up adding more to the cost than everything else, so you may have to suffer and squeeze it yourself.) As a plus, a little air bulb should be available to blow air over negatives before shooting them.
Next, you want an illuminator behind the negative or slide. For slides you want white of course. For negatives however, you would like a colour chosen to undo the effects of the orange cast, so that the gamut of light received matches the range of the camera sensors. This might be done most easily with 3 LEDs matched to camera sensors in the appropriate range of brightness.
You could also simply make a product out of this light, to be used with existing slide duplicators; that’s the simplest way to do this in the small scale.
Why do all this, when a real negative scanner is not that expensive, and higher quality? Digitizing your negatives this way would be fast. Negative scanners all tend to be very slow. This approach would let you slot in a negative strip, and go wind-click-wind-click-wind-click-wind-click in just a couple of seconds, not unlike shooting fast on an old film camera. You would get quite decent scans with today’s high quality DSLRs. My 5D Mark II with 21 megapixels would effectively be getting around 4,000 dpi (5,616 pixels across a 36mm frame works out to roughly 3,960 pixels per inch), though with Bayer interpolation. If you wanted a scan for professional work or printing, you could then go back to that negative and do it on a more expensive negative scanner, cleaning it first, etc.
Another solution is just to send all the negatives off to one of the services which send them to India for cheap scanning, though these tend to be at a more modest resolution. This approach would let you quickly get a catalog of your negatives.
Of course, to get a really quick catalog, another approach would be to create a grid of 3 rows of negative strip holder which could then be placed on a light table — ideally a light table with a blueish light to compensate for the orange cast. Take a photo of the entire grid to get 12 individual photos in one shot. This will result (on the 5D) in about 1.5 megapixel versions of each negative. Not sufficient to work with but fine for screen and web use, and not too far off the basic service you get from the consumer scanning companies.
I have some of my old negatives in plastic sheets that go in binders, so I could do it directly with them, but it’s work to put negatives into these and would be much easier to slide strips into a plastic holder which keeps them flat. Of course, another approach would be to simply lay the strips on the light table and put a sheet of clear plexiglass on top of them, and shoot in a dim room to avoid reflections.
It would also be useful if digital cameras or video cameras tossed in a “view colour negative” mode which did its best to show an inverted live preview image with the orange cast removed. Then you could browse your negatives by holding them up to your camera (in macro mode) and see them in their true form, if at lower resolution. Of course you can usually figure out what’s in a negative, but sometimes it’s not so easy and requires a loupe; with this mode it wouldn’t.
Let me expand those ideas to a more complete list of what a phone and voicemail system could and should do when a call is coming in. My friends Rohit Kare and Salim Ismail recently released a cool product they called Caller ID 2.0, which shows you more advanced screen pops on the incoming caller, such as their recent tweets and facebook status, which is quite cute if a bit spooky. But I refer instead to the choices I might make after seeing their number and other such information.
First of all, as before, I should be offered the ability to answer the call and play a couple of different recordings until I start speaking into the phone. As described, these recordings would be along the lines of, “I’m going to take your call but I am briefly busy, driving or in an audience. Please hold on while I get somewhere that we can talk.” Since the phone should even know (based on rate of change of cell towers or GPS) that I am driving, it should be able to figure out which of the two conditions to report. While there are some minor privacy issues, it is worthwhile to let the other person know you are driving, as you really should have a different sort of conversation. This is useful enough that it might even be worth letting people know in the ringback that you are driving, though that raises privacy issues, particularly with strangers but even with spouses.
If the network will cooperate, it also makes sense to have choices that will, like the current “ignore” button, send the call to voice mail. These buttons however would control what sort of greeting is played, and perhaps other actions.
For example, you might send the call to a voice mail saying “Hi, I was too busy to take the call but I am with my phone, and I plan to get back to you within a few minutes. No need to leave a message.” (Though if there was no caller ID, you might indicate that they should enter their phone number for the callback.) You could also have 2 buttons, to describe a longer wait time or different procedures, such as “I will call you back when I get to the office.” As before, one button might make the greeting reveal things to the caller that you want to reveal, such as “Tell my location and speed.” After all, quite often with a trusted caller, the main purpose of the call will be to ask where you are and when you are going to get where you’re going.
I struck a nerve several years ago when I blogged about the horrible beep-beep noise made by heavy equipment when it backs up. Eventually a British company came up with a solution: a pulsed burst of white noise which is very evident when you are near the backing up vehicle but which disperses quickly so it doesn’t travel and annoy people a mile away as the beeps do.
Now I am seeing more and more suggestions that electric cars, which run quite silently when slow, make some noise for safety. This is fine, but there are also suggestions that there will be music and vanity noises, like ringtones or “cartones.” I can certainly see why this would appeal to people. (Already many think that their car is the place to play mind-numbing bass to announce musical taste to all others on the street.) There are even proposed laws.
While the cartones would be quieter than the backup beep or the heavy bass, I really fear that people will overdo what they think is the purpose — being attention grabbing. They will want to distract, and that will create a cacophony on the roads. It’s hard to make sounds that are meant to be attention grabbing (or vanity oriented) not travel beyond the range that you need them for safety.
I don’t want to imagine what it might be like living as I do with a 3-way stop outside my window, with each car singing a different tune or strange noise every time it slows down and starts up again. Who will want to live near intersections or parking lots?
I have a few proposals:
Like the beep-beep solution, use white noise that just doesn’t travel very far, but is easily noticed when close.
Use natural sounds, like waves crashing, birds chirping, wind blowing. We are tuned to hear those sounds in an otherwise silent environment, but our brains also can easily ignore them in background form.
Do indeed tune the volume based on ambient noise. This is suggested in the O’Reilly article linked above, where the aim is to make the sound loud enough; the same mechanism should also be used to keep it as quiet as possible.
Don’t do it at a speed where the tires and wind and electric motors are making enough noise already.
As robocar sensors become more common, such as LIDAR and radar, only make the noise when there are people who might come in contact with the vehicle. Otherwise, be silent.
Since robocars will not hit people in any normal operation, even people who don’t know they are there, such vehicles need not make any noise. However, if they see a human or anything else on a collision course, let them make a louder and more useful noise that really gets attention, like a burst of white or pink noise, or even a horn if that is ignored. Start quiet, and get louder if there is no reaction within a human reaction time.
Let’s not give up on this opportunity to return peace to our public spaces as electric cars and robocars become popular.
(Update: I had a formatting error in the original posting, this has been fixed.)
A few weeks ago when I wrote about the non deployment of SSL I touched on an old idea I had to make web transactions vastly more efficient. I recently read about Google’s proposed SPDY protocol which goes in a completely opposite direction, attempting to solve the problem of large numbers of parallel requests to a web server by multiplexing them all in a single streaming protocol that works inside a TCP session.
While calling attention to that, let me outline what I think would be the fastest way to do very simple web transactions. It may be that such simple transactions are no longer common, but it’s worth considering.
Today the way this works is pretty complex:
You do a DNS request for www.example.com via a UDP request to your DNS server. In the pure case this would also mean first asking where “.com” is, but your DNS server almost surely has that cached, so instead a UDP request goes straight to the “.com” master server.
The “.com” master server returns the address of the DNS server for example.com.
You send a DNS request to the example.com server, asking where “www.example.com” is.
The example.com DNS server sends a UDP response back with the IP address of www.example.com.
You open a TCP session to that address. First, you send a “SYN” packet.
The site responds with a SYN/ACK packet.
You respond to the SYN/ACK with an ACK packet. You also send a packet with your HTTP “GET” request for “/page.html.” This is a distinct packet, but there is no extra round trip, so it can be viewed as one step. You may also close off your side of the connection with a FIN packet.
The site sends back data with the contents of the page. If the page is short it may come in one packet. If it is long, there may be several packets.
There will also be acknowledgement packets as the multiple data packets arrive in each direction. You will send at least one ACK.
The other server will ACK your FIN.
The remote server will close the session with a FIN packet.
You will ACK the FIN packet.
You may not be familiar with all this, but the main thing to understand is that there are a lot of roundtrips going on. If the servers are far away and the time to transmit is long, it can take a long time for all these round trips.
It gets worse when you want to set up a secure, encrypted connection using TLS/SSL. On top of all the TCP, there are additional handshakes for the encryption. For full security, you must encrypt before you send the GET because the contents of the URL name should be kept encrypted.
A simple alternative
Consider a protocol for simple transactions where the DNS server plays a role, and short transactions use UDP. I am going to call this the “Web Transaction Protocol” or WTP. (There is a WAP variant called that but WAP is fading.)
You send, via a UDP packet, not just a DNS request but your full GET request to the DNS server you know about, either for .com or for example.com. You also include an IP and port to which responses to the request can be sent.
The DNS server, which knows where the target machine is (or next level DNS server) forwards the full GET request for you to that server. It also sends back the normal DNS answer to you via UDP, including a flag to say it forwarded the request for you (or that it refused to, which is the default for servers that don’t even know about this.) It is important to note that quite commonly, the DNS server for example.com and the www.example.com web server will be on the same LAN, or even be the same machine, so there is no hop time involved.
The web server, receiving your request, considers the size and complexity of the response. If the response is short and simple, it sends it in a single UDP packet (or a few) to your specified address. If no ACK is received in a reasonable time, it resends a few times until it gets one.
When you receive the response, you send an ACK back via UDP. You’re done.
The above transaction would take place incredibly fast compared to the standard approach. If you know the DNS server for example.com, it will usually mean a single packet to that server, and a single packet coming back — one round trip — to get your answer. If you only know the server for .com, it would mean a single packet to the .com server which is forwarded to the example.com server for you. Since the master servers tend to be in the “center” of the network and are multiplied out so there is one near you, this is not much more than a single round trip.
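Here is a hedged sketch of what the client side of such a transaction might look like in Python. The “WTP1” packet format, the port number and the field layout are all invented for illustration — the point is simply that the whole fetch is one datagram out and, with luck, one datagram back.

```python
import socket

def wtp_get(dns_server: str, host: str, path: str, timeout: float = 2.0) -> bytes:
    """Send a combined DNS+GET request over UDP and wait for the forwarded reply.

    Everything about the wire format here is hypothetical; the point is that the
    whole transaction is one outbound datagram and (ideally) one inbound one.
    """
    reply_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    reply_sock.bind(("", 0))                        # ephemeral port for the response
    reply_port = reply_sock.getsockname()[1]
    reply_sock.settimeout(timeout)

    request = f"WTP1 GET {host} {path} REPLY-PORT {reply_port}\n".encode()
    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sock.sendto(request, (dns_server, 53535))  # 53535: invented port for this sketch

    for _ in range(3):                              # retry a few times on silence
        try:
            data, addr = reply_sock.recvfrom(65535)
            reply_sock.sendto(b"WTP1 ACK\n", addr)  # acknowledge so the server stops resending
            return data
        except socket.timeout:
            send_sock.sendto(request, (dns_server, 53535))
    raise TimeoutError("no WTP response")
```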
A proposal is being floated in Europe for computerized convoys or road trains within the next decade. This is a proposal for a system where cars can hand over control to a lead car and follow in a train or convoy, without physical connection.
This idea comes up a lot as an early robocar technology. It is particularly common because it’s much easier to do — a human driver still is in charge, and the robotic control is limited to a very limited and simple environment. It’s safe to say that we could make this work very quickly if we wanted to. There is no navigation or vision required, no recognition of obstacles, no choice of speeds or turns. Cars that come together in a convoy can draft to get a serious boost in fuel efficiency, and of course the un-drivers can now relax and read or work on the trip.
Because I am a robocar booster, people are surprised when I say I am not too thrilled about this idea, at least as an early technology. Rather, I think it’s a great idea for later. In spite of the enthusiasm with which I write, the robocar problem is not a simple one. This much simpler problem is tempting but has some snags.
First of all, if you have a bug in a standalone robocar system, it may cause an accident, and that may injure or kill the occupants of the robocar, and perhaps one or two other cars. Death is less likely at urban speeds of course. A problem with a computerized convoy could have terrible results, involving scores of cars. Since most people want this for the highway, the problem would also occur at lethal speed. Convoys are just not the first place we want to test our systems and have our first accidents.
Secondly, forming convoys requires a critical mass of suitably equipped cars. Of course, you don’t need a dozen full robocars to make a train; all you would need is cars with drive-by-wire and some much simpler control circuitry. But even so, the incentive to get a car with this feature has to get over a critical mass hump if it’s going to be worthwhile. It’s not quite as bad as fully ad-hoc trains, since you can have scheduled trains, led by a bus or truck driver, and cars can see such a lead vehicle and get in behind it. But at first, the odds of many cars all finding one another at the same time are low. If the train is going faster than regular traffic in a carpool lane, as we hope it would, it will not be easy to join a train that moves past you on the highway. If it moves slower than traffic, it is easy to slow down and join it, but then it has to move slower, with all the attendant problems.
Computerized convoys have advantages and disadvantages over physical ones. Physical ones probably can only be formed while stopped, and probably only unformed that way too. One could see the last car in a physical convoy undocking while moving, so with correct ordering it might work out, but it’s a far cry from a virtual convoy which allows anybody to join and leave at any time.
Physical convoys however can transmit power. This is useful if you expect people to be driving short-range electric cars. They would take their short range car and join a convoy, and be powered by the lead locomotive while operating, and even be recharging a bit. After dispersion, the vehicles would only need to travel a short distance to their destination and back to the evening train.
Physical coupling makes it harder for one car to leave the train due to a failure. On the other hand it means that if the lead car wants to change lanes, all cars must do so. If the lead car leaves the road, they all do. Jack-knifing is a real worry, which is one reason that today even cargo road trains are limited to 2 trailers in urban areas, and 3 trailers in rural areas, if they are allowed at all.
Physical coupling requires specially modified vehicles. This is even more the case if the locomotive will actually be towing the vehicles physically rather than providing them with electricity for their motors and batteries. Either of these is a major modification, while virtual coupling only requires a drive-by-wire car and a small matter of programming.
Even full robocars probably should not form convoys right away. We should wait until our confidence is even higher, in spite of the fuel savings. If one car goes bad, or its occupants try to take over and move to manual driving, the consequences could be nasty in any convoy. And of course, the first robocars on the road will never get to join convoys as they will not meet the others. That’s why you need to solve the solo navigation problem first, and then you get enough on the road to work on the cooperation problems.
Recently I wrote about the desire to provide power in every sort of cable, in particular the video cable. And while we’ll be using the existing video cables (VGA and DVI/HDMI) for some time to come, I think it’s time to investigate new thinking in sending video to monitors. The video cable has generally been the highest bandwidth cable going out of a computer, though the fairly rare 10 gigabit ethernet is around the speed of HDMI 1.3 and DisplayPort, and 100gb ethernet will be yet faster.
Even though digital video methods are so fast, the standard DVI cable is not able to drive my 4 megapixel monitor — this requires dual-link DVI, which as the name suggests, runs 2 sets of DVI wires (in the same cable and plug) to double the bandwidth. The expensive 8 megapixel monitors need two dual-link DVI cables.
Now we want enough bandwidth to completely redraw a screen at a suitable refresh (100hz) if we can get it. But we may want to consider how we can get that, and what to do if we can’t get it, either because our equipment is older than our display, or because it is too small to have the right connector, or must send the data over a medium that can’t deliver the bandwidth (like wireless, or long wires.)
Today all video is delivered in “frames” which are an update of the complete display. This was the only way to do things with analog rasterized (scan line) displays. Earlier displays actually were vector based, and the computer sent the display a series of lines (start at x,y then draw to w,z) to draw to make the images. There was still a non-fixed refresh interval as the phosphors would persist for a limited time and any line had to be redrawn again quickly. However, the background of the display was never drawn — you only sent what was white.
Today, the world has changed. Displays are made of pixels but they all have, or can cheaply add, a “frame buffer” — memory containing the current image. Refresh of pixels that are not changing need not be done on any particular schedule. We usually want to be able to change only some pixels very quickly. Even in video we only rarely change all the pixels at once.
This approach to sending video was common in the early remote terminals that worked over ethernet, such as X windows. In X, the program sends more complex commands to draw things on the screen, rather than sending a complete frame 60 times every second. X can be very efficient when sending things like text, as the letters themselves are sent, not the bitmaps. There are also a number of protocols used to map screens over much slower networks, like the internet. The VNC protocol is popular — it works with frames but calculates the difference and only transmits what changes on a fairly basic level.
We’re also changing how we generate video. Only video captured by cameras has an inherent frame rate any more. Computer screens, and even computer generated animation are expressed as a series of changes and movements of screen objects though sometimes they are rendered to frames for historical reasons. Finally, many applications, notably games, do not even work in terms of pixels any more, but express what they want to display as polygons. Even videos are now all delivered compressed by compressors that break up the scene into rarely updated backgrounds, new draws of changing objects and moves and transformations of existing ones.
So I propose two distinct things:
A unification of our high speed data protocols so that all of them (external disks, SAN, high speed networking, peripheral connectors such as USB and video) benefit together from improvements, and one family of cables can support all of them.
A new protocol for displays which, in addition to being able to send frames, sends video as changes to segments of the screen, with timestamps as to when they should happen.
The case for approach #2 is obvious. You can have an old-school timed-frame protocol within a more complex protocol able to work with subsets of the screen. The main issue is how much complexity you want to demand in the protocol. You can’t demand too much or you make the equipment too expensive to make and too hard to modify. Indeed, you want to be able to support many different levels, but not insist on support for all of them. Levels can include:
Full frames (ie. what we do today)
Rastered updates to specific rectangles, with ability to scale them.
More arbitrary shapes (alpha) and ability to move the shapes with any timebase
VNC level abilities
X windows level abilities
Graphics card (polygon) level abilities
In the unlikely extreme, the abilities of high level languages like display postscript.
I’m not sure the last layers are good to standardize in hardware, but let’s consider the first few levels. When I bought my 4 megapixel (2560x1600) monitor, it was annoying to learn that none of my computers could actually display on it, even at a low frame rate. Technically single DVI has the bandwidth to do it at 30hz, but this is not a desirable option if it’s all you ever get to do. While I did indeed want to get a card able to make full use of it, the reality is that 99.9% of what I do on it could be done over the DVI bandwidth with just the ability to update and move rectangles, or to do so at a slower speed. The whole screen is only rarely completely replaced, and in those situations waiting 1/30th of a second is not an issue. But the ability to paint a small window at 120hz on displays that can do this might well be very handy. Adoption of a system like this would allow even a device with a very slow output (such as USB 2 at 480 megabits per second) to still use all the resolution for typical activities of a computer desktop. While you might think that video would be impossible over such a slow bus, if the rectangles could scale, such a bus could still do things like playing DVDs. While I do not suggest every monitor be able to decode our latest video compression schemes in hardware, the ability to use the post-compression primitives (drawing subsections and doing basic transforms on them) might well be enough to feed quite a bit of video through a small cable.
One could imagine even use of wireless video protocols for devices like cell phones. One could connect a cell phone with an HDTV screen (as found in a hotel) and have it reasonably use the entire screen, even though it would not have the gigabit bandwidths needed to display 1080p framed video.
Sending in changes to a screen with timestamps of when they should change also allows the potential for super-smooth movement on screens that have very low latency display elements. For example, commands to the display might involve describing a foreground object, and updating just that object hundreds of times a second. Very fast displays would show those updates and present completely smooth motion. Slower displays would discard the intermediate steps (or just ask that they not be sent.) Animations could also be sent as instructions to move (and perhaps rotate) a rectangle and do it as smoothly as possible from A to B. This would allow the display to decide what rate this should be done at. (Though I think the display and video generator should work together on this in most cases.)
It should be noted that HDMI supports a small amount of power (5 volts at 50ma) and in newer forms both it and DisplayPort have stopped acting like digitized versions of analog signals and more like highly specialized digital buses. Too bad they didn’t go all the way.
As noted, it is key that the basic levels be simple, to promote universal adoption. As such, the elements in such a protocol would start simple. All commands could specify a time at which they are to be executed, if not immediate:
Paint line or rectangle with specified values, or gradient fill.
Move object, and move entire screen
Adjust brightness of rectangle (fade)
Load pre-buffered rectangle. (Fonts, standard shapes, quick transitions)
Display pre-buffered rectangle
However, lessons learned from other protocols might expand this list slightly.
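To make the base level concrete, here is a speculative sketch of how such commands might be encoded on the wire. The opcodes, field widths and header layout are made up for illustration and are not any existing standard.

```python
import struct
import time

# Hypothetical opcodes for the base level of the protocol.
OP_FILL_RECT = 0x01   # paint rectangle with a solid value or gradient
OP_MOVE_RECT = 0x02   # move an on-screen rectangle (or the whole screen)
OP_FADE_RECT = 0x03   # adjust brightness of a rectangle
OP_LOAD_BUF  = 0x04   # load a pre-buffered rectangle (fonts, standard shapes)
OP_SHOW_BUF  = 0x05   # display a pre-buffered rectangle

HEADER = struct.Struct("!B Q H H H H")   # opcode, timestamp (µs), x, y, width, height

def fill_rect(x, y, w, h, rgb, at_time_us=0):
    """Encode a 'paint this rectangle at this time' command.

    at_time_us == 0 means 'immediately'; otherwise the display holds the
    update until the given timestamp, which is what allows smooth motion.
    """
    payload = struct.pack("!BBB", *rgb)
    return HEADER.pack(OP_FILL_RECT, at_time_us, x, y, w, h) + payload

# Example: paint a small red square 10 ms from now.
cmd = fill_rect(100, 100, 64, 64, (255, 0, 0),
                at_time_us=int(time.time() * 1e6) + 10_000)
```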
This, in theory allows the creation of a single connector (or compatible family of connectors) for lots of data and lots of power. It can’t be just one connector though, because some devices need very small connectors which can’t handle the number of wires others need, or deliver the amount of power some devices need. Most devices would probably get by with a single data wire, and ideally technology would keep pushing just how much data can go down that wire, but any design should allow for simply increasing the number of wires when more bandwidth than a single wire can do is needed. (Presumably a year later, the same device would start being able to use a single wire as the bandwidth increases.) We may, of course, not be able to figure out how to do connectors for tomorrow’s high bandwidth single wires, so you also want a way to design an upwards compatible connector with blank spaces — or expansion ability — for the pins of the future, which might well be optical.
There is also a security implication to all of this. While a single wire that brings you power, a link to a video monitor, LAN and local peripherals would be fabulous, caution is required. You don’t want to be able to plug into a video projector in a new conference room and have it pretend to be a keyboard that takes over your computer. As this is a problem with USB in general, it is worth solving regardless of this. One approach would be to have every device use a unique ID (possibly a certified ID) so that you can declare trust for all your devices at home, and perhaps everything plugged into your home hubs, but be notified when a never seen device that needs trust (like a keyboard or drive) is connected.
To some extent having different connectors helps this problem a little bit, in that if you plug an ethernet cable into the dedicated ethernet jack, it is clear what you are doing, and that you probably want to trust the LAN you are connecting to. The implicit LAN coming down a universal cable needs a more explicit approval.
Final rough description
Here’s a more refined rough set of features of the universal connector:
Shielded twisted pair with ability to have varying lengths of shield to add more pins or different types of pins.
Asserts briefly a low voltage on pin 1, highly current limited, to power negotiator circuit in unpowered devices
Negotiator circuits work out actual power transfer, at what voltages and currents and on what pins, and initial data negotiation about what pins, what protocols and what data rates.
If no response is given to negotiation (ie. no negotiator circuit) then measure resistance on various pins and provide specified power based on that, but abort if current goes too high initially.
Full power is applied, unpowered devices boot up and perform more advanced negotiation of what data goes on what pins.
When full data handshake is obtained, negotiate further functions (hubs, video, network, storage, peripherals etc.)
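As a rough illustration of that sequence, here is a Python sketch of the host-side logic. The voltages, thresholds and the methods on the hypothetical link object are invented to show the flow, not drawn from any real specification.

```python
# Sketch of the power/data negotiation flow described above.
# `link` is a hypothetical hardware abstraction; nothing here is a real standard.

PROBE_VOLTS = 5.0          # low-voltage, current-limited probe on pin 1
PROBE_LIMIT_AMPS = 0.05    # abort threshold for dumb devices

def bring_up(link):
    link.apply_probe(PROBE_VOLTS, PROBE_LIMIT_AMPS)      # power the negotiator only

    offer = link.ask_negotiator()                        # digital negotiation
    if offer is not None:
        volts, amps, pins = offer.volts, offer.amps, offer.pins
    else:
        # No negotiator chip: fall back to measuring resistance and
        # supplying a conservative, pre-specified power level.
        ohms = link.measure_resistance()
        volts, amps, pins = lookup_legacy_profile(ohms)

    link.apply_power(volts, amps, pins)                  # full power
    if link.current_draw() > amps:
        link.shut_off()                                  # over-current: abort immediately
        raise RuntimeError("device drew more than negotiated")

    link.negotiate_data()                                # hubs, video, network, storage...

def lookup_legacy_profile(ohms):
    """Map a sense resistance to a fixed, safe power offering (values illustrative)."""
    return (5.0, 0.5, ["pin1"]) if ohms > 1000 else (12.0, 1.0, ["pin1", "pin2"])
```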
First: I will be speaking on robocars tomorrow, Tuesday Nov 9, at 6:30 pm for the meeting of the Jewish High Tech Community in Silicon Valley. The talk is at 6:30pm at the conference center of Fenwick and West at Castro & California in Mountain View. The public is welcome to attend, there is a $10 fee for non-members. This will be similar to my talk at Stanford 2 weeks ago, and a bit more extensive than the one in New York early in October, which Forbes said was the audience favorite at the event.
Volvo has built a brand around safe cars, and last year committed that nobody would die in a newer Volvo by 2020. They plan to do much of this with better passenger safety systems, akin to the work they have done on airbags and crumple zones. However, they also intend to use a lot of computerized technologies to make it happen. Other teams are pushing to expand the goal inside Volvo to also stop people from being killed by Volvos. To that end, next year’s Volvo S60 will come with a “Pedestrian Avoidance System” which uses a camera and machine vision to identify pedestrians and calculate if the vehicle is about to hit one. If it sees a potential pedestrian collision it will beep and alert the driver. If the driver does nothing, the car will brake.
Here is a video of the S60 in action:
It’s impressive, though pure machine vision suffers problems as lighting changes, which is one reason most recent work has been on LIDAR. It will also be interesting to see if they can avoid making it too conservative. If the warning goes off all the time, even for a pedestrian who will (to the human eye) clearly slide by the side of the car at places like a crosswalk, drivers may learn to ignore the alarm, or get very annoyed and shut the whole system off if it brakes for them when they know an impact is not imminent. I’m hoping to learn more about Volvo’s efforts in the future. No other company has put as much effort into building a brand around safety, so we can expect Volvo, which has slipped in this status of late, to work very hard to maintain it and adapt robocar technologies to safer human driving and fully autonomous driving.
Dense triple parking
I have written of a simple algorithm to allow dense valet style parking of robocars, such as triple parking on the roadsides. In this algorithm, one gap is left in the outer lanes, and the robocars are able to move together, as an entire row segment, to “move the gap” as quickly as a single car can move. That way, if a car needs to get out from an inner lane, it can signal, and if the gap is currently ahead of it, for example, all the cars from the one next to it to the gap can move forward one space (at the same time) to put the gap next to the vehicle that needs to leave. This can happen in all the other rows and is easy, quiet and efficient for electric cars. It does not even need radio communication, as robocars will sense a car moving behind them or ahead of them, and immediately move in reaction. This request will move up the chain of cars to the gap. Of course, if one car does not move, the car behind it will only move a very short distance before refusing to go further, which would stop the whole effort (or in the case of an error, cause a very slow impact if the car behind keeps coming) and signal a need for human attention.
It seems like this should be possible even without many gaps, as long as there is enough spare space to allow a vehicle to wiggle out of its space. If there is just one gap, and a bit of wiggle room in the other rows, any car can still get out, just a bit more slowly. This is probably better done with a protocol for communication to assure it works quickly.
In this case, a gap on the outside lane (where there must be at least one) can be temporarily moved to the inside, and then back out. Consider 3 lanes of cars, with a gap in the outer lane (lane #3) and a car in lane #1 (the curb lane) wanting out. First the lane #3 cars would adjust to move the gap to the right place, a bit forward of our target car. Next, a car from lane #2 would move into this gap, leaving a gap in lane #2 into which our target car can move. This leaves a gap in lane #1 which can be filled by a car from lane #2 which is willing to move in, ideally right next to our target car. Likewise a car from lane #3 can now move into that gap, and the resulting gap in the outer lane #3 can be moved to allow exit by our target car.
This requires a great deal more car moving, though again with electric cars this may not be too expensive. If the cars can turn all their wheels, they can move horizontally as some concept cars can already do. Even without that, a robotic car can wiggle out without much room, and of course the gap would not be placed exactly in place with the target car, but probably slightly forward to allow transfer with fewer wiggles. The result is a whole valet lot with just one blank space needed to get any car reasonably quickly. Of course, this would only be done when the lot needed to be totally full. For any partially full lot, gaps would be left to minimize the car moves needed to get any car out. However, if space is at a premium — so much so as to justify the extra moving — it can be done.
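Here is a small Python sketch of the basic gap-shuffling step for a single row, under the simplifying assumption that a row is just a list with one empty slot; a real system would of course be coordinating actual vehicles.

```python
def move_gap_to(row, target_index):
    """Shift cars in a row so the single empty slot (None) ends up at target_index.

    `row` is a list where each element is a car id or None for the gap.
    All cars between the gap and the target shuffle one space toward the old
    gap position -- in a real lot they would all creep simultaneously, so the
    whole shift takes about as long as one car-length of movement.
    """
    gap = row.index(None)
    step = 1 if target_index > gap else -1
    while gap != target_index:
        row[gap], row[gap + step] = row[gap + step], row[gap]   # one car slides into the gap
        gap += step
    return row

# Example: the car in slot 1 of an inner row wants out; slide the outer row's
# gap from slot 4 to slot 1 so it can pull through.
outer = ["A", "B", "C", "D", None, "E"]
move_gap_to(outer, 1)   # -> ['A', None, 'B', 'C', 'D', 'E']
```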
I suggested this as a feature for my Canon 5D SLR which shoots video, but let me expand it for all video cameras, indeed all cameras. They should all include Bluetooth, notably the new high-speed Bluetooth 3.0. It’s cheap and the chips are readily available.
The first application is the use of the high-fidelity audio profile for microphones. Everybody knows the worst thing about today’s consumer video cameras is the sound. Good mics are often large, heavy and expensive, and people don’t want to carry them on the camera. Mics on the subjects of the video are always better. While they are not readily available today, if consumer video cameras supported them, there would be a huge market in remote bluetooth microphones for use in filming.
For quality, you would want to support an error correcting protocol, which means mixing the sound onto the video a few seconds after the video is laid down. That’s not a big deal with digital recorded to flash.
Such a system easily supports multiple microphones too, mixing them or ideally just recording them as independent tracks to be mixed later. And that includes an off-camera microphone for ambient sounds. You could even put down multiples of those, and then do clever noise reduction tricks after the fact with the tracks.
The cameraman or director could also have a bluetooth headset on (those are cheap but low fidelity) to record a track of notes and commentary, something you can’t do if there is an on-camera mic being used.
I also noted a number of features for still cameras as well as video ones:
Notes by the photographer, as above
Universal protocol for control of remote flashes
Remote control firing of the camera, with everything that USB remote control offers
At high-speed data rates, downloading of photos and even live video streams to a master recorder somewhere
It might also be interesting to experiment with smart microphones. A smart microphone would be placed away from the camera, nearer the action being filmed (sporting events, for example.) The camera user would then zoom in on the microphone, and with the camera’s autofocus determine how far away it is, and with a compass, the direction. Then the microphone, which could either be motorized or an array, could aim itself in the direction of the action. (It would be told the distance and direction of the action from the camera in the same fashion as the mic was located.) When you pointed the camera at something, the off-camera mic would also point at it, except during focus hunts.
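As a sketch of the geometry involved (flat ground, 2-D, camera at the origin — all simplifying assumptions), the aiming calculation is just a little trigonometry.

```python
import math

def mic_bearing_to_action(mic_dist, mic_bearing_deg, action_dist, action_bearing_deg):
    """Given the camera's distance/compass bearing to the remote mic and to the
    action, return the compass bearing the mic should point along.

    Simple flat-ground, 2-D geometry with the camera at the origin; a real
    product would also want elevation and ongoing updates as the camera pans.
    """
    def to_xy(dist, bearing_deg):
        rad = math.radians(bearing_deg)
        return dist * math.sin(rad), dist * math.cos(rad)   # x = east, y = north

    mic_x, mic_y = to_xy(mic_dist, mic_bearing_deg)
    act_x, act_y = to_xy(action_dist, action_bearing_deg)
    bearing = math.degrees(math.atan2(act_x - mic_x, act_y - mic_y))
    return bearing % 360

# Example: mic 20 m away at bearing 90 degrees, action 30 m away at bearing 45 degrees.
print(round(mic_bearing_to_action(20, 90, 30, 45), 1))
```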
There could, as before, be more than one of these, and this could be combined with on-person microphones as above. And none of this has to be particularly expensive. The servo-controlled mic would be a high end item but within consumer range, and fancy versions would be of interest to pros. Remote mics would also be good for getting better stereo on scenes.
Key to all this is that adding the bluetooth to the camera is a minor cost (possibly compensated for by dropping the microphone jack) but it opens up a world of options, even for cheap cameras.
And of course, the most common cameras out there now — cell phones — already have bluetooth and compasses and these other features. In fact, cell phones could readily be your off camera microphones. If there were a nice app with a quick pairing protocol, you could ask all the people in the scene to just run it on their cell phone and put the phone in their front pocket. Suddenly you have a mic on each participant (up to the limit of Bluetooth, which is about 8 devices at once.)
While giving a talk on robocars to a Stanford class on automotive innovation on Wednesday, I outlined the growing problem of software recalls and how they might affect cars. If a company discovers a safety problem in a car’s software, it may be advised by its lawyers to shut down or cripple the cars by remote command until a fix is available. Sebastian Thrun, who had invited me to address this class, felt this could be dealt with through the ability to remotely patch the software.
This brings up an issue I have written about before — the giant dangers of automatic software updates. Automatic software updates are a huge security hole in today’s computer systems. On typical home computers, there are now many packages that do automatic updates. Due to the lack of security in these OSs, a variety of companies have been “given the keys” to full administrative access on the millions of computers which run their auto-updater. Companies which go to all sorts of lengths to secure their computers and networks are routinely granting all these software companies top level access (ie. the ability to run arbitrary code on demand) without thinking about it. Most of these software companies are good and would never abuse this, but this doesn’t mean they don’t have employees who can be bribed or suborned, or security holes in their own networks which would let an attacker in to make a malicious update which is automatically sent out.
I once asked the man who ran the server room where the servers for Pointcast (the first big auto-updating application) were housed, how many fingers somebody would need to break to get into his server room. “They would not have to break any. Any physical threat and they would probably get in,” I heard. This is not unusual, and often there are ways in needing far less than this.
So now let’s consider software systems which control our safety. We are trusting our safety to computers more and more these days. Every elevator or airplane has a computer which could kill us if maliciously programmed. More and more cars have them, and more will over time, long before we ride in robocars. All around the world are electric devices with computer controls which could, if programmed maliciously, probably overload and start many fires, too. Of course, voting machines with malicious programs could even elect the wrong candidates and start baseless wars. (Not that I’m saying this has happened, just that it could.)
However these systems do not have automatic update. The temptation for automatic update will become strong over time, both because it is cheap and it allows the ability to fix safety problems, and we like that for critical systems. While the internal software systems of a robocar would not be connected to the internet in a traditional way, they might be programmed to, every so often, request and accept certified updates to their firmware from the components of the car’s computer systems which are connected to the net.
Imagine a big car company with 20 million robocars on the road, and an automatic software update facility. This would allow a malicious person, if they could suborn that automatic update ability, to load in nasty software which could kill tens of millions. Not just the people riding in the robocars would be affected, because the malicious software could command idle cars to start moving and hit other cars or run down pedestrians. It would be a catastrophe of grand proportions, greater than a major epidemic or multiple nuclear bombs. That’s no small statement.
There are steps that can be taken to limit this. Software updates should be digitally signed, and they should be signed by multiple independent parties. This stops any one of the official parties from being suborned (either by being a mole, or being tortured, or having a child kidnapped, etc.) to send out an update. But it doesn’t stop the fact that the 5 executives who have to sign an update will still be trusting the programming team to have delivered them a safe update. Assuring that requires a major code review of every new update, by a team that carefully examines all source changes and compiles the source themselves. Right now this just isn’t common practice.
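A sketch of what that check might look like, using Ed25519 signatures from the Python cryptography package; the key names and threshold are illustrative, and the idea is simply that no single compromised signer is enough.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def update_is_authorized(update_blob: bytes,
                         signatures: dict[str, bytes],
                         trusted_keys: dict[str, Ed25519PublicKey],
                         required: int = 3) -> bool:
    """Accept an update only if at least `required` distinct trusted signers
    have signed exactly this blob. Compromising any single signer is then
    not enough to push out malicious code.
    """
    valid_signers = set()
    for signer_id, sig in signatures.items():
        pub = trusted_keys.get(signer_id)
        if pub is None:
            continue                      # unknown signer: ignore
        try:
            pub.verify(sig, update_blob)  # raises InvalidSignature on failure
            valid_signers.add(signer_id)
        except InvalidSignature:
            continue
    return len(valid_signers) >= required
```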
However, it gets worse than this. An attacker can also suborn the development tools, such as the C compilers and linkers which build the final binaries. The source might be clean, but few companies keep perfect security on all their tools. Doing so requires that all the tool vendors have a similar attention to security in all their releases. And on all the tools they use.
One has to ask if this is even possible. Can such a level of security be maintained on all the components, enough to stop a terrorist programmer or a foreign government from inserting a trojan into a tool used by a compiler vendor who then sends certified compilers to the developers of safety-critical software such as robocars? Can every machine on every network at every tool vendor be kept safe from this?
We will try but the answer is probably not. As such, one result may be that automatic updates are a bad idea. If updates spread more slowly, with the individual participation of each machine owner, it gives more time to spot malicious code. It doesn’t mean that malicious code can’t be spread, as individual owners who install updates certainly won’t be checking everything they approve. But it can stop the instantaneous spread, and give a chance to find logic bombs set to go off later.
Normally we don’t want to go overboard worrying about “movie plot” threats like these. But when a single person can kill tens of millions because of a software administration practice, it starts to be worthy of notice.
I just returned from Jeff Pulver’s “140 Characters” conference in L.A. which was about Twitter. I asked many people if they get Twitter — not if they understand how it’s useful, but why it is such a hot item, and whether it deserves to be, with billion dollar valuations and many talking about it as the most important platform.
Some suggested Twitter is not as big as it appears, with a larger churn than expected and some plateau appearing in new users. Others think it is still shooting for the moon.
The first value in twitter I found was as a broadcast SMS. While I would not text all my friends when I go to a restaurant or a club, having a way so that they will easily know that (and might join me) is valuable. Other services have tried to do things like this but Twitter is the one that succeeded in spite of not being aimed at any specific application like this.
This explains the secret of Twitter. By being simple (and forcing brevity) it was able to be universal. By being more universal it could more easily attain critical mass within groups of friends. While an app dedicated to some social or location based application might do it better, it needs to get a critical mass of friends using it to work. Once Twitter got that mass, it had a leg up at being that platform.
At first, people wondered if Twitter’s simplicity (and requirement for brevity) was a bug or a feature. It definitely seems to have worked as a feature. By keeping things short, Twitter makes it less scary to follow people. It’s hard for me to get new subscribers to this blog, because subscribing to the blog means you will see my moderately long posts every day or two, and that’s an investment in reading. To subscribe to somebody’s Twitter feed is no big commitment. Thus people can get a million followers there, when no blog has that. In addition, the brevity makes it a good match for the mobile phone, which is the primary way people use Twitter. (Though usually the smart phone, not the old SMS way.)
And yet it is hard not to be frustrated at Twitter for being so simple. There are so many things people do with Twitter that could be done better by some more specialized or complex tool. Yet it does not happen.
However, Twitter, in its latest mode, is something different. It is “sampled.” In normal serial media, you usually consume all of it. You come in to read and the tool shows you all the new items in the stream. Your goal is to read them all, and the publishers tend to expect it. Most Twitter users now follow far too many people to read it all, so the best they can do is sample — they come in at various times of day and find out what their stalkees are up to right then. Of course, other media have also been sampled, including newspapers and message boards, just because people don’t have time, or because they go away for too long to catch up. On Twitter, however, going away for even a couple of hours will give you too many tweets to catch up on.
This makes Twitter an odd choice as a publishing tool. If I publish on this blog, I expect most of my RSS subscribers will see it, even if they check a week later. If I tweet something, only a small fraction of the followers will see it — only if they happen to read shortly after I write it, and sometimes not even then. Perhaps some who follow only a few will see it later, or those who specifically check on my postings. (You can’t. Mine are protected, which turns out to be a mistake on Twitter but there are nasty privacy results from not being protected.)
TV has an unusual history in this regard. In the early days, there were so few stations that many people watched, at one time or another, all the major shows. As TV grew to many channels, it became a sampled medium. You would channel surf, and stop at things that were interesting, and know that most of the stream was going by. When the Tivo arose, TV became a subscription medium, where you identify the programs you like, and you see only those, with perhaps some suggestions thrown in to sample from.
Online media, however, and social media in particular were not intended to be sampled. Sure, everybody would just skip over the high volume of their mailing lists and news feeds when coming back from a vacation, but this was the exception and not the rule.
The question is, will Twitter’s nature as a sampled medium be a bug or a feature? It seems like a bug but so did the simplicity. It makes it easy to get followers, which the narcissists and the PR flacks love, but many of the tweets get missed (unless they get picked up as a meme and re-tweeted) and nobody loves that.
On Protection: It is typical to tweet not just blog-like items but the personal story of your day. Where you went and when. This is fine as a thing to tell friends in the moment, but with a public twitter feed, it’s being recorded forever by many different players. The ephemeral aspects of your life become permanent. But if you do protect your feed, you can’t do a lot of things on twitter. What you write won’t be seen by others who search for hashtags. You can’t reply to people who don’t follow you. You’re an outsider. The only way to solve this would be to make Twitter really proprietary, blocking all the services that are republishing it, analysing it and indexing it. In this case, dedicated applications make more sense. For example, while location based apps need my location, they don’t need to record it for more than a short period. They can safely erase it, and still provide me a good app. They can only do this if they are proprietary, because if they give my location to other tools it is hard to stop them from recording it, and making it all public. There’s no good answer here.
Saturday saw the dedication of a new autonomous vehicle research center at Stanford, sponsored by Volkswagen. VW provided the hardware for Stanley and Junior, which came 1st and 2nd in the 2nd and 3rd Darpa Grand Challenges, and Junior was on display at the event, driving through the parking lot and along the Stanford streets, then parking itself to a cheering crowd.
Junior continues to be a testing platform with its nice array of sensors and computers, though the driving it did on Saturday was largely done with the Velodyne LIDAR that spins on top of it, and an internal map of the geometry of the streets at Stanford.
New and interesting was a demonstration of the “Valet Parking” mode of a new test vehicle, for now just called Junior 3. What’s interesting about J3 is that it is almost entirely stock. All that is added are two lower-cost LIDAR sensors on the rear fenders. It also has a camera at the rear-view mirror (which is stock in cars with night-assist mode) and a few radar sensors used in the fixed-distance cruise control system. J3 is otherwise a Passat. Well, the trunk is filled with computers, but there is no reason what it does could not be done with a hidden embedded computer.
What it does is valet park itself. This is an earlier than expected implementation of one of the steps I outlined in the roadmap to Robocars as robo-valet parking. J3 relies on the fact that the “valet” lot is empty of everything but cars and pillars. Its sensors are not good enough to deal well with random civilians, so this technology would only work in an enclosed lot where only employees enter the lot if needed. To use it, the driver brings the car to an entrance marked by 4 spots on the ground the car can see. Then the driver leaves and the car takes over. In this case, it has a map of the garage in its computer, but it could also download that on arrival in a parking lot. Using the map, and just the odometer, it is able to cruise the lanes of the parking lot, looking for an empty spot, which it sees using the radar. (Big metal cars of course show clearly on the radar.) It then drives into the spot.
I’ve written a lot about how to do better power connectors for all our devices, and the quest for universal DC and AC power plugs that negotiate the power delivered with a digital protocol.
While I’ve mostly been interested in some way of standardizing power plugs (at least within a given current range, and possibly even beyond) today I was thinking we might want to go further, and make it possible for almost every connector we use to also deliver or receive power.
I came to this realization plugging my laptop into a projector which we generally do with a VGA or DVI cable these days. While there are some rare battery powered ones, almost all projectors are high power devices with plenty of power available. Yet I need to plug my laptop into its own power supply while I am doing the video. Why not allow the projector to send power to me down the video cable? Indeed, why not allow any desktop display to power a laptop plugged into it?
As you may know, a Power-over-ethernet (PoE) standard exists to provide up to 13 watts over an ordinary ethernet connector, and is commonly used to power switches, wireless access points and VoIP phones.
In all the systems I have described, all but the simplest devices would connect, and one or both would provide an initial very-low-current +5 VDC feed, enough to power only the power negotiation chip. The two ends would then negotiate the real power offering — what voltage, how many amps, how many watt-hours are needed or available, and, for special connectors, which wires to send the power on.
An important part of the negotiation would be to understand the needs of devices and their batteries. In many cases, a power source may only offer enough power to run a device but not charge its battery. Many laptops will run on only 10 watts, normally, and less with the screen off, but their power supplies will be much larger in order to deal with the laptop under full load and the charging of a fully discharged battery. A device’s charging system will have to know to not charge the battery at all in low power situations, or to just offer it minimal power for very slow charging. An ethernet cable offering 13 watts might well tell the laptop that it will need to go to its own battery if the CPU goes into high usage mode. A laptop drawing an average of 13 watts (not including battery charging) could run forever with the battery providing for peaks and absorbing valleys.
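To illustrate, here is a rough sketch in Python of how that negotiation might settle on a power budget and decide what to do about the battery. The message fields and the decision logic are invented for illustration; no real standard works exactly this way.

    # Invented message format and decision logic for the negotiation phase.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SourceOffer:
        volts: float                  # voltage the source proposes to supply
        max_watts: float              # most it can deliver continuously
        watt_hours: Optional[float]   # energy budget, or None for mains power

    @dataclass
    class DevicePlan:
        draw_watts: float             # what the device will actually take
        battery_for_peaks: bool       # dip into battery when the load spikes
        charge_battery: bool          # any surplus goes to (slow) charging

    def negotiate(offer: SourceOffer, run_watts: float, peak_watts: float) -> Optional[DevicePlan]:
        """Decide how to use an offer, given average and worst-case draw."""
        if offer.max_watts < run_watts:
            return None               # can't even run; decline and stay on battery
        surplus = offer.max_watts - run_watts
        return DevicePlan(
            draw_watts=offer.max_watts,
            battery_for_peaks=offer.max_watts < peak_watts,
            charge_battery=surplus > 0,
        )

    # A 13 W Power-over-ethernet style offer to a laptop that averages 10 W
    # but peaks at 30 W: run from the wire, cover peaks from the battery,
    # and trickle-charge with the roughly 3 W surplus.
    plan = negotiate(SourceOffer(volts=48, max_watts=13, watt_hours=None),
                     run_watts=10, peak_watts=30)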
Now a VGA or DVI cable, though it has thin wires, has many of them, and at 48 volts could actually deliver plenty of power to a laptop. And thus no need to power the laptop when on a projector or monitor. Indeed, one could imagine a laptop that uses this as its primary power jack, with the power plug having a VGA male and female on it to power the laptop.
I think it is important that these protocols go both directions. There will be times when the situation is reversed, when it would be very nice to be able to power low power displays over the video cable and avoid having to plug them in. With the negotiation system, the components could report when this will work and when it won’t. (If the display can do a low power mode it can display a message about needing more juice.) Tiny portable projectors could also get their power this way if a laptop will offer it.
Of course, this approach can apply everywhere, not just video cables and ethernet cables, though they are prime candidates. USB of course is already power+data, though it has an official master/slave hierarchy and thus does not go both directions. It’s not out of the question to even see a power protocol on headphone cables, RF cables, speaker cables and more. (Though there is an argument that for headphones and microphones there should just be a switch to USB and its cousins.)
Laptops have tried to amalgamate their cables before, through the use of docking stations. The problem was that these stations were all custom to the laptop, and often quite expensive. As a result, many prefer the simple USB docking station, which can provide USB, wired ethernet, keyboard, mouse, and even slowish video through one wire — all standardized and usable with any laptop. However, it doesn’t provide power because of the way USB works. Today our video cables are the highest bandwidth connectors on most devices, and as such they can’t easily be replaced by lower bandwidth ones, so throwing power through them makes sense, and even throwing a USB data bus through them for everything else might well make a lot of sense too. This would bring us back to having just a single connector to plug in. (It creates a security problem, however, as you should not let a randomly plugged-in device act as an input device such as a keyboard or drive, since such a device could take over your computer if somebody has hacked it to do so.)
Some time ago I proposed the “School of Fish Test” as a sort of Turing test for robocars. In addition to being a test for the cars, it is also intended to be a way to demonstrate to the public when the vehicles have reached a certain level of safety. (In the test, a swarm of robocars moves on a track, and a skeptic in a sports car is unable to hit one no matter what they do, like a diver trying to touch fish while swimming through a school.)
I was interested to read this month that Nissan has built test cars based on fish-derived algorithms as part of a series of experiments based on observing how animals swarm. (I presume this is coincidental, and the Nissan team did not know of my proposed test.)
The Nissan work (building on earlier work on bees) is based upon a swarm of robots which cooperate. The biggest test involves combining cooperating robots, non-cooperating robots and (mostly non-cooperating) human drivers, cyclists and pedestrians. Since the first robocars on the road will be alone, it is necessary to develop fully safe systems that do not depend on any cooperation with other cars. It can of course be useful to communicate with other cars, determine how much you trust them, and then cooperate with them, but this is something that can only be exploited later rather than sooner. In particular, while many people propose to me that building convoys of cars which draft one another is a good initial application of robotics (and indeed you can already get cars with cruise control that follows at a fixed distance) the problem is not just one of critical mass. A safety failure among cooperating cars runs the risk of causing a multi-car collision, with possible multiple injured parties, and this is a risk that should not be taken in early deployments of the technology.
My talk at the Singularity Summit on robocars was quite well received. Many were glad to see a talk on more near-future modest AI after a number of talks on full human level AI, while others wanted only the latter. A few questions raised some interesting issues:
One person asked about the insurance and car repair industries. I got a big laugh by saying, “fuck ‘em.” While I am not actually that mean spirited about it, and I understand why some would react negatively to trends which will obsolete their industries, we can’t really be that backwards-looking.
Another wondered whether, after children discover that the nice cars will never hit them, they will then travel to less safe roads without having learned proper safety instincts. This is a valid point, though I have already worried about the disruption to passengers whose cars have to swerve around kids who play in the streets once it is not so clearly dangerous. Certain types of jaywalking that interfere with traffic will need to be discouraged or punished, though safe jaywalking, when no car is near, should be allowed and even encouraged.
One woman asked if we might become disassociated with our environments if we spend our time in cars reading or chatting, never looking out. This is already true in a taxicab city like New York, though only limos offer face-to-face chat. I think the ability to read or work instead of focus on the road is mostly a feature and not a bug, but she does have a point. Still, we get even more divorced from the environment on things like subways.
As expected, the New York audience, unlike other U.S. audiences, saw no problem with giving up driving. Everywhere else I go, people swear that Americans love their cars and love driving and will never give it up. While some do feel that way, it’s obviously not a permanent condition.
Some other (non-transportation) observations from Singularity Summit are forthcoming.
BTW, I will be giving a Robocar talk next Wednesday, Oct 28 at Stanford University for the ME302 - Future of the Automobile class. (This is open to the general Stanford community, affiliates of their CARS institute, and a small number of the public. You can email email@example.com if you would like to go.)
I’m impressed with a great interactive map of the U.S. power grid produced by NPR. It lets you see the location of existing and proposed grid lines, and all power plants, plus the distribution of power generation in each state.
On this chart you can see which states use coal most heavily — West Virginia at 98%, Utah, Wyoming, North Dakota, Indiana at 95% and New Mexico at 85%. You can see that California uses very little coal but 47% natural gas, that the NW uses mostly Hydro from places like Grand Coulee and much more. I recommend clicking on the link.
They also have charts of where solar and other renewable plants are (almost nowhere) and the solar radiation values.
Seeing it all together makes something clear that I wrote about earlier. If you want to put up solar panels, the best thing to do is to put them somewhere with good sun and lots of coal burning power plants. That’s places like New Mexico and Utah. Putting up a solar panel in California will give it pretty good sunlight — but will only offset natural gas. A solar panel in the midwest will offset coal but won’t get as much sun. In the Northeast it gets even less sun and offsets less coal.
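Some rough numbers make the point. The emission factors below are typical figures for coal and gas plants, and the annual yields per kilowatt of panel are only illustrative guesses for each region:

    # Back-of-envelope: CO2 avoided per year by 1 kW of panel, depending on
    # how sunny the site is and what fuel the panel displaces. All numbers
    # are approximate and for illustration only.
    kg_co2_per_kwh = {"coal": 1.0, "natural gas": 0.45}

    sites = {
        # region: (annual kWh per kW of panel, fuel mostly displaced)
        "New Mexico": (1800, "coal"),
        "California": (1600, "natural gas"),
        "Midwest":    (1300, "coal"),
        "Northeast":  (1200, "coal"),   # less sun, and less coal on the margin
    }

    for region, (kwh, fuel) in sites.items():
        avoided = kwh * kg_co2_per_kwh[fuel]
        print(f"{region}: about {avoided:,.0f} kg of CO2 avoided per kW-year")
    # A sunny coal state beats sunny California by roughly 2.5x on this measure.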
Much better than putting up solar panels anywhere, however, is actually using the money to encourage real conservation in the coal-heavy areas like West Virginia, Wyoming, North Dakota or Indiana.
While, as I have written, solar panels are a terrible means of greening the power grid from a cost standpoint, people still want to put them up. If that’s going to happen, what would be great would be a way for those with money and a desire to green the grid to make that money work in the places where it will do the best. This is a difficult challenge. People, sadly, are more interested in feeling they are doing the right thing than in actually doing it, and they feel good when they see solar panels on their roof and see their meter going backwards. It makes up for the pain of the giant cheque they wrote, without ever actually recovering the money. Writing that cheque so somebody else’s meter can go backwards (even if you get the savings) just isn’t satisfying to people.
It would make even more sense to put solar-thermal plants (at least at today’s prices,) wind or geothermal in these coal-heavy areas.
It might be interesting to propose a system where rich greens can pay to put solar panels on the roofs of houses where it will do the most good. The homeowner would still pay for power, but at a lower price than they paid before. This money would mostly go to the person who financed the solar panels. The system would include an internet-connected control computer, so the person doing the financing could still watch the meter go backwards, at least virtually, and track power generated and income earned. The only problem is, the return would be sucky, so it’s hard to make this satisfying. To help, the display would also show tons of coal that were not burned, and compare it to what would have happened if they had put the panels on their own roof.
Of course, another counter to this is that California and a few other places have very high tiered electrical rates which may not exist in the coal states. Because of that — essentially a financial incentive set up by the regulators to encourage power conservation — it may be much more cost-effective to have the panels in low-coal California than in high-coal areas, even if it’s not the greenest thing.
An even better plan would be to find a way for “rich greens” (people willing to spend some money to encourage clean power) to finance conservation in coal-heavy areas. To do this, the cooperation of the power companies would be required. For example, one of the best ways to do this would be to replace old fridges with new ones. (Replacing fridges costs $100 per MWH removed from the grid compared to $250 for solar panels.)
The rich green would provide money to help buy the new fridge.
An inspector comes to see the old fridge and confirms it is really in use as the main fridge. Old receipts may be demanded though these may be rare. A device is connected to assure it is not unplugged, other than in a local power failure.
A few months later — to also assure the old fridge was really the one in use — the new fridge would be delivered by a truck that hauls the old one away. Inspectors confirm things and the buyer gets a rebate on their new fridge thanks to the rich green.
The new, energy-efficient fridge has built in power monitoring and wireless internet. It reports power usage to the power company.
The new fridge owner pays the power company 80% of what they used to pay for power for the old fridge. Ie. they pay more than their actual new power usage.
The excess money goes to the rich green who funded the rebate on the fridge, until the rebate plus a decent rate of return is paid back.
The person with the old fridge gets a nice new fridge at a discount price — possibly even close to free. Their power bill on the fridge goes down 20%. The rest of the savings (about 30% of the old fridge’s power, typically) goes to the power company and then to the person who financed the rebate.
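Here is a worked example of that split, in Python, using made-up but plausible numbers for the fridges, the power price and the rebate; only the 80%/20%/30% split comes from the plan above.

    # Illustrative numbers; only the payment split mirrors the plan above.
    old_fridge_kwh = 1400      # per year, a typical old unit (assumption)
    new_fridge_kwh = 700       # the efficient replacement (assumption)
    price_per_kwh = 0.12       # dollars (assumption)
    rebate = 400               # what the rich green put toward the fridge (assumption)

    old_bill = old_fridge_kwh * price_per_kwh        # what the owner used to pay: $168
    owner_pays = 0.80 * old_bill                     # the new, 20%-lower bill: about $134
    actual_cost = new_fridge_kwh * price_per_kwh     # true cost of the new fridge's power: $84
    to_financier = owner_pays - actual_cost          # the excess, about 30% of the old bill: about $50

    years_to_payback = rebate / to_financier         # roughly 8 years before any return
    print(f"Financier recovers the ${rebate} rebate in about {years_to_payback:.0f} years")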
A number of the steps above are there to minimize fraud. For example, you don’t want people deliberately digging out an ancient fridge and putting it in place to get a false rebate. You also don’t want them taking the old fridge and moving it into the garage as a spare, which would actually make things worse. The latter is easy to assure by having the delivery company haul away the old one. The former is a bit tricky. The above plan at least demands that the old fridge be in place in their kitchen for a couple of months, and there be no obvious signs that it was just put in place. The metering plan demands wireless internet in the home, and the ability to configure the appliance to use it. That’s getting easier to demand, even of poor people with old fridges. Unless the program is wildly popular, this requirement would not be hard to meet.
Instead of wireless internet, the fridge could also just communicate the usage figures to a device the meter-reader carries when she visits the home to read the regular meter. Usage figures for the old fridge would be based on numbers for the model, not the individual unit.
It’s a bit harder to apply this to light bulbs, which are the biggest conservation win. Yes, you could send out crews to replace incandescent bulbs with CFLs, but it is not cost effective to meter them and know how much power they actually saved. For CFLs, the program would have to be simpler with no money going back to the person funding the rebates.
All of this depends on a program which is popular enough to make the power monitoring chips and systems in enough quantity that they don’t add much to the cost of the fridge at all.
Today’s systems and sensors are of course fairly simple, but they will improve over time, and we will learn a lot from them. This matches my prediction for how a robocar test suite will be developed: by gathering millions and later billions of miles of sample data, including all accidents and anomalous events, with better and better sensors as time goes on.
Initial reaction to these systems (which will have early flaws) may colour user opinion of them. For example, some adaptive cruise controls reportedly are too eager to decide there is a stopped car ahead and will suddenly brake the vehicle. One of the challenges of automatic vehicle design will be finding ways to keep it safe without it being too conservative, because real drivers are not very conservative. (They are also not very safe, but this defines the standards people expect.)
Just back from a weeklong tour including speaking at Singularity Summit, teaching classes at Cushing Academy, a big Thanksgiving dinner (well, Thanksgiving is actually today, but we had it earlier) and a drive through fabulous fall colour in Muskoka.
This time United Airlines managed to misplace my luggage in both directions. (A reminder of why I don’t like to check luggage.) The first time they had an “excuse” in that we checked it only about 10 minutes before the baggage check deadline and the TSA took extra time on it. On the way back it missed a 1 hour, 30 minute connection — no excuse for that.
However, again, my rule for judging companies is how they handle their mistakes as well as how often they make them. And, in JFK, when we went to baggage claim, they actually had somebody call our name and tell us the bag was not on the flight, so we went directly to file the missing luggage report. However, on the return flight, connecting in Denver to San Jose, we got the more “normal” experience — wait a long time at the baggage claim until you realize no more bags are coming and you’re among the last people waiting, and then go file a lost luggage report.
This made me realize — with modern bag tracking systems, the airline knows your bag is not on the plane at the time they close the cargo hold door, well before takeoff. They need to know that as this is part of the passenger-to-bag matching system they tried to build after the Pan Am 103 Lockerbie bombing. So the following things should be done:
If they know my mobile number (and they do, because they text me delays and gate changes) they should text me that my luggage did not make the plane.
The text should contain a URL where I can fill out my lost luggage report or track where my luggage actually is.
Failing this, they should have a screen at the gate when you arrive with messages for passengers, including lost luggage reports. Or just have the gate agent print it and put it on the board if a screen costs too much.
Failing this, they should have a screen at the baggage claim with notes for passengers about lost luggage so you don’t sit and wait.
Failing this, an employee can go to the baggage claim and page the names of passengers, which is what they did in JFK.
Like some airlines do, they should put a box with “Last Bag, Flight nnn” written on it on the luggage conveyor belt when the last bag has gone through, so people know not to wait in vain.
I might very well learn my luggage is not on board before the plane closes the door. In that case I might even elect not to take the flight, though I can see that the airline might not want people to do this, as they are usually about to close the door, if they have not already closed it.
Letting me fill out the form on the web saves the airline time and saves me time. I can probably do it right on the plane after it lands and cell phone use is allowed. I don’t even have to go to baggage claim. Make it mobile browser friendly of course.
I have several sheetfed scanners. They are great in many ways — though not nearly as automatic as they could be — but they are expensive and have their limitations when it comes to real-world documents, which are often not in pristine shape.
I still believe in sheetfed scanners for the home, in fact one of my first blog posts here was about the paperless home, and some products are now on the market similar to this design, though none have the concept I really wanted — a battery powered scanner which simply scans to flash cards, and you take the flash card to a computer later for processing.
My multi-page document scanners will do a whole document, but they sometimes mis-feed. My single-page sheetfed scanner isn’t as fast or fancy but it’s still faster than using a flatbed because the act of putting the paper in the scanner is the act of scanning. There is no “open the top, remove old document, put in new one, lower top, push scan button” process.
Here’s a design that might be cheap and just what a house needs to get rid of its documents. It begins with a table with an arm coming out from one side, holding a tripod screw for a digital camera. Also running up the arm is a USB cable to the camera. Also on the arm, at enough of an angle to avoid glare and reflections, are lights, either white LED or CCFL tubes.
In the bed of the table is a capacitive sensor able to tell if your hand is near the table, as well as a simple photosensor to tell if there is a document on the table. All of this plugs into a laptop for control.
You slap a document on the table. As soon as you draw your hand away, the light flashes and the camera takes a picture. Then you replace or flip the document and it happens again. There is no need to push a button: the removal of your hand with a document in place causes the photo. A button will be present to say “take it again” or “erase that” but you should not need to push it much. The light should be bright enough so the camera can shoot fairly stopped down, allowing a sharp image with good depth of field. The light might be on all the time in the single-sided version.
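A sketch of that control loop, in Python, might look like the following. The sensor and camera calls (hand_near, document_present, flash, trigger_camera) are hypothetical wrappers for the hardware described above, not any particular product’s API.

    import time

    # Hypothetical control loop: shoot when a document is present and the
    # hand has been withdrawn, then wait for the next page.
    def capture_loop(hand_near, document_present, flash, trigger_camera,
                     settle_time=0.3):
        while True:
            if document_present() and not hand_near():
                time.sleep(settle_time)              # let the page settle
                if hand_near() or not document_present():
                    continue                         # false alarm, keep waiting
                flash(True)
                trigger_camera()                     # stopped-down shot over USB
                flash(False)
                while not hand_near():               # wait for the next page change
                    time.sleep(0.05)
            time.sleep(0.05)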
The camera can’t be just any camera, alas, but many older cameras in the 6MP range would get about 300dpi colour from a typical letter sized page, which is quite fine. Key is that the camera has a macro mode (or can otherwise focus close) and can be made to shoot over USB. An infrared LED could also be used to trigger many consumer cameras. Another plus is manual focus. It would be nice if the camera could just be locked in focus at the right distance, as that means much faster shooting for typical consumer digital cameras. And ideally all of this (macro mode, manual focus) can be set over USB and thus be done under the control of the computer.
Of course, 3-D objects can also be shot in this way, though they might get glare from the lights if they have surfaces at the wrong angles. A fancier box would put the lights behind cloth diffusers, making things bulkier, though it can all pack down pretty small. In fact, since the arm can be designed to be easily removed, the whole thing can pack down into a very small box. A sheet of plexi would be available to flatten crumpled papers, though with good depth of field, this might not strictly be necessary.
One nice option might be a table filled with holes and a small suction pump. This would hold paper flat to the table. It would also make it easy to determine when paper is on the table. It would not help stacks of paper much but could be turned off, of course.
A fancier and bulkier version would have legs and support a 2nd camera below the table, which would now be a transparent piece of plexiglass. Double sided shots could then be taken, though in this case the lights would have to be turned off on the other side when shooting, and a darkened room or a shade around the bottom and part of the top would be a good idea, to avoid bleed through the page. Suction might not be such a good idea here. The software should figure out whether the other side is blank and discard or highly compress that image. Of course the software must also crop images to size, and straighten rectangular items.
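For the post-processing, a minimal sketch using OpenCV (one plausible toolkit; the thresholds are guesses to be tuned, not anything from a real product) could detect blank reverse sides and straighten pages like this:

    import cv2
    import numpy as np

    def is_blank(image_bgr, dark_delta=60, ink_threshold=0.005):
        """Treat a side as blank if almost no pixels are much darker than the paper."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        paper = np.median(gray)                        # typical paper brightness
        ink_fraction = np.mean(gray < paper - dark_delta)
        return ink_fraction < ink_threshold

    def straighten(image_bgr):
        """Find the page as the largest bright region and rotate it upright."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4 signature
        page = max(contours, key=cv2.contourArea)
        (cx, cy), _, angle = cv2.minAreaRect(page)
        if angle < -45:                # normalize either OpenCV angle convention
            angle += 90
        elif angle > 45:
            angle -= 90
        rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
        h, w = image_bgr.shape[:2]
        return cv2.warpAffine(image_bgr, rot, (w, h))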
There are other options besides the capacitive hand sensor. These include a button, of course, a simple voice command detector, and clever use of the preview video mode that many digital cameras now have over USB. (ie. the computer can look through the camera and see when the document is in place and the hand is removed.) This approach would also allow gesture commands, little hand signals to indicate if the document is single sided, or B&W, or needs other special treatment.
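Using the live preview instead of a capacitive sensor could be as simple as watching for the image to stop changing. A rough sketch follows; grab_preview is a hypothetical wrapper for the camera’s USB live view, and the motion threshold is a guess to tune.

    import cv2
    import numpy as np

    def wait_for_still_scene(grab_preview, still_frames_needed=10, motion_limit=2.0):
        """Return once the preview stops changing: hand gone, page settled."""
        prev = cv2.cvtColor(grab_preview(), cv2.COLOR_BGR2GRAY)
        still = 0
        while still < still_frames_needed:
            frame = cv2.cvtColor(grab_preview(), cv2.COLOR_BGR2GRAY)
            motion = np.mean(cv2.absdiff(frame, prev))   # average pixel change
            still = still + 1 if motion < motion_limit else 0
            prev = frame
        # scene is static: safe to flash and take the full-resolution shot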
The goal, however, is a table where you can just slap pages down, move your hand away slightly and then slap down another. For stacks of documents one could even put down the whole stack and take pages off one at a time, though this would surely bump the stack a bit, requiring some cleverness in straightening and cropping. Many people would find they could do this as fast as some of the faster professional document scanners, and with no errors on imperfect pages. The scans would not be as good as true scanner output, but good enough for many purposes.
In fact, digital camera photography’s speed (and ability to handle 3-D objects) led both Google Books and the Internet Archive to use it for their book scanning projects. This was of course primarily because they were unwilling to destroy books. Google came up with the idea of using a laser rangefinder to map the shape of the curved book page to correct any distortions in it. While this could be done here it is probably overkill.
One nice bonus here is that it’s very easy to design this to handle large documents, and even to be adjustable to handle both small and large documents. Normally scanners wide enough for large items are very expensive.
A serious proportion of the computer users I know these days have gone multi-monitor. While I strongly recommend to everybody the 30” monitor (Dell 3007WFP and cousins, or Apple’s) which I use, at $1000 it’s not the most cost effective way to get a lot of screen real estate. Today 24” 1080p monitors are down to $200, and flat panels don’t take up so much space, so it makes a lot of sense to have two monitors or more.
Except there’s a big gap between them. And while there are a few monitors that advertise a thin bezel, even these have at least half an inch, so two monitors together will still have an inch of (usually black) border between them.
I’m quite interested in building a panoramic photo wall with this new generation of cheap panels, but the 1” bars will be annoying, though tolerable from a distance. But does it have to be that way?
There are 1/4” bezel monitors made for the video wall industry, but it’s all very high end, and in fact it’s hard to find these monitors for sale on the regular market from what I have seen. If they are, they no doubt cost 2-3x as much, being “specialty” market monitors. I really think it’s time to push multi-monitor as more than a specialty market.
I accept that you need to have something strong supporting and protecting the edge of your delicate LCD panel. But we all know from laptops it doesn’t have to be that wide. So what might we see?
Design the edges of the monitor to interlock, and have the supporting substrate further back on the left and further forward on the right. Thus let the two panels get closer together. Alternately let one monitor go behind the other and try to keep the distance to a minimum.
Design monitors that can be connected together by removing the bezel and protection/mounting hardware and carefully inserting a joiner unit which protects the edges of both panels but gets them as close together as it can, and firmly joins the two backs for strength. May not work as well for 2x2 grids without special joiners.
Just sell a monitor that has 2, 3 or 4 panels in it, mounted as close as possible. I think people would buy these, allowing them to be priced even better than two monitors. Offer rows of 1, 2 or 3 and a 2x2 grid. I will admit that a row of 4, which is what I want, is not likely to be as big a market.
Sell components to let VARs easily build such multi-panel monitors.
When it comes to multi-panel, I don’t know how close you could get the panels, but I suspect it could be quite close. So what do you put in the gap? Well, it could be a black strip or a neutral strip. It could also be a translucent one that deliberately covers one or two pixels on each side, and thus shines and blends their colours. It might be interesting to see how much you could reduce the visual effect of the gap. The eye has no problem looking through grid windows at a scene and not seeing the bars, so it may be that bars remain the right answer.
It might even be possible to cover the gap with a small thin LCD display strip. Such a strip, designed to have a very sharp edge, would probably go slightly in front of the panels, and appear as a bump in the screen — but a bump with pixels. From a distance this might look like a video wall with very obscured seams.
For big video walls, projection is still a popular choice, other than the fact that such walls must be very deep. With projection, you barely need the bezel at all, and in fact you can overlap projectors and use special software to blend them for a completely seamless display. However, projectors need expensive bulbs that burn out fairly quickly in constant use, so they have a number of downsides. LCD panel walls have enough upsides that people would tolerate the gaps if they can be made small using techniques above.
Anybody know how the Barco wall at the Comcast Center is done? Even in the video from people’s camcorders, it looks very impressive.
If you see LCD panels larger than 24” with thin bezels (3/8 inch or less) at a good price (under $250) and with a good quality panel (doesn’t change colour as you move your head up and down) let me know. The Samsung 2443 looked good until I learned that it, and many others in this size, have serious view angle problems.