Submitted by brad on Fri, 2008-05-09 16:11.
I learned today that there is an exhibit about my father in the famous creation museum near Cincinnati. This museum is a multi-million dollar project set up by creationists as a pro-bible “natural history” museum that shows dinosaurs on Noah’s Ark, how the flood carved the Grand Canyon and much more. It’s all complete bollocks, and a number of satirical articles have been written about it, including an account by SF writer John Scalzi.
While almost all of this museum is devoted to desperate attempts to make the creation story sound like natural history, it also has the “Biblical Authority Room.” This room features my father, Charles Templeton, in two sections. It begins with a display on bible enemies which tells the story of how he went to Princeton seminary and lost his faith. (Warning: Too much education will kill your religion.)
However, around the corner is an amazing giant alcove. It shows a large mural of photos and news stories about my father as a preacher and later. On the next wall is an image of a man (clearly meant to be him though the museum denied it) digging a grave with the tombstone “God is Dead.” There are various other tombstones around for “Truth,” “God’s Word” and “Genesis.” There is also another image of the mural showing it a bit more fully.
Next to the painting is a small brick alcove which for the life of me looks like a shrine.
In it is a copy of his book Farewell to God along with a metal plaque with a quote from the book about how reality is inconsistent with the creation story. (You can click on the photo, courtesy Andrew Arensburger, to see a larger size and read the inscription.)
I had heard about this museum for some time, and even contemplated visiting it the next time I was in the area, though part of me doesn’t want to give them $20. Now, however, I have to go. But I remain perplexed that he gets such a large exhibit, along with the likes of Darwin, Scopes and Luther.
Today, after all, only older people know of his religious career, though at his peak he was one of the best-known figures in the field. He and his best friend, Billy Graham, were taking the evangelism world by storm, and until he pulled out, many people would have bet that he, rather than Graham, would become the great star. You can read his memoir here online.
But again, this is all long ago, and a career long left behind. There may be an explanation, though, based on what he told me when he was alive.
Among many fundamentalists, there is a doctrine of “Once Saved, Always Saved.” What this means is that once Jesus has entered you and become your personal saviour, he will never, ever desert you. It is impossible for somebody who was saved to fall. This makes apostasy a dreadful sin, for it creates a giant contradiction. For many, the only way to reconcile this is to decide that he never was truly saved after all. That it was all fake. Only somebody who never really believed could fall.
Except that’s not the case here. He had the classic “religious experience” conversion, as detailed in his memoir. He was fully taken up with it. And more to the point, unlike most, when much later he truly came to have doubts, he debated them openly with his friends, like Graham. And finally decided that he couldn’t preach any more after decades of doing so, giving up fame and a successful career with no new prospects. He couldn’t do it because he could not feel honest preaching to people when he had become less sure himself. Not the act of somebody who was faking it all along.
However, this exhibit in the museum doesn’t try to paint it that way. Rather, it seems to be a warning that too much education by godless scientists can hurt your faith.
So there may be a second explanation. As a big-time preacher, with revival meetings filling sporting arenas, my father converted a lot of people to Christianity. He was one of the founders of Youth for Christ International, which is today still a major religious organization. I meet these converts from time to time. I can see how, if you came to your conversion through him, my father’s renunciation of it must be very hurtful — especially when combined with the once-saved-always-saved doctrine. So I have to wonder if somebody at the Creation Museum isn’t one of his converts, and thus wanted to tell the story of a man that many of the visitors to the museum will have forgotten.
Here are some other Charles Templeton links on my site:
Right now I’m in the process of scanning some of his books and will post when I have done this.
Submitted by brad on Fri, 2008-05-09 00:14.
I’m scanning my documents on an ADF (automatic document feeder) scanner now, and it’s largely impressive, but I’m surprised at some of the things the system won’t do.
Double page feeding is the bane of document scanning. To prevent it, many scanners offer methods of double feed detection, including ultrasonic detection of double thickness and detection when one page is suddenly longer than all the others (because it’s really two.)
There are a number of other tricks they could do, I think. A paper feeder that used air suction, or gecko-foot van der Waals force pluckers, on both sides of a page to pull the sides in two different directions could help not just detect such feeds, but eliminate them.
However, the most the double feed detectors do is signal an exception to stop the scan, which means re-feeding work and a need to stand by the machine.
But many documents have page numbers, and we’re going to OCR them, and the OCR engine is pretty good at detecting page numbers (mostly out of a desire to remove them). So it seems to me a good approach would be to look for gaps in the page numbers, especially combined with the other signals of a double feed. Then don’t stop the scan, just keep going, and report to the operator which pages need to be scanned again. Those would be scanned, their numbers extracted, and they would be inserted in the right place in the final document.
Of course, it’s not perfect. Sometimes page numbers are not put on blank pages, and some documents number only within chapters. So you might not catch everything, but you could catch a lot of stuff. Operators could quickly discern the page numbering scheme (though I think the OCR could do this too) to guide the effort.
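A minimal sketch of the gap-detection idea (the function name is mine, not any OCR package’s API): given the page numbers the OCR engine recovered from each sheet, find the numbers missing from the run — the likely victims of a double feed.

```python
def find_missing_pages(ocr_page_numbers):
    """Given page numbers recovered by OCR from each scanned sheet
    (None for sheets where no number was found, e.g. blanks),
    return the page numbers absent from the expected run."""
    seen = {n for n in ocr_page_numbers if n is not None}
    if not seen:
        return []
    expected = range(min(seen), max(seen) + 1)
    return [n for n in expected if n not in seen]

# A double feed that swallowed pages 4 and 5:
print(find_missing_pages([1, 2, 3, None, 6, 7]))  # [4, 5]
```

As noted above, blank or chapter-numbered pages defeat a simple run check like this, so it would flag candidates for the operator rather than act on its own.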
I’m seeking a maximum convenience workflow. I think to do that the best plan is to have several scanners going, and the OCR after the fact in the background. That way there’s always something for the operator to do — fixing bad feeds, loading new documents, naming them — for maximum throughput. Though I also would hope the OCR software could do better at naming the documents for you, or at least suggesting names. Perhaps it can, the manual for Omnipage is pretty sparse.
While some higher end scanners do have the scanner figure out the size of the page (at least the length) I am not sure why it isn’t a trivial feature for all ADF scanners to do this. My $100 Strobe sheetfed scanner does it. That my $6,000 (retail) FI-5650 needs extra software seems odd to me.
Submitted by brad on Mon, 2008-05-05 20:08.
I’ve been ranting of late about the dangers inherent in “Data Portability” which I would like to rename as BEPSI to avoid the motherhood word “portability” for something that really has a strong dark side as well as its light side.
But it’s also important to come up with an alternative. I think the best alternative may lie in what I would call a “data deposit box” (formerly “data hosting.”) It’s a layered system, with a data layer and an application layer on top. Instead of copying the data to the applications, bring the applications to the data.
A data deposit box approach has your personal data stored on a server chosen by you. That server’s duty is not to exploit your data, but rather to protect it. That’s what you’re paying for. Legally, you “own” it, either directly, or in the same sense as you have legal rights when renting an apartment — or a safety deposit box.
Your data box’s job is to perform actions on your data. Rather than giving copies of your data out to a thousand companies (the Facebook and Data Portability approach) you host the data and perform actions on it, programmed by those companies who are developing useful social applications.
As such, you don’t join a site like Facebook or LinkedIn. Rather, companies like those build applications and application containers which can run on your data. They don’t get the data, rather they write code that works with the data and runs in a protected sandbox on your data host — and then displays the results directly to you.
To take a simple example, imagine a social application wishes to send a message to all your friends who live within 100 miles of you. Using permission tokens provided by you, it is able to connect to your data host and ask it to create that subset of your friend network, and then e-mail a message to that subset. It never sees the friend network at all.
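That example can be sketched in code (a toy model; the class, the token check and the data layout are all invented for illustration): the application hands the box a query and a permission token, the box does the filtering and mailing internally, and only an aggregate result ever leaves.

```python
import math

def distance_miles(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 3959 * 2 * math.asin(math.sqrt(h))

class DataDepositBox:
    """The user's data stays inside the box; applications get results, not data."""

    def __init__(self, owner_location, friends):
        self._location = owner_location
        self._friends = friends  # e.g. [{"email": ..., "location": (lat, lon)}, ...]

    def message_friends_within(self, miles, body, token):
        """Mail `body` to each friend within `miles`, reporting only a count."""
        if token != "app-granted-token":  # stand-in for a real permission check
            raise PermissionError("application lacks a valid token")
        nearby = [f for f in self._friends
                  if distance_miles(self._location, f["location"]) <= miles]
        for friend in nearby:
            pass  # here the box would hand friend["email"] and body to its mailer
        return len(nearby)  # the application never sees who was mailed

box = DataDepositBox((37.77, -122.42),  # owner in San Francisco
                     [{"email": "a@example.com", "location": (37.80, -122.27)},  # Oakland
                      {"email": "b@example.com", "location": (40.71, -74.01)}])  # New York
print(box.message_friends_within(100, "Dinner Saturday?", "app-granted-token"))  # 1
```

The key design point is the return value: a count, not a list. The friend network never crosses the trust boundary.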
Submitted by brad on Fri, 2008-04-25 14:00.
I’ve spoken about the Web 2.0 movement that is now calling itself “data portability.” Now there are web sites and format specifications, and plans are underway to make it possible to quickly export the personal data you put on one social networking site to another. While that sounds like a good thing — we like interoperability, cooperation, and low barriers to entry for new players — I sometimes seem like a lone voice warning about some of the negative consequences.
I know I’m not going to actually stop the data portability movement, and nor is that really my goal. But I do have a challenge for it: Switch to a slightly negative name. Data portability sounds like motherhood, and this is definitely not a motherhood issue. Deliberately choosing a name that includes the negative connotations would make people stop and think as they implement such systems. It would remind them, every step of the way, to consider the privacy implications. It would cause people asking about the systems to query what they have done about the downsides.
And that’s good, because otherwise it’s easy to put on a pure engineering mindset and say, “what’s the easiest way we can build the tools to make this happen?” rather than “what’s a slightly harder way that mitigates some of the downsides?”
A name I dreamed up is BEPSI, standing for Bulk Export of Personal and Sensitive Information. This is just as descriptive, but reminds you that you’re playing with information that has consequences. Other possible names include EBEPSI (Easy Bulk Export…) or OBEPSI (One-click Bulk Export…) which sounds even scarier.
It’s rare for people to do something so balanced, though. Nobody likes to be reminded there could be problems with what they’re doing. They want a name that sounds happy and good, so they can feel happy and good. And I know the creator of dataportability.org thinks he’s got a perfectly good name already so there will be opposition. But a name like this, or another similar one, would be the right thing to do. Remind people of the paradoxes with every step they take.
Submitted by brad on Thu, 2008-03-13 16:47.
Earlier I wrote an essay on the paradox of identity management describing some counter-intuitive perils that arise from modern efforts at federated identity. Now it’s time to expand these ideas to efforts for portable personal data, especially portable social networks.
Partly as a reaction to Facebook’s popular applications platform, other social networking players are seeking ways to work together to stop Facebook from taking the entire pie. The Google-led OpenSocial effort is the leading contender, but there are a variety of related technologies, including OpenID, hCard and other microformats. The primary goal is to make it easy, as users move from one system to another or run sub-applications on one platform, to provide all sorts of data, including the map of their social network, to the other systems.
Some are also working on a better version of this goal, which is to allow platforms to interoperate. As I wrote a year ago interoperation seems the right long term goal, but a giant privacy challenge emerges. We may not get very many chances to get this right. We may only get one.
The paradox I identified goes against how most developers think. When it comes to greasing the skids of data flow, “features” such as portability, ease of use and user control may not be entirely positive, and may in fact be on the whole negative. The easier it is for data to flow around, the more it will flow around, and the more that sites will ask, and then demand, that it flow. There is a big difference between portability between applications — such as OpenOffice and MS Word reading and writing the same files — and portability between sites. Many are very worried about the risks of our handing so much personal data to single third-party sites like Facebook. And then Facebook made it super easy — in fact mandatory with the “install” of any application — to hand over all that data to hundreds of thousands of independent application developers. Now work is underway to make it super easy to hand over this data to every site that dares to ask or demand it.
Submitted by brad on Thu, 2008-03-06 00:46.
Earlier on, I identified robot delivery vehicles as one of the steps on the roadmap to robot cars. In fact, these are officially what the DARPA grand challenges really seek, since the military wants robots that can move things through danger zones without putting soldiers at risk.
Deliverbots may well be allowed on the road before fully automated robotaxis for humans because there are fewer safety issues. Deliverbots can go more slowly, as most cargo is not super-urgent. If they go slower, and have a low weight limit, it may be the case that they can’t cause much harm if they go astray. Obviously if a deliverbot crashes into an inanimate object, it just costs money and doesn’t injure people. The deliverbot might be programmed to be extra-cautious and slow around anything like a person. As such, it might be allowed on the road sooner.
I gave a talk on robot cars at the BIL conference, and an attendee came up to suggest that deliverbots could enable a new type of equipment rental. Because they can bring you rental equipment quickly, cheaply and with no hassle, they make renting vastly more efficient and convenient. People will end up renting things they would never consider renting today. Nowadays you only rent things you really need which are too expensive or bulky to own.
By the way, the new NPR morning show the “Bryant Park Project” decided to interview a pair of speakers, one from TED and one from BIL, so I talked about my robot cars talk. You can listen to the segment or follow links to hear the whole show.
It was suggested even something as simple as a vacuum cleaner could become a rental item. Instead of buying a $200 vacuum to clean your floors once a week, you might well rent a super-high quality $2,000 unit which comes to you with short notice via deliverbot. This would also be how you might treat all sorts of specialized, bulky or expensive tools. Few will keep their own lathe, band saw or laser engraver, but if you can get one in 10 minutes, you would never need to.
(Here in Silicon Valley, an outfit called TechShop offers a shop filled with all the tools and toys builders like, for a membership fee and materials cost. It’s great for those who are close to it or want to trek there, but this could be better.) This in turn would also let us make better use of the space in our homes, not storing things we don’t really need to have.
Submitted by brad on Thu, 2008-02-28 00:41.
Yesterday I wrote about predictive suspension, to look ahead for bumps on the road and ready the suspension to compensate. There should be more we can learn by looking at the surface of the road ahead, or perhaps touching it, or perhaps getting telemetry from other cars.
It would be worthwhile to be able to estimate just how much traction there is on the road surfaces the tires will shortly be moving over. Traction can be estimated from the roughness of dry surfaces, but is most interesting for wet and frozen surfaces. It seems likely that remote sensing can tell the temperature of a surface, and whether it is wet or not. Wet ice is more slippery than colder ice. It would be interesting to research techniques for estimating traction well in front of the car. This could of course be used to slow the car down to the point that it can stop more easily, and to increase gaps between cars. However, it might do much more.
A truly accurate traction measurement could come by actually moving wheels at slightly different speeds. Perhaps just speeding up wheels at two opposite corners (very slightly) or slowing them down could measure traction. Or perhaps it would make more sense to have a small probe wheel at the front of the car that is always measuring traction in icy conditions. Of course, anything learned by the front wheels about traction could be used by the rear wheels.
For example, even today an anti-lock brake system could, knowing the speed of the vehicle, notice when the front wheels lock up and predict when the rear wheels will be over that same stretch of road. Likewise if they grip, it could be known as a good place to apply more braking force when the rear wheels go over.
In addition, this is something cars could share information about. Each vehicle that goes over a stretch of road could learn about the surface and transmit that for cars yet to come, with timestamps of course. One car might make a very accurate record of the road surface that other cars passing by soon after could use. If for nothing else, this would allow cars to know what a workable speed and inter-car gap is. This needs positioning more accurate than GPS, but that could easily be attained with mile marker signs on the side of the road that an optical scanner can read, combined with accurate detection of the dotted lines marking the lanes, since GPS alone can't reliably tell you which lane you're in. Lane markers could themselves contain barcodes if desired -- highly redundant barcodes that would tolerate lots of missing pieces, of course.
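One way such a shared record might be sketched (a toy model; the segment identifiers, the expiry time and the 0-to-1 traction scale are all invented): each car posts a traction estimate for a stretch of road with a timestamp, and following cars use only reports fresh enough to trust, since ice can form or melt quickly.

```python
import time

class RoadSurfaceMap:
    """Shared traction reports keyed by position along the road (hypothetical)."""
    MAX_AGE = 600  # seconds; discard stale reports, as conditions change fast

    def __init__(self):
        self._reports = {}  # segment_id -> (traction 0..1, timestamp)

    def report(self, segment_id, traction, timestamp=None):
        """A passing car records what its wheels learned about this stretch."""
        self._reports[segment_id] = (traction, timestamp or time.time())

    def traction_ahead(self, segment_id, now=None):
        """Freshest usable estimate for a segment, or None if stale or unknown."""
        now = now or time.time()
        entry = self._reports.get(segment_id)
        if entry is None or now - entry[1] > self.MAX_AGE:
            return None
        return entry[0]

road = RoadSurfaceMap()
road.report("mile-42.3-lane-2", traction=0.2, timestamp=1000)  # icy patch
print(road.traction_ahead("mile-42.3-lane-2", now=1100))  # 0.2
print(road.traction_ahead("mile-42.3-lane-2", now=2000))  # None (stale)
```

The same structure works within one car: what the front wheels learn at a segment becomes a report the rear wheels consume a fraction of a second later.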
This technology could be applied long before the cars drive themselves. It's a useful technology for a human driven car where the human driver gets advice and corrections from an in-car system. "Slow down, there's a patch of ice ahead" could save lives. I've predicted that the roadmap to the self-driving car involves many incremental improvements which can be sold in luxury human-driven cars to make them safer and eventually accident proof. This could be a step.
Submitted by brad on Tue, 2008-02-26 13:00.
I’m not the first to think of this idea, but in my series of essays on self driving cars I thought it would be worth discussing some ideas on suspension.
Driven cars need to have a modestly tight suspension, as the driver needs to feel the road. An AI driven car doesn’t need that, so the suspension can be tuned for the maximum comfort of the passengers. You can start by just making it much softer than a driver would like, but you can go further.
There are active suspension systems that use motors, electromagnets or other systems to control the ride. Now there are even products to use ferrofluids, whose viscosity can be controlled by magnetic fields, in a shock absorber.
I propose combining that with a scanner which detects changes in the road surface and predicts exactly the right amount of active suspension or shock absorption needed for a smooth ride. This could be done with a laser off the front bumper, or even mechanically with a small probe off the front with its own small wheel in front of the main wheel.
As such systems improve, you could even imagine it making sense to give a car more than 4 wheels. With the proper distribution of wheels, it could become possible, if a bump is coming up for just one or two of the wheels, to largely decouple the vehicle from those wheels and put the weight on the others. With this, most bumps might barely affect the ride. This could mean a very smooth ride even on a bumpy dirt or gravel road, or a poorly maintained road with potholes. (The decoupling would also stop the pothole from doing much damage to the tire.)
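The timing budget for such a predictive system is easy to estimate (a back-of-envelope sketch, not any shipping product’s specification): a sensor that sees a bump d meters ahead of a car moving at speed v gives the suspension d/v seconds to react.

```python
def bump_lead_time(sensor_range_m, speed_kmh):
    """Seconds between a bump appearing at the sensor's range and the
    front wheel reaching it -- the window the active suspension has to react."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return sensor_range_m / speed_ms

# A laser or probe wheel looking 5 m ahead of a car at 100 km/h:
print(round(bump_lead_time(5, 100), 3))  # 0.18 (seconds)
```

A couple of hundred milliseconds is ample for a ferrofluid damper, whose viscosity responds to a magnetic field in about a millisecond, which is part of why the pairing looks attractive.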
As a result, our self-driving cars could give us another saving, by reducing the need for spending on road maintenance. You would still need it, but not as much. Of course you still can’t get rid of hills and dips.
I predict that some riders at least will be more concerned with ride comfort than speed. If their self-driving car is a comfortable work-pod, with computer/TV and phone, time in the car will not be “downtime” if the ride is comfortable enough. Riders will accept a longer trip if there are no bumps, turns and rapid accelerations to distract them from reading or working.
Now perfect synchronization with traffic lights and other vehicles will avoid starts and stops. But many riders will prefer very gradual accelerations when starts and stops are needed. They will like slower, wider turns with a vehicle which gimbals perfectly into the turn. And fewer turns to boot. They’ll be annoyed at the human driven cars on the road which are more erratic, and force distracting changes of speed or vector. Their vehicles may try to group together, and avoid lanes with human drivers, or choose slightly slower routes with fewer human drivers.
The cars will warn their passengers about impending turns and accelerations so they can look up — the main cause of motion sickness is a disconnect between what your eyes see and your inner ear feels, so many have a problem reading or working in an accelerating vehicle.
People like a smooth, distraction free trip. In Japan, the Shinkansen features the express Nozomi trains which include cars where they do not make announcements. You are responsible for noticing your stop and getting off. It is a much nicer place to work, sleep or read.
Submitted by brad on Tue, 2008-02-19 21:11.
If you have read my articles on power, you know I yearn for the day when we get smart power, so we can have universal supplies that power everything. This hit home when we got a new Thinkpad Z61 model, which uses a new power adapter providing 20 volts at 4.5 amps through a new, quite rare power tip 8mm in diameter. For almost a decade, Thinkpads used 16 volts and a fairly standard 5.5mm plug. It got so that some companies standardized on Thinkpads and put cheap 16 volt TP power supplies in all the conference rooms, allowing employees to just bring their laptops in with no hassle.
Lenovo pissed off their customers with this move. I have perhaps 5 older power supplies, including one each at two desks, one that stays in the laptop bag for travel, one downstairs and one running an older ThinkPad. They are no good to me on the new computer.
Lenovo says they knew this would annoy people, and did it because they needed more power in their laptops, but could not increase the current in the older plug. I’m not quite sure why they need more power — the newer processors are actually lower wattage — but they did.
Here’s something they could have done to make it better.
Submitted by brad on Thu, 2008-01-31 22:59.
eBay has announced sellers will no longer be able to leave negative feedback for buyers. This remarkably simple change has caused a lot of consternation. Sellers are upset. Should they be?
While it seems to be an even-steven sort of thing, what is the purpose of feedback for buyers, other than noting if they pay promptly? (eBay will still allow sellers to mark non-paying buyers.) Sellers say they need it to have the power to give negative feedback to buyers who are too demanding, who complain about things that were clearly stated in listings and so on. But what it means in reality is the ability to give revenge feedback as a way to stop buyers from leaving negatives. The vast bulk of sellers don’t leave feedback first, even after the buyer has discharged 99% of his duties just fine.
Fear of revenge feedback was hurting the eBay system. It stopped a lot of justly deserved negative feedback. Buyers came to know this, and know that a seller with a 96% positive rating is actually a poor seller in many cases. Whatever happens on the new system, buyers will also come to notice it. Sellers will get more negatives but they will all get more negatives. What matters is your percentile more than your percentage. In fact, good sellers may get a better chance to stand out in the revenge free world, because they will get fewer negatives than the bad sellers who were avoiding negatives by threat of revenge.
As such, the only sellers who should be that afraid are ones who think they will get more negatives than average.
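The percentile point can be illustrated with a tiny sketch (all numbers invented): if the change adds negatives for every seller roughly alike, each seller’s standing relative to the others survives even as raw percentages fall.

```python
def percentile_rank(my_positive_rate, all_rates):
    """Fraction of sellers whose positive-feedback rate is below mine."""
    below = sum(1 for r in all_rates if r < my_positive_rate)
    return below / len(all_rates)

# Raw scores drop across the board after the change, but relative standing holds:
before = [0.99, 0.97, 0.96, 0.92]
after  = [0.95, 0.91, 0.89, 0.82]  # everyone collects more negatives
print(percentile_rank(0.99, before))  # 0.75
print(percentile_rank(0.95, after))   # 0.75 -- same seller, same percentile
```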
To help, eBay should consider showing feedback scores before and after the change as well as total. By not counting feedback that’s over a year old they will effectively be doing that within a year, of course.
There were many options for elimination of revenge feedback. This one was one of the simplest, which is perhaps why eBay went for it. I would tweak a bit, and also take a look at a buyer’s profile and how often they leave negative feedback as a fraction of transactions. In effect, make a negative from a buyer who leaves lots and lots of negatives count less than one who never leaves negatives. Put simply, you could give a buyer some number, like 10 negatives per 100 transactions. If they do more than that, their negatives are reduced, so that if they do 20 negatives, each one only counts as a half. That’s more complex but helps sellers avoid worrying about very pesky buyers.
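The discounting scheme above can be sketched directly (function and parameter names are mine; the 10-per-100 allowance comes from the example in the text):

```python
def discounted_negative_weight(negatives, transactions, allowance_per_100=10):
    """Weight of each negative a given buyer leaves. Up to the allowance a
    negative counts fully; beyond it, each one counts proportionally less."""
    allowed = allowance_per_100 * transactions / 100
    if negatives <= allowed:
        return 1.0
    return allowed / negatives

print(discounted_negative_weight(10, 100))  # 1.0 -- within the allowance
print(discounted_negative_weight(20, 100))  # 0.5 -- each counts as a half
```

The total weighted impact a serial complainer can inflict is thus capped at the allowance, which is the protection for sellers.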
Feedback on buyers was always a bit dubious. After all, while you can cancel bids, it’s hard to pick your winner based on their feedback level. If your winner has a lousy buyer reputation, there is not normally much you can do — just sit and hope for funds.
If eBay wants to get really bold, they could go a step further and make feedback mandatory for all buyers (i.e., your account gets disabled if you have too many transactions over 40 days old with feedback still not left). This would make feedback numbers much more trustworthy for other buyers, though the lack of fear of revenge should do most of this. eBay doesn’t want to go too far. It likes high reputations; they grease the wheels of commerce that eBay feeds on.
One thing potentially lost here is something that never seemed to happen anyway. I always felt that if the seller had a very low reputation (few transactions) and the buyer had a strong positive reputation, then the order of who goes first should change. I.e., the seller should ship before payment, and the buyer pay after receipt and satisfaction. But nobody ever goes for that, and now they will do so even less often. A nice idea might be that if a seller offers this, it opens up the buyer to getting negative feedback again, and the seller would not offer it to buyers with bad feedback.
Submitted by brad on Sat, 2008-01-12 16:33.
I’ve written before about both the desire for universal dc power and more simply universal laptop power at meeting room desks.
Today I want to report we’re getting a lot closer. A new generation of cheap “buck and boost” ICs which can handle more serious wattages with good efficiency has come to the market. This means cheap DC to DC conversion, both increasing and decreasing voltages. More and more equipment is now able to take a serious range of input voltages, and also to generate them. Being able to use any voltage is important for battery powered devices, since batteries start out with a high voltage (higher than the one they are rated for) and drop over their discharge to around two-thirds of that before they are viewed as depleted. (With some batteries, heavy depletion can really hurt their life. Some are more able to handle it.)
With a simple buck converter chip, at a cost of about 10-15% of the energy, you get a constant voltage out no matter what the battery is putting out. This means more reliable power and also the ability to use the full capacity of the battery, if you need it and it won’t cause too much damage. These same chips are in universal laptop supplies. Most of these supplies use special magic tips which fit the device they are powering and also tell the supply what voltage and current it needs.
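The conversion arithmetic is simple to illustrate (the 88% efficiency and the voltages here are typical assumptions for a sketch, not measurements of any particular chip): output power is input power times efficiency, so the deliverable output current follows directly even as the battery sags.

```python
def buck_output_current(v_in, i_in_max, v_out, efficiency=0.88):
    """Output current a buck converter can deliver at v_out, given the
    supply's voltage and current limit and a typical conversion efficiency."""
    p_in = v_in * i_in_max              # watts available at the input
    return efficiency * p_in / v_out    # watts out, divided by output voltage

# A 4-cell pack sagging from 16.8 V to 12.0 V still holds a steady 5 V rail,
# with the available current shrinking as the pack drains:
print(round(buck_output_current(16.8, 2.0, 5.0), 2))  # 5.91 (amps)
print(round(buck_output_current(12.0, 2.0, 5.0), 2))  # 4.22 (amps)
```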
Submitted by brad on Wed, 2008-01-09 22:17.
I want to enhance two other ideas I have talked about. The first was the early adoption of self-driving cars for parking. As I noted, long before we will accept these cars on the road we’ll be willing to accept automatic parking technology in specially equipped parking lots that lets us get something that’s effectively valet parking.
I also wrote about teleoperation of drive-by-wire cars for valet parking as a way to get this even earlier.
Valet parking has a lot of advantages. (I often joke, “I want to be a Valet. They get all the best parking spots” when I see a Valet Parking Only sign.) We’ve given up to 60% of our real estate to cars, a lot of that to parking. It’s not just denser, though. It can make a lot of sense at transportation hubs like airports, where people are carrying things and want to drive right up close with their car and walk right in. This is particularly valuable in my concept of the minimalist airport, where you just drive your car up to the fence at the back of the airport and walk through a security gate at the fence right onto your plane, leaving a valet to move your car somewhere, since you can’t keep it at the gate.
But valet parking breaks down if you have to move the cars very far, because the longer it takes to do this, the fewer cars you can handle per valet, and if the flow is imbalanced, you also have to get valets back quickly even if there isn’t another car that needs to come back. Valet parking works best of all when you can predict the need for your car a few minutes in advance and signal it from your cell phone. (I stayed at a hotel once with nothing but valet parking. The rooms were far enough from the door, however, that if you called from your room phone, your car was often there when you got to the lobby.)
So I’m now imagining that as cars get more and more drive-by-wire features, that a standardized data connection be created (like a trailer hitch brake connection, but even more standard) so that it’s possible to plug in a “valet unit.” This means the cars would not have any extra costs, but the parking lots would be able to plug in units to assist in the automated moving of the cars.
Submitted by brad on Tue, 2007-11-13 13:20.
Ok, I haven't had a new laptop in a while so perhaps this already happens, but I'm now carrying more devices that can charge off the USB power, including my cell phone. It's only 2.5 watts, but it's good enough for many purposes.
However, my laptops and desktops do not provide USB power when in standby or off. So how about a physical or soft switch to enable that? Or even a smart mode in the OS that lets you list which devices you want to keep powered and which you don't? (This would probably keep all devices powered if any one such device is connected, unless you had individual power control for each plug.)
This would only be when on AC power of course, not on battery unless explicitly asked for as an emergency need.
To get really smart a protocol could be developed where the computer can ask the USB device if it needs power. A fully charged device that plans to sleep would say no. A device needing charge could say yes.
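A sketch of how the host side of such a protocol might look (everything here is hypothetical; no such USB power-negotiation request exists, and the class and fields are invented for illustration):

```python
class USBDevice:
    """Hypothetical device-side answer to a 'do you still need power?' query."""
    def __init__(self, name, charge_level, plans_to_sleep):
        self.name = name
        self.charge_level = charge_level    # 0.0 (empty) .. 1.0 (full)
        self.plans_to_sleep = plans_to_sleep

    def wants_standby_power(self):
        """A full or sleeping device says no; one still charging says yes."""
        return self.charge_level < 1.0 and not self.plans_to_sleep

def ports_to_keep_powered(devices):
    """Host side: keep the standby rail up only where a device wants it."""
    return [d.name for d in devices if d.wants_standby_power()]

devices = [USBDevice("phone", 0.40, plans_to_sleep=False),
           USBDevice("mouse", 1.00, plans_to_sleep=True)]
print(ports_to_keep_powered(devices))  # ['phone']
```

On hardware without per-port power switching, the rule degrades to the behavior described above: power everything if any device answers yes.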
Of course, you only want to do this if the power supply can efficiently generate 5 volts. Some PC power supplies are not efficient at low loads and so may not be a good choice for this, and smaller power supplies should be used.
Submitted by brad on Mon, 2007-11-12 16:49.
There’s a lot of equipment you don’t need to have for long. And in some cases, the answer is to rent that equipment, but only a small subset of stuff is available for rental, especially at a good price.
So one alternative is what I would call a “ReBay” — buy something used, typically via eBay, and then after done with it, sell it there again. In an efficient market, this costs only the depreciation on the unit, along with shipping and transaction fees. Unlike a rental, there is little time cost other than depreciation.
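The economics above can be put in back-of-envelope form. All the prices and the fee rate below are made-up examples, not eBay's actual fee schedule.

```python
# Net cost of a "ReBay": buy used, use the item, then resell it.
# In an efficient market you pay only depreciation + shipping + fees.

def rebay_cost(buy_price, sell_price, shipping_in, shipping_out, fee_rate=0.09):
    """Net cost of temporary ownership."""
    depreciation = buy_price - sell_price
    fees = sell_price * fee_rate   # transaction fees paid when you resell
    return depreciation + shipping_in + shipping_out + fees

# Example: buy a used camera lens for $400, resell it for $380 later.
cost = rebay_cost(400, 380, shipping_in=12, shipping_out=12, fee_rate=0.09)
print(round(cost, 2))  # 78.2
```

Unlike a rental, nothing in that total scales with how long you keep the item, other than whatever extra depreciation accrues.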
For some items, like DVDs and books, we see companies that cater specifically to this sort of activity, such as Peerflix and Bookmooch.
But it seems that eBay could profit well from encouraging these sorts of markets (while vendors of new equipment might fear it eats into their sales.)
Here are some things eBay could do to encourage the ReBay.
- By default, arrange so that all listings include a licence to re-use the text and original photographs from a listing in a later resale listing on eBay. Sellers could turn this off, but most listings would then be reusable from a copyright standpoint.
- Allow the option to easily re-list an item you’ve won on eBay, including starting from the original text and photos as above. If you add new text and photos, you must allow your buyer to use them as well.
- ReBays would be marked as such, however, and generally text would be added to the listing to describe any wear and tear since the prior listing. In general, an anonymised history of the ReBaying should be available to the buyer, as well as the feedback history of the seller’s original purchase.
- ReBayers would keep the packaging in which they got products. As such, unless they declare a problem with the packaging, they would be expected to charge true shipping (as eBay calculates) plus a very modest handling fee. No crazy inflated shipping or flat rate shipping.
- Since some of these things go against the seller’s interests (but are in the buyer’s) it may be wise for eBay to offer reduced auction fees and PayPal fees on a ReBay. After all, they’re collecting fees many times on such items, and the payment will often be funded from a PayPal balance.
- Generally you want buyers who are close by, but for ReBaying you may also prefer to sell to someone outside your state to avoid having to collect sales tax.
- Because ReBayers will have actually used their items, they will have a good idea of their condition. They should be required to rate it, with no need for “as-is” disclaimers or claims of not knowing whether it works.
This could also be done inside something like Craigslist. Craigslist is more popular for local items (which is good because shipping cost is now very low or “free”) though it does not have auctions or other such functionality. Nor is it as efficient a market.
Submitted by brad on Thu, 2007-10-25 12:31.
I have written a few times before about Versed, the memory drug, and the ethical and metaphysical questions that surround it. I was pointed today to a story from Time about propofol, which, like the Men in Black neuralyzer pen, can erase the last few minutes of your memory from before you are injected with it. This is different from Versed, which stops you from recording memories after you take it.
Both raise interesting questions about unethical use. Propofol knocks you out, so it’s perhaps of only limited use in interrogation, but I wonder whether more specific drugs might exist in secret (or come along with time) to just zap the memory. (I would have to learn more about how it acts to consider if that’s possible.)
Both bring up thoughts of the difference between our firmware and our RAM. Our real-time thoughts and very short term memories seem to exist in a very ephemeral form, perhaps even as electrical signals. Similar to RAM — turn off the computer and the RAM is erased, but the hard disk is fine. People who flatline or go through serious trauma often wake up with no memory of the accident itself, because they lost this RAM. They were “rebooted” from more permanent encodings of their mind and personality — wirings of neurons or glia etc. How often does this reboot occur? We typically don’t recall the act of falling asleep, or even events or words from just before falling asleep, though the amnesia isn’t nearly so long as that of people who flatline.
These drugs must trigger something similar to this reboot. While under Versed, I had conversations. I have no recollection of anything after the drug was injected, however. It is as if there was a version of me which became a “fork.” What he did and said was destined to vanish, my brain rebooting to the state before the drug. Had this other me been aware of it, I might have thought that this instance of me was doomed to a sort of death. How would you feel if you knew that what you did today would be erased, and tomorrow your body — not the you of the moment — would wake up with the same memories and personality as you woke up with earlier today? Of course many SF writers have considered this, as well as some philosophers. It’s just interesting to see drugs making the question more real than it has been before.
Submitted by brad on Sat, 2007-07-28 17:05.
Ever since the first science fiction about cyberspace (first seen in Clarke’s 1956 “The City and the Stars” and more fully in 1976’s “Doctor Who: The Deadly Assassin”) people have wanted to build online 3-D virtual worlds. Snow Crash gelled it even further for people. 3D worlds have done well in games, including MMORPGs, and recently Second Life has attracted a lot of attention, first for its interesting world and its even more interesting economy, but lately for some of the ways it has not succeeded, such as its failure as a venue for corporate-sponsored stores.
Let me present one take on why 3D is not all it’s cracked up to be. Our real world is 3D of course, but we don’t view it that way. We take it in via our 2D eyes and our 1.5D ears, and then build a model of its 3D elements good enough to work in it. In a way I will call this 2.5D, because it’s more than 2D but less than 3. But because we start in two dimensions, and use 2D screens, 3D interfaces on a flat screen are actually worse than ones designed for 2D. Anybody who tried the original VRML experiments that attempted to build site navigation in 3D, where you had to turn around your virtual body in order to use one thing or another, realized that.
Now it turns out the fact that 3D is harder is a good thing when it comes to games. Games are supposed to be a challenge. It’s good that you can’t see everything and can get confused. It’s good that you can sneak up behind your enemy, unseen, and shoot him. Because it makes the game harder to win, 3D works in games.
But for non-games, including second life, 3D can just plain make it harder. We have a much easier time with interfaces that are logical, not physical, and present all the information we need to use the system in one screen we can always see. The idea that important things can be “behind us” makes little sense in a computer environment. And that’s true for social settings. When you sit in a room of people and talk, it’s a bug that some people are behind you and some are in front of you. You want to see everybody, and have everybody see your face, the way the speaker on a podium would. The real 3D world can’t do that for a group of people, but virtual worlds can.
I am not saying 3D can’t have its place. You want and need it for modeling things from the real world, as in CAD/CAM. 3D can be a place to show off certain things, and of course a place to play games.
In making second life, a better choice might have been a 2D interface that has portals to occasional 3D environments for when those environments make sense. That would let those who want to build 3D objects in the environment get the ability to do so. But this would not have been nearly as sexy or as Snow-Crashy, so they didn’t do it. Indeed, it would look too much like an incremental improvement over the web, and that might not have gotten the same excitement, even if it’s the right thing to do. The web is also 2.5D, a series of 2D web pages with an arbitrary network of connections between them that exists in slightly more than 2 dimensions. And it has its 3D enclaves, though they are rare and mostly hard to use.
Another idea for a VR world might be a 3D world with 360 degree vision. You could walk around it but you could always see everything, laid out as a panorama. You would not have to turn, just point where you wish to go. It might be confusing at first but I think that could be worth experimenting with.
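The core of that 360-degree idea is that every object's bearing from the avatar maps to a horizontal position on one wraparound strip, so nothing is ever "behind you." A toy projection under assumed names; this is not any real engine's API, and the 3600-pixel strip width is an arbitrary choice.

```python
# Project a 3D-world position onto a 360-degree panorama strip:
# the object's bearing around the avatar becomes its x coordinate.

import math

def panorama_x(avatar, obj, screen_width=3600):
    """Map an object's (x, y) world position to an x pixel on the strip."""
    dx = obj[0] - avatar[0]
    dy = obj[1] - avatar[1]
    bearing = math.atan2(dy, dx) % (2 * math.pi)  # 0..2*pi around the avatar
    return int(bearing / (2 * math.pi) * screen_width)

avatar = (0.0, 0.0)
print(panorama_x(avatar, (10.0, 0.0)))   # 0     (dead ahead on the strip)
print(panorama_x(avatar, (-10.0, 0.0)))  # 1800  (what would be "behind" you)
```

Since the mapping is continuous and covers the full circle, pointing at a spot on the strip is enough to pick a direction of travel, with no turning required.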
Submitted by brad on Thu, 2007-07-12 15:26.
It’s way late, but I finally put captions on my gallery of regular-aspect photos from Burning Man 2006.
Some time ago I put together the 2006 Panoramas but just never got around to doing the regulars. There are many fun ones here; in particular, novel are the ones of the burn taken from above it.
I also did another aerial survey, but that remains unfinished. Way too much processing to do, and Google did a decent one in google maps. I did put up a few such photos there.
Enjoy the 2006 Burning Man Photos.
Submitted by brad on Thu, 2007-07-05 01:36.
Another silly lolcat
Based on the common lolcat message, “I’m in ur base, killing ur d00ds.”
Submitted by brad on Fri, 2007-06-29 12:48.
Earlier I wrote about the frenzy buying PlayStation 3s on eBay and lessons from it. There’s a smaller scale frenzy going on now about the iPhone, which doesn’t go on sale until 6pm today. With the PS3, many stores pre-sold them, and others lined up. In theory Apple/AT&T are not pre-selling, and are limiting people to 2 units, though many eBay sellers are claiming otherwise.
The going price for people who claim they have one, either for some unstated reason, or because they are first in line at some store, is about $1100, almost twice the cost. A tidy profit for those who wait in line, time their auction well and have a good enough eBay reputation to get people to believe them. Quite a number of such auctions have closed at such prices with “buy it now.” If you live in a town without a frenzy and line, it might do you well to go down and pick up two iPhones. Bring your laptop with wireless access to update your eBay auction. None of the auctions I have seen have gone so far as to show a picture of the seller waiting in line to prove it.
eBay has put down some hard terms on iPhone sellers and pre-sellers. It says it does not allow pre-sales, but seems to be allowing those sellers who claim they can guarantee a phone. It requires a picture of the actual item in hand, with a non-photoshopped sign in the picture with the seller’s eBay name. A number of items show a stock photo with an obviously photoshopped tag. In spite of the publicised limit of 2, a number of people claim they have 4 or more.
It seems Apple may have deliberately tried to discourage this by releasing at 6pm on Friday, too late to get to Fedex in most places. Thus all most sellers can offer is getting the phone Monday, which is much less appealing, since that leaves a long window to learn that there are plenty more available Monday, and loses the all-important bragging rights of having an iPhone at weekend social events. Had they released it just a few hours earlier, I think sales like this would have been far more lucrative. (While Apple would not want to leave money on the table, it’s possible high eBay prices would add to the hype and be in their interest.)
As before, I predict timing of auctions will be very important. At this point even a 1 day auction will close after 18 hours of iPhone sales, adding a lot of risk. The PS3 kept its high value for much of the Christmas season, but the iPhone, if not undersupplied, may drop to retail in as little as a day. A standard 1 week auction would be a big mistake. Frankly I think paying $1200 (or a $300 wait-in-line fee) is pretty silly.
The iPhone, by the way, seems like a cool generalized device. A handheld that has the basic I/O tools including GSM phone and is otherwise completely made of touchscreen seems a good general device for the future. Better with a small bluetooth keyboard. Whether this device will be “the one” remains to be seen, of course.
Submitted by brad on Mon, 2007-06-25 13:41.
Last week I talked briefly about self-driving delivery vehicles. I’ve become interested in what I’ll call the “roadmap” (pun intended) for the adoption of self-driving cars. Just how do we get there from here, taking the technology as a given? I’ve seen and thought of many proposals, and been ignoring the one that should stare us in the face — delivery. I say that because this is the application the DARPA grand challenge is actually aimed at. They want to move cargo without risks to soldiers. We mostly think of that as a path to the tech that will move people, but it may be the pathway.
Robot delivery vehicles have one giant advantage. They don’t have to be designed for passenger safety, and you don’t have to worry about that when trying to convince people to let them on the road. They also don’t care nearly as much about how fast they get there. Instead what we care about is whether they might hit people, cars or things, or get in the way of cars. If they hit things or hurt their cargo, that’s usually just an insurance matter. In fact, in most cases even if they hit cars, or cars hit them, that will just be an insurance matter.
A non-military cargo robot can be light and simple. It doesn’t need crumple zones or airbags. It might look more like a small electric trike, on bicycle wheels. (Indeed, the Blue Team has put a focus on making it work on 2 wheels, which could be even better.) It would be electric (able to drive itself to charging stations as needed) and mechanically, very cheap.
The first step will be to convince people they can’t hit pedestrians. To do that, the creators will need to make an urban test track and fill it with swarms of the robots, and demonstrate that they can walk out into the swarm with no danger. Indeed, like a school of fish, it should be close to impossible to touch one even if you try. Likewise, skeptics should be able to get onto bicycles, motorcycles, cars and hummers and drive right through the schools of robots, unable to hit one if they try. After doing that for half an hour and getting tired, doubters will be ready to accept them on the roads.