Miscellaneous

Who among us with even a wisp of steampunk in our soul does not love the idea of an airship renaissance? Airships are beautiful and majestic, and modern hybrid airship designs are extraordinarily capable. They far transcend inappropriate fears of a Hindenburg-like disaster. No wonder some enthusiasts foresee a coming day when airships will again fly in great numbers as replacements for some fixed-wing aircraft, as new vehicles for air cargo transport, and as floating luxury liners.

Unfortunately, for reasons I explored in a series of posts back in 2011, I’m skeptical of this glorious airship resurgence. Hybrid airships work, but to triumph they need to make practical, economic sense and be better than the transportation alternatives. I’m not convinced that’s true for most of the listed applications. (The important exception is luxury cruising: any business built on rich people’s willingness to pay top dollar for great experiences can defy some of the usual constraints.)

Start with my Txchnologist story “Lead Zeppelin: Can Airships Overcome Past Disasters and Rise Again?”, then continue with my Gleaming Retort posts “Does Global Warming Help the Case for Airships?” and “Zeppelin Disappointments, Airship Woes.”

In celebration of the Mars rover Curiosity’s fantastic first year of operations, here’s a look back at a series of posts I did on the unusual, risky, but successful sky crane technology used to deliver the robot to the surface of the Red Planet. In “NASA’s sky crane over Mars” for SmartPlanet, I discussed how the sky crane maneuver would work and why such an unorthodox way of landing was necessary. “Satisfying Curiosity: preparing for the Mars landing” was a primer on the same subject that I wrote for PLOS BLOGS just before the descent, including a review of where Curiosity would go and what exactly it would be doing to explore the planet. And in “Why the sky crane isn’t the future for Mars landings,” I offered an opinion about why we’re not likely to see many repeat performances by that technology even though it performed beautifully. (Nothing I’ve heard since publishing that piece has given me reason to reconsider.)

Back in 2009, I patted our dog Newman on the head for what I later calculated was about the 15,000th time. That time proved different from every other, however. My fingers found an unexpected depression half the size of a ping-pong ball above and behind his right eye.

That was how my wife and I discovered that our dear pet had a brain tumor, and it marked the beginning of a nearly two-year adventure in learning how dogs are treated for cancer—and how, for better or worse, their treatment differs from what humans receive. Read more about it in my Txchnologist story “Cancer and Dogs: One Pet’s Tale.”

Anahad O’Connor was nice enough to take note of my piece in a “Well Pets” column that he wrote for The New York Times, entitled “Chemotherapy for Dogs.”

This memorial to Newman also wouldn’t be complete without the beautiful video that my wife put together for him:

Irony doesn’t come much easier than this. Saturday’s New York Times featured an article on “The Overconfidence Problem in Forecasting,” about the almost universal tendency for people to think their assessments and decisions are more correct than they really are. The article closes smartly with this quote from Mark Twain:

“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”

Very tidy. Except there’s no good evidence that Mark Twain ever wrote or said those words. (And neither did humorist Will Rogers.)

The actual author? It may be Charles Dudley Warner, an editorial writer for the Hartford Courant who was a friend of Mark Twain and who might have been paraphrasing him… or someone else… unless he was simply piping an imaginary quote for better copy. We may never know.

Bob Kalsey examined the roots of these fatherless words in 2008, and pointed out that similar sentiments were voiced by Confucius, Socrates and Satchel Paige, among others.

Ray Kurzweil, the justly lauded inventor and machine intelligence pioneer, has been predicting that humans will eventually upload their minds into computers for so long that I think his original audience wondered whether a computer was a type of fancy abacus. It simply isn’t news for him to say it anymore, and since nothing substantive has happened recently to make that goal any more imminent, there’s just no good excuse for Wired to still be running articles like this:

Reverse-engineering the human brain so we can simulate it using computers may be just two decades away, says Ray Kurzweil, artificial intelligence expert and author of the best-selling book The Singularity is Near.

It would be the first step toward creating machines that are more powerful than the human brain. These supercomputers could be networked into a cloud computing architecture to amplify their processing capabilities. Meanwhile, algorithms that power them could get more intelligent. Together these could create the ultimate machine that can help us handle the challenges of the future, says Kurzweil.

This article doesn’t explicitly refer to Kurzweil’s inclusion of uploading human consciousness into computers as part of his personal plan for achieving immortality. That’s good, because the idea has already been repeatedly and bloodily drubbed—by writer John Pavlus and by Glenn Zorpette, executive editor of IEEE Spectrum, to take just two recent examples. (Here are audio and a transcription of a conversation between Zorpette, writer John Horgan and Scientific American’s Steve Mirsky that further kicks the dog. And here’s a link to Spectrum‘s terrific 2008 special report that puts the idea of the Singularity in perspective.)

Instead, the Wired piece restricts itself to the technological challenge of building a computer capable of simulating a thinking, human brain. As usual, Kurzweil rationalizes hitting that target by 2030 by pointing to exponential advances in technology, as famously embodied by Moore’s Law (a toy version of that extrapolation appears below), and to this bit of biological reductionism:

Read Full Article
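For what it’s worth, here is how easy the exponential half of that argument is to run. The following is a minimal sketch of a Moore’s-Law-style extrapolation; the starting year, the doubling period and the petaflop baseline are my own illustrative assumptions, not Kurzweil’s numbers.

```python
# A toy version of the exponential extrapolation; every number here is an
# illustrative assumption, not one of Kurzweil's figures.
start_year = 2010
end_year = 2030
doubling_period_years = 2.0      # classic Moore's-Law cadence
flops_start = 1e15               # assume a petaflop-class supercomputer in 2010

doublings = (end_year - start_year) / doubling_period_years
flops_end = flops_start * 2 ** doublings
print(f"{doublings:.0f} doublings -> roughly {flops_end:.1e} FLOPS by {end_year}")
# ~1.0e18 FLOPS, i.e. an exaflop-class machine.
```

The extrapolation is the easy part; the leap the article takes on faith is that piling up FLOPS is the same thing as reverse-engineering a thinking brain.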

Previously, I slightly differed with David Crotty’s good post about why open blogging networks might be incompatible with the business models of established publishing brands, particularly scientific brands, for which credibility is king. David had correctly diagnosed the very real sources of conflict, I thought, but those problems should become unmanageable only in networks whose pure, principled openness went beyond anything publishers seemed interested in embracing anyway. The more important consideration, in the eyes of bloggers and an increasingly digital-savvy audience, will be how the brand handles the problems that inevitably arise—in the same way that how publications manage corrections and mistakes already becomes part of their reputation.

Or to put it another way: the openness of a blogging network doesn’t imperil a brand’s editorial value so much as it helps to define it. (Would that I had thought to phrase it so pithily before!)

Nevertheless, I do think blogging networks potentially pose at least two other types of problems that could be more seriously at odds with commercial brands’ plans. But neither is exclusive to blogging networks. Rather, they are business problems common to many digital publishing models—the presence of a blogging network just amplifies them.

Read Full Article

Inspired by David Crotty’s post at the Scholarly Kitchen, the indomitable blogfather Bora Z. tweets:

For the most part, I agree with the individual points and criticisms that David raises. Whether I agree with his bottom-line conclusion that open networks are incompatible with established brands, and maybe most especially with brands built on scientific credibility, depends on the purity of one’s definition of open.

Unquestionably, leaving a troop of bloggers to their own scruples while publishing under your banner is fraught with risk, but as problems go, it’s neither unprecedented nor unmanageable in publishing. In fact, I’d say the open blogging network problem is really just a special case of the larger challenge of joining the fast-paced, out-linking (and poorly paying) culture of online publishing. Some of the best prescriptions seem to be what David is suggesting or implying, so perhaps any disagreement I have with him is really over definitions rather than views.

Read Full Article

JR Minkel, who blogs as only he can over at A Fistful of Science, recently brought to my attention this Paul Adams article for Popular Science (and indirectly, this news story in the Guardian) about the underappreciated importance of insects as a food source for many people around the world. That prompted me to dig out this recollection of my own foray into eating insects, which I wrote up years ago.

Memoirs of an Entomophage

My reputation in some circles as a person who eats bugs has been blown out of proportion. Yes, I have knowingly and voluntarily eaten insects, but I wish people wouldn’t pluck out that historical detail to epitomize me (“You remember, I’ve told you about John—he’s the bug-eater!”). It was so out of character for me. As a boy, I was fastidious to the point of annoying priggishness; other children would probably have enjoyed making me eat insects had the idea occurred to them, but I wouldn’t have chosen to do so myself. Bug eating was something I matured into, and performed as a professional duty, even a public service.

New York Entomological Society logo

Here’s how it happened. Back in 1992, the New York Entomological Society turned 100 years old and decided to celebrate with a banquet at the map- and hunting-trophy-bedecked headquarters of the Explorers Club on East 70th Street. Yearning for attention, the Society’s leaders had the inspiration to put insects not only on the agenda but also on the menu. For hors d’oeuvres, you could try the mini fontina bruschetta with mealworm ganoush, or perhaps the wax worm fritters with plum sauce. Would you care for beetle bread with your potatoes, or are you saving room for the chocolate cricket torte? Waiter, could I get more mango dip for my water bug?

Mind you, eating insects is not so bizarre and alien a concept in most of the world. According to Gene DeFoliart, the editor of the Food Insects Newsletter (that’s right, they have a newsletter), societies outside of Europe and North America routinely eat at least some insects, sometimes because insects are the closest thing to livestock that’s available. Most of the world does not share our squeamishness about eating things with antennae. Moreover, the consequences of our cultural bigotry can be serious. The U.S. and Europe largely drive the budgets for food-related research around the world, which means that most spending on raising better food animals goes to studying cows, chickens and the like. Millions if not billions of people in Africa, Asia and Latin America, however, would get much more direct benefit from knowing how to improve the fauna with six legs (or more) that provide much of their protein.

Then, too, it’s not as though most of us in America haven’t ever eaten insects. Eight million of us live in New York alone, after all, and the Board of Health can’t be everywhere. The key difference is how many insects we’ve eaten, and how aware we were of it at the time.

Read Full Article

Martial arts are my hobby and explaining science is my job, so the recent appearance of “How karate chops break concrete blocks” on io9.com naturally caught my eye. Unfortunately, not only did it fall far short of my hopes for a lucid explanation, it parroted misleading statements from an article on exactly the same subject that has annoyed me every time I’ve remembered it over the past 10 years. (Oh wonderful Web, is there no old error you can’t make new again?) Indulge me while I try belatedly to set the record straight.

The io9 article starts by asking how the squishy human hands of martial artists can break concrete slabs, wooden boards and other considerably harder objects. Reassuringly, it wastes no time on talk of chi and similar Eastern mysticism but instead goes right to a very loose discussion of the biomechanics of hitting efficiently and striking at vulnerable spots on the target. So far, so good, although the descriptions of what to do are probably too vague to be helpful to readers who don’t already know what they’re doing.
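Since the io9 piece never actually does the arithmetic, here is a rough back-of-envelope version of it. Every number below (the effective mass of the hand, the strike speed, the contact time) is an assumption I’ve chosen for illustration from the physics-of-karate literature, not anything measured in the io9 article.

```python
# Rough impulse-momentum estimate of a karate strike.
# All values are illustrative assumptions, not measurements.
effective_mass = 0.7     # kg: hand plus part of the forearm
strike_speed = 12.0      # m/s: peak speed of a well-trained strike
contact_time = 0.005     # s: roughly how long the hand takes to stop on impact

# Average force needed to stop the hand that quickly (F = m * dv / dt).
average_force = effective_mass * strike_speed / contact_time
# Kinetic energy the hand carries into the target.
kinetic_energy = 0.5 * effective_mass * strike_speed ** 2

print(f"average impact force ~ {average_force:.0f} N")      # ~1700 N
print(f"kinetic energy delivered ~ {kinetic_energy:.0f} J")  # ~50 J
```

Under those assumptions the hand delivers close to two thousand newtons for a few milliseconds, and a supported slab or board only has to deflect roughly a millimeter before it cracks. Speed and a short contact time do the work, not a hand that is somehow harder than concrete.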

Then the article seems to go off the rails (emphasis added):

Read Full Article

Unemployed people have time on their hands, you say? Sure, if by “time” you mean the ability to travel backward through time.

I would have thought it was impossible, too, until esteemed economist Arthur Laffer set me straight with his recent op-ed in the Wall Street Journal. Laffer was arguing that it is economically counterproductive to raise unemployment benefits during economic hard times—say, now, for example—because it only creates disincentives for people to work:

Imagine what the unemployment rate would look like if unemployment benefits were universally $150,000 per year. My guess is we’d have a heck of a lot more unemployment. Common sense and personal experience indicate higher unemployment benefits will make unemployment less unattractive and thereby increase unemployment even in the Great Recession.

Can’t argue with that. True, unemployment benefits aren’t actually $150,000 a year—about three times the median household income in 2008. They’re closer to just $300 a week, or $15,600 a year. But this hypothetical argument, which supposes the exact opposite of reality (that one could on average make far more money by being unemployed than by working), is nevertheless irrefutably compelling.
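For anyone keeping score at home, the arithmetic is easy to check. The 2008 median household income figure below (roughly $50,300, from Census estimates) is my own addition for the comparison:

```python
# Comparing Laffer's hypothetical benefit with the real one.
weekly_benefit = 300                        # dollars per week, the actual figure cited
annual_benefit = weekly_benefit * 52
print(annual_benefit)                       # 15600 dollars per year

median_household_income_2008 = 50_300       # assumed Census figure for the comparison
print(150_000 / median_household_income_2008)         # ~3.0: the op-ed's fantasy benefit
print(annual_benefit / median_household_income_2008)  # ~0.31: the actual benefit
```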

What really persuaded me, though, was this graph that Laffer and the WSJ supplied to bolster the argument that “since the 1970s there’s been a close correlation between increased unemployment benefits and an increase in the unemployment rate.”

The correlation is indeed close—with increases in unemployment benefits lagging spikes in unemployment by a year or so. This can only mean one thing. Those unemployed layabouts are using some of their $150,000-a-year incomes to lounge around in Hot Tub Time Machines, jump back in time and get themselves laid off so that they can soak up all that sweet, sweet government gravy.
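In case the absurdity needs spelling out: a lagged correlation like the one in that chart is exactly what you would expect if unemployment spikes cause benefit extensions, not the other way around. Here is a toy simulation with entirely made-up numbers (no real BLS data) in which benefits do nothing but react to the previous year’s unemployment:

```python
# Toy demonstration: benefits that merely respond to last year's unemployment
# still produce a "close correlation" with benefits lagging unemployment.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1970, 2011)

# Synthetic unemployment rate: a noisy baseline plus recession spikes.
unemployment = 5.5 + rng.normal(0, 0.3, years.size)
for recession_year, bump in [(1975, 3.0), (1982, 4.0), (1992, 2.0), (2009, 4.5)]:
    unemployment += bump * np.exp(-0.5 * ((years - recession_year) / 1.5) ** 2)

# A benefits policy that does nothing but react to last year's unemployment.
benefits = 150 + 25 * np.roll(unemployment, 1)   # one-year lag
benefits[0] = benefits[1]                        # patch the wraparound at the start

def corr_with_benefits_shifted(lag):
    """Correlate unemployment in year t with benefits in year t + lag."""
    if lag == 0:
        return np.corrcoef(unemployment, benefits)[0, 1]
    return np.corrcoef(unemployment[:-lag], benefits[lag:])[0, 1]

for lag in (0, 1, 2):
    print(f"benefits {lag} year(s) after unemployment: r = {corr_with_benefits_shifted(lag):.2f}")
```

The tightest correlation shows up with benefits trailing unemployment by a year, which is the pattern Laffer’s own chart displays and the pattern reverse causation predicts.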

Laffer is apparently not alone in his profession in believing that time travel is a major influence on the U.S. economy. Nobel laureate economist Edward C. Prescott recently stated at the Society for Economic Dynamics meeting in Montreal that Obama caused the current recession, which is a neat trick because the recession started in December 2007.

In summary, then, the keys to time travel are: (1) Lose your job. (2) Study economics.