Who among us with even a wisp of steampunk in our soul does not love the idea of an airship renaissance? Airships are beautiful and majestic, and modern hybrid airship designs are extraordinarily capable. They far transcend inappropriate fears of a Hindenburg-like disaster. No wonder some enthusiasts foresee a coming day when airships will again fly in great numbers as replacements for some fixed-wing aircraft, as new vehicles for air cargo transport, and as floating luxury liners.

Unfortunately, for reasons I explored in a series of posts back in 2011, I’m skeptical of this glorious airship resurgence. Hybrid airships work, but to triumph they need to make practical, economic sense and be better than the transportation alternatives. I’m not convinced that’s true for most of the listed applications. (The important exception is luxury cruising: any business built on rich people’s willingness to pay top dollar for great experiences can defy some of the usual constraints.)

Start with my Txchnologist story “Lead Zeppelin: Can Airships Overcome Past Disasters and Rise Again?”, then continue with my Gleaming Retort posts “Does Global Warming Help the Case for Airships?” and “Zeppelin Disappointments, Airship Woes.”

In celebration of the Mars rover Curiosity’s fantastic first year of operations, here’s a look back at a series of posts I did on the unusual, risky, but successful sky crane technology used to deliver the robot to the surface of the Red Planet. In “NASA’s sky crane over Mars” for SmartPlanet, I discussed how the sky crane maneuver would work and why such an unorthodox way of landing was necessary. “Satisfying Curiosity: preparing for the Mars landing” was a primer on that same subject I wrote for PLOS BLOGS just before the descent, including a review of where Curiosity would go and what exactly it would be doing to explore the planet. And in “Why the sky crane isn’t the future for Mars landings,” I offered an opinion about why we’re not likely to see many repeat performances by that technology even though it performed beautifully. (Nothing I’ve heard since publishing that piece has given me reason to reconsider.)

Back in 2009, I patted our dog Newman on the head for what I later calculated was about the 15,000th time. That time proved different from every other, however. My fingers found an unexpected depression half the size of a ping-pong ball above and behind his right eye.

That was how my wife and I discovered that our dear pet had a brain tumor, and it marked the beginning of a nearly two-year adventure in learning how dogs are treated for cancer—and how, for better or worse, their treatment differs from what humans receive. Read more about it in my Txchnologist story “Cancer and Dogs: One Pet’s Tale.”

Anahad O’Connor was nice enough to take note of my piece in a “Well Pets” column that he wrote for The New York Times, entitled “Chemotherapy for Dogs.”

This memorial to Newman also wouldn’t be complete without the beautiful video that my wife put together for him:

Irony doesn’t come much easier than this. Saturday’s New York Times featured an article on “The Overconfidence Problem in Forecasting,” about the almost universal tendency for people to think their assessments and decisions are more correct than they really are. The article closes smartly with this quote from Mark Twain:

“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”

Very tidy. Except there’s no good evidence that Mark Twain ever wrote or said those words. (And neither did humorist Will Rogers.)

The actual author? It may be Charles Dudley Warner, an editorial writer for the Hartford Courant who was a friend of Mark Twain and who might have been paraphrasing him… or someone else… unless he was simply piping an imaginary quote for better copy. We may never know.

Bob Kalsey examined the roots of these fatherless words in 2008, and pointed out that similar sentiments were voiced by Confucius, Socrates and Satchel Paige, among others.

Ray Kurzweil, the justly lauded inventor and machine intelligence pioneer, has been predicting that humans will eventually upload their minds into computers for so long that I think his original audience wondered whether a computer was a type of fancy abacus. It simply isn’t news for him to say it anymore, and since nothing substantive has happened recently to make that goal any more imminent, there’s just no good excuse for Wired to still be running articles like this:

Reverse-engineering the human brain so we can simulate it using computers may be just two decades away, says Ray Kurzweil, artificial intelligence expert and author of the best-selling book The Singularity is Near.

It would be the first step toward creating machines that are more powerful than the human brain. These supercomputers could be networked into a cloud computing architecture to amplify their processing capabilities. Meanwhile, algorithms that power them could get more intelligent. Together these could create the ultimate machine that can help us handle the challenges of the future, says Kurzweil.

This article doesn’t explicitly refer to Kurzweil’s inclusion of uploading human consciousness into computers as part of his personal plan for achieving immortality. That’s good, because the idea has already been repeatedly and bloodily drubbed—by writer John Pavlus and by Glenn Zorpette, executive editor of IEEE Spectrum, to take just two recent examples. (Here are audio and a transcription of a conversation between Zorpette, writer John Horgan and Scientific American’s Steve Mirsky that further kicks the dog. And here’s a link to Spectrum‘s terrific 2008 special report that puts the idea of the Singularity in perspective.)

Instead, the Wired piece restricts itself to the technological challenge of building a computer capable of simulating a thinking human brain. As usual, Kurzweil justifies his 2030 timeline for that accomplishment by pointing to exponential advances in technology, as famously embodied by Moore’s Law, and to a familiar bit of biological reductionism.


Previously, I slightly differed with David Crotty’s good post about why open blogging networks might be incompatible with the business models of established publishing brands, particularly scientific brands, for which credibility is king. David had correctly diagnosed the very real sources of conflict, I thought, but those problems should become unmanageable only in networks whose pure, principled openness goes beyond anything publishers seemed interested in embracing anyway. The more important consideration, in the eyes of bloggers and an increasingly digital-savvy audience, will be how the brand handles the problems that inevitably arise—in much the same way that how publications manage corrections and mistakes already becomes part of their reputation.

Or to put it another way: the openness of a blogging network doesn’t imperil a brand’s editorial value so much as it helps to define it. (Would that I had thought to phrase it so pithily before!)

Nevertheless, I do think blogging networks potentially pose at least two other types of problems that could be more seriously at odds with commercial brands’ plans. But neither is exclusive to blogging networks. Rather, they are business problems common to many digital publishing models—the presence of a blogging network just amplifies them.
