Contact Me

You are welcome to email me directly: my first name at JohnRennie.net. Or better yet, use the form at right. Please include appropriate contact information for yourself.

Thanks very much for your interest. I will respond to your message as soon as possible. 

         


Blog posts

The Unnatural Habitat of Science Writer John Rennie

Filtering by Category: Miscellaneous

Shooting down future airships

John Rennie

[Image: hybridairship.jpg]

Who among us with even a wisp of steampunk in our soul does not love the idea of an airship renaissance? Airships are beautiful and majestic, and modern hybrid airship designs are extraordinarily capable. They far transcend inappropriate fears of Hindenburg-like disaster. No wonder some enthusiasts foresee a coming day when airships will again fly in great numbers as replacements for some fixed-wing aircraft, as new vehicles for air cargo transport, and as floating luxury liners. Unfortunately, for reasons I explored in a series of posts back in 2011, I'm skeptical of this glorious airship resurgence. Hybrid airships work, but to triumph they need to make practical, economic sense and be better than the transportation alternatives. I'm not convinced that's true for most of the listed applications. (The important exception is luxury cruising: any business that's built on rich people's willingness to pay top dollar for great experiences can defy some of the usual constraints.)

Start with my Txchnologist story "Lead Zeppelin: Can Airships Overcome Past Disasters and Rise Again?", then continue with my Gleaming Retort posts "Does Global Warming Help the Case for Airships?" and "Zeppelin Disappointments, Airship Woes."

Cancer and dogs

John Rennie

That was how my wife and I discovered that our pet Newman had a brain tumor, and it marked the beginning of a nearly two-year adventure in learning how dogs are treated for cancer—and how, for better or worse, their treatment differs from what humans receive.

Read More

Never the Twain

John Rennie

[Image: samuel-l-clemens-1940-issue-10c.jpeg]

Irony doesn't come much easier than this. Saturday's New York Times featured an article on "The Overconfidence Problem in Forecasting," about the almost universal tendency for people to think their assessments and decisions are more correct than they really are. The article closes smartly with this quote from Mark Twain:

“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”

Very tidy. Except there's no good evidence that Mark Twain ever wrote or said those words. (And neither did humorist Will Rogers.)

The actual author? It may be Charles Dudley Warner, an editorial writer for the Hartford Courant who was a friend of Mark Twain and who might have been paraphrasing him... or someone else... unless he was simply piping an imaginary quote for better copy. We may never know.

Bob Kalsey examined the roots of these fatherless words in 2008, and pointed out that similar sentiments were voiced by Confucius, Socrates and Satchel Paige, among others.

How to Build a Brain Wrong

John Rennie

[Image: nerds.jpeg]

Ray Kurzweil, the justly lauded inventor and machine intelligence pioneer, has been predicting that humans will eventually upload their minds into computers for so long that I think his original audience wondered whether a computer was a type of fancy abacus. It simply isn’t news for him to say it anymore, and since nothing substantive has happened recently to make that goal any more imminent, there’s just no good excuse for Wired to still be running articles like this:

Reverse-engineering the human brain so we can simulate it using computers may be just two decades away, says Ray Kurzweil, artificial intelligence expert and author of the best-selling book The Singularity is Near.

It would be the first step toward creating machines that are more powerful than the human brain. These supercomputers could be networked into a cloud computing architecture to amplify their processing capabilities. Meanwhile, algorithms that power them could get more intelligent. Together these could create the ultimate machine that can help us handle the challenges of the future, says Kurzweil.

This article doesn’t explicitly refer to Kurzweil’s inclusion of uploading human consciousness into computers as part of his personal plan for achieving immortality. That’s good, because the idea has already been repeatedly and bloodily drubbed—by writer John Pavlus and by Glenn Zorpette, executive editor of IEEE Spectrum, to take just two recent examples. (Here are audio and a transcription of a conversation between Zorpette, writer John Horgan and Scientific American’s Steve Mirsky that further kicks the dog. And here's a link to Spectrum's terrific 2008 special report that puts the idea of the Singularity in perspective.)

Instead, the Wired piece restricts itself to the technological challenge of building a computer capable of simulating a thinking, human brain. As usual, Kurzweil rationalizes this accomplishment by 2030 by pointing to exponential advances in technology, as famously embodied by Moore’s Law, and this bit of biological reductionism:

A supercomputer capable of running a software simulation of the human brain doesn’t exist yet. Researchers would require a machine with a computational capacity of at least 36.8 petaflops and a memory capacity of 3.2 petabytes ….

<…>

Sejnowski says he agrees with Kurzweil’s assessment that about a million lines of code may be enough to simulate the human brain.

Here’s how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.

About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.
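To make that chain of numbers concrete, here it is as a minimal sketch. Every figure comes straight from the passage above; the bytes-per-line conversion is simply the one implied by his endpoint of 25 million bytes yielding a million lines:

```python
# Kurzweil's back-of-the-envelope chain, as quoted above.
base_pairs = 3_000_000_000            # human genome: ~3 billion base pairs
bits = base_pairs * 2                 # 2 bits pick one of 4 bases -> "six billion bits"
raw_bytes = bits // 8                 # ~750 MB, his "about 800 million bytes"
compressed_bytes = 50_000_000         # his claimed size after lossless compression
brain_bytes = compressed_bytes // 2   # "about half of that is the brain"

bytes_per_line = 25                   # implied by 25 MB -> "a million lines of code"
lines_of_code = brain_bytes // bytes_per_line

print(f"raw genome: ~{raw_bytes / 1e6:.0f} MB")
print(f"brain's share after compression: {brain_bytes / 1e6:.0f} MB")
print(f"lines of code: ~{lines_of_code / 1e6:.1f} million")
```

Every step computes, which is exactly the trap: the arithmetic is fine, and the premise behind the first line is where the whole thing fails.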

First, quantitative estimates of the information processing and storage capacities of the brain are all suspect for the simple reason that no one yet understands how nervous systems work. Science has detailed information about neural signaling, and technologies such as fMRI and optogenetics are yielding better information all the time about how the brain’s circuitry produces thoughts, memories and behaviors, but these still fall far short of telling us how brains do anything of interest. Models that treat neurons like transistors and action potentials like digital signals may be too deficient for the job.

But let’s stipulate that some numerical estimate is correct, because mental activities do have to come from physical processes somehow, and those can be quantified and modeled. What about Kurzweil’s premise that “The design of the brain is in the genome”?

In short, no. I was gearing up to explain why that statement is wrong, but then discovered that PZ Myers had done a far better job of refuting it than I could. Read it all for the full force of the rebuttal, but here’s a taste that captures the essence of what’s utterly off kilter:

Its design is not encoded in the genome: what's in the genome is a collection of molecular tools wrapped up in bits of conditional logic, the regulatory part of the genome, that makes cells responsive to interactions with a complex environment. The brain unfolds during development, by means of essential cell:cell interactions, of which we understand only a tiny fraction. The end result is a brain that is much, much more than simply the sum of the nucleotides that encode a few thousand proteins. [Kurzweil] has to simulate all of development from his codebase in order to generate a brain simulator, and he isn't even aware of the magnitude of that problem.

We cannot derive the brain from the protein sequences underlying it; the sequences are insufficient, as well, because the nature of their expression is dependent on the environment and the history of a few hundred billion cells, each plugging along interdependently. We haven't even solved the sequence-to-protein-folding problem, which is an essential first step to executing Kurzweil's clueless algorithm. And we have absolutely no way to calculate in principle all the possible interactions and functions of a single protein with the tens of thousands of other proteins in the cell!

Lay Kurzweil’s error alongside the others at the feet of biology’s most flawed metaphor: that DNA is the blueprint for life.

What this episode ought to call into question for reporters and editors—and yet I doubt that it will—is how reliable or credible Kurzweil’s technological predictions are. Others have evaluated his track record in the past, but I’ll have more to say on it later. For now, in closing I’ll simply borrow this final barb from John Pavlus’s wonderfully named Guns and Elmo site (he’s also responsible for the Rapture of the Nerds image I used as an opener).

How to Make a Singularity

Step 1: “I wonder if brains are just like computers?”

Step 2: Add peta-thingies/giga-whatzits; say “Moore’s Law!” a lot at conferences

Step 3: ??????

Step 4: SINGULARITY!!!11!one

Added later (5:10 pm): I should note that Kurzweil acknowledges a gap in his numeric extrapolations of engineering capabilities: even “a perfect simulation of the human brain or cortex won’t do anything unless it is infused with knowledge and trained.” Translation: we’ll have the hardware, but we won’t necessarily have the software. And I guess his statement that “Our work on the brain and understanding the mind is at the cutting edge of the singularity” is his way of saying that creating the right software will be hard.

No doubt his admission is supposed to make me as a reader feel that Kurzweil is only being forthcoming and honest, but in fact it might be the most infuriating part of the article. Computers without the appropriate software might as well be snow globes. As a technologist, Kurzweil knows that better than most of us. So he should also know that neuroscientists’ still primitive understanding of how the brain solves problems, stores and recalls memories, generates consciousness or performs any of the other feats that make it interesting largely moots his how-fast-will-supercomputers-be argument. And yet he makes it anyway.

Do Open Networks Threaten Brands? (Pt. 2)

John Rennie

Previously, I slightly differed with David Crotty’s good post about why open blogging networks might be incompatible with the business models of established publishing brands, particularly for scientific brands, for which credibility is king. David had diagnosed correctly the very real sources of conflict, I thought, but those problems should only become unmanageable with networks whose pure, principled openness went beyond anything publishers seemed interested in embracing anyway. The more important consideration, in the eyes of the bloggers and the increasingly digital-savvy audience, will be how the brand handles the problems that will inevitably arise—in the same way that how publications manage corrections and mistakes already becomes part of their reputation. Or to put it another way: the openness of a blogging network doesn’t imperil a brand’s editorial value so much as it helps to define it. (Would that I had thought to phrase it so pithily before!)

Nevertheless, I do think blogging networks potentially pose at least two other types of problems that could be more seriously at odds with commercial brands’ plans. But neither is exclusive to blogging networks. Rather, they are business problems common to many digital publishing models—the presence of a blogging network just amplifies them.

1) Problems with lack of exclusivity. Nearly all commercial publishers’ digital business models are wobbly in the knees, which means, among other things, that the writers and other creators of digital content typically don’t get paid well, if at all. In the wake of the Scienceblogs/PepsiCo mess, when many Sciblings were pouring out their hearts, even some of the well-known bloggers acknowledged that their earnings were meager—maybe enough to pay for a nice dinner every month.

Nonexistent wages don’t seem to discourage science bloggers, who do it mostly for passion, not pay. Certainly it isn’t holding up the formation of new blogging networks like Scientopia, which seemingly have no sources of revenue and where everyone works for free.

But as Bora Z. has discussed more than once in his recent (and amazing) dissections of the science blogging world, precisely because these networks can’t pay their bloggers, they don’t ask for exclusivity. The bloggers are free to write for other sites, and bloggers may even cross-post the same or similar content on multiple sites. It’s only fair, after all: they’re working for free. [Update: To clarify, the prolific Ed Yong of Not Exactly Rocket Science informs me that both Scienceblogs and Discover do ask for some degree of exclusivity, which only makes sense. My point isn't that exclusivity is totally absent from the world of science blogging, only that for the blogging networks that can't reward contributors with cash, little or no exclusivity seems to be asked in return at this point. Perhaps that will change.]

You have to love the communitarian rising-web-raises-all-boats spirit of this, and it can be good for the bloggers themselves, but most branding managers, I think, would find that a miserable arrangement. If I publish SuperCoolScience.com and realize that not only do all my bloggers appear elsewhere but so does their best content, what is my blog network doing for me? What unique value proposition does my network have that justifies my taking on whatever overhead keeps it running? And if I don’t have any control over what the bloggers are writing or linking to, I can’t even use them to steer additional traffic my way. It would be easier to put up a blog roll or RSS feeds of the same bloggers on other sites and let the Internet be my network.

This, to my mind, is a real unsolved problem for all the blogging networks: how do the desired synergies of pulling together these individual talents materialize? Blog carnivals, shared memes and the like are all great for a certain kind of community building, but they don’t seem to be enough to build a business proposition around.

All of us online throw around the word community to the point that it’s debased. Too often, what we call communities are really just big open parties: people arrive, hang out with their friends, gossip about those jerks in the corner and split before the drinks run out and somebody asks for help cleaning up. People in genuine communities have more of a shared destiny—they don’t just hang out together, they build something unique.

And that uniqueness is part of what business managers, investors, advertisers and the like are counting on seeing. None of this needs to be a problem for publishers whose models don’t require a measurable financial return on their networks. A good stable of bloggers, even nonexclusive ones, may bring a useful cachet, and the expenses may be so insignificant that it can be run as a loss leader. But for it to make sense to discuss “blogging networks” and serious “business models” in the same sentence, these nascent communities need to periodically put up a barn or something.

2) The corrupting influence of advertising. What brought this to mind was the part of David’s critique of the Scienceblogs/PepsiCo fiasco where he observed that “the bloggers were not the customers, the bloggers were the product.”

He’s right, the bloggers are the product. Or rather, a product. A mistake that many online enthusiasts make is thinking that the real customers are the universe of eyeballs reading blog posts. They’re not; at least not for commercial websites driven primarily by ad revenues. Those advertisers are the true customers because they are paying for access to that audience. In those terms, the site’s visitors are its major product. The bloggers and their posts are intermediary products—bait raised to chum the waters.

That’s not a flattering view of commercial publishing, particularly for any of us involved in creating editorial content. Yet it’s worth considering, especially in light of the old maxim, “Who pays the piper calls the tune.” If your business model depends on drawing advertisers, you can only stay in business by offering content that serves those advertisers’ interests. The editorial needn’t parrot whatever the advertisers want said, because doing so ultimately may not serve their interests, either. But if the advertisers see the content as contrary or irrelevant to their interests, they may see little reason to support it.

Nothing should be surprising about that, and the longtime survival of commercial media proves that this system can work (with occasional ugly lapses). But to the extent that open blog networks or even individual blogs are unpredictable, inconsistent or intermittently antagonistic to chunks of their audience, advertisers will be leery of them. Some advertisers may have the stomach for placing their ads in such environments, but most don’t.

For commercial publishers looking to pull together their own networks of blogs and knowing they don’t have the resources (or the inclination) to police their offerings ahead of time, the trick will be to assemble responsible, autonomous bloggers who collectively deliver a product compatible with the larger brand. They need to avoid the problem that Kent Anderson of The Scholarly Kitchen calls “filter failure.” [Update: Bora points out to me that the term "filter failure" originated with Clay Shirky.]

I imagine it’s not unlike the situation for parents whose teenage kids want to borrow the car. If the foundation for trust is there, granting that liberty can make life better all around. If it’s not, expect some sleepless nights.

Do Open Networks Threaten Brands? (Pt. 1)

John Rennie

Inspired by David Crotty's post at the Scholarly Kitchen, the indomitable blogfather Bora Z. sent out a tweet. For the most part, I agree with the individual points and criticisms that David raises. Whether I agree with his bottom-line conclusion that open networks are incompatible with established brands, and maybe most especially with brands built on scientific credibility, depends on the purity of one’s definition of open.

Unquestionably, leaving a troop of bloggers to their own scruples while publishing under your banner is fraught with risk, but as problems go, it’s neither unprecedented nor unmanageable in publishing. In fact, I’d say the open blogging network problem is really just a special case of the larger challenge of joining with fast-paced, out-linking (and poorly paying) online publishing culture. Some of the best prescriptions seem to be what David is suggesting or implying, so perhaps any disagreement I have with him is really over definitions rather than views.

David is on the nose about some of the headaches that unsupervised bloggers can pose to publishing brands. He mentions, for example, a post by neuroscientist R. Douglas Fields, blogging for Scientific American, that too credulously covered a supposed link between bedsprings and cancer. I’ll mention another: psychology blogger Jesse Bering intemperately responded to another blogger who took offense at a column he wrote. (Don’t bother looking for that response now, though, because I believe SciAm has removed it, and Bering may have subsequently apologized for it. [fix added later; see comments.]) Also, though there may not be much about Virginia Heffernan’s notorious article for the New York Times about Scienceblogs to agree with, it’s valid to suggest that the impression some of the more fiery posts make on visitors may not be what Seed Media had originally intended. (Whether that’s actually a problem for Seed Media is for it to say.)

Do those problems undermine the rationale for publishers to back open blogging networks? Let’s face it: even for highly credible, careful publishers, errors of fact and judgment occasionally find their way into print. The reputation of a publishing brand depends not just on how few of those mistakes it makes but on how it handles those that do. Hence, errata and correction pages. With the rise of breakneck publishing on the 24/7 web, lots of publishers have had to accept that some of the meticulous approaches to writing, editing and fact-checking that they used in the past are too slow, and that the advantages of saying something fast often outweigh those of more measured alternatives.

What makes this scary system manageable is a combination of technology and the online culture. Mistakes and controversies that go up online can be flagged and debated in comments, and fixed or deleted as judged appropriate. And experienced online audiences recognize that such discussion and changes occur and may accept them without necessarily losing their respect for the associated brand.

Knowing how to handle rambunctious columnists isn’t a new challenge. Editors in print and elsewhere have always sweated over how much to intrude on what columnists write. A reason that you hire a columnist is not just that he or she is good but that he or she is reliably good with a minimum of supervision. As an editor, you realize that your columnists may sometimes take positions that the publication as a whole wouldn’t stand beside; you also realize that some of your audience will hold the publication responsible anyway. How and when you step in is part of what defines your editorial identity, but it also reflects how well you trust your audience to recognize and value the diversity of views you are presenting.

Scienceblogs isn’t an anarchically pure open network. It invited certain science bloggers, then let them go with essentially no supervision thereafter. In effect, it did its quality control in advance by choosing whom to invite. Discover does this much more selectively with its far smaller stable of first-rate bloggers, to excellent effect.

Some science publishing brands may only be served by closed networks of staff bloggers, whose every word is parsed and fact-checked by other editors before it goes online. On the face of it, though, such a scheme sounds like it would lack the necessary nimbleness to thrive.

The better middle ground, which I expect we’ll continue to see more of, is the essentially open network in which publishers choose bloggers who seem to embrace a common, compatible ethos or perspective. That’s what the newly formed Scientopia.org, for example, seems to have done, while preserving a good diversity of interests and perspectives.

So I don’t think that networks of bloggers are truly problematic for established commercial brands for any of these reasons. The real challenges lie elsewhere—and I’ll get to those in a separate post.

Memoirs of an Entomophage

John Rennie

[Image: foodinsects.jpg]

JR Minkel, who blogs as only he can over at A Fistful of Science, recently brought to my attention this Paul Adams article for Popular Science (and indirectly, this news story in the Guardian) about the underappreciated importance of insects as a food source for many people around the world. That prompted me to dig out this recollection of my own foray into eating insects, which I wrote up years ago.

Memoirs of an Entomophage

My reputation in some circles as a person who eats bugs has been blown out of proportion. Yes, I have knowingly and voluntarily eaten insects, but I wish people wouldn’t pluck out that historical detail to epitomize me (“You remember, I’ve told you about John—he’s the bug-eater!”). It was so out of character for me. As a boy, I was fastidious to the point of annoying priggishness; other children would probably have enjoyed making me eat insects had the idea occurred to them, but I wouldn’t have chosen to do so myself. Bug eating was something I matured into, and performed as a professional duty, even a public service.

Here’s how it happened. Back in 1992, the New York Entomological Society turned 100 years old, and decided to celebrate with a banquet at the map-and-hunting-trophy bedecked headquarters of the Explorers Club on East 70th Street. Yearning for attention, the Society’s leaders had the inspiration to put insects not only on the agenda but also on the menu. For hors d’oeuvres, you could try the mini fontina bruschetta with mealworm ganoush, or perhaps the wax worm fritters with plum sauce. Would you care for beetle bread with your potatoes, or are you saving room for the chocolate cricket torte? Waiter, could I get more mango dip for my water bug?

Mind you, eating insects is not so bizarre and alien a concept in most of the world. According to Gene DeFoliart, the editor of the Food Insects Newsletter (that’s right, they have a newsletter), societies outside of Europe and North America routinely eat at least some insects, sometimes because they are the closest things to livestock that are available. Most of the world does not share our squeamishness about eating things with antennae. Moreover, the consequences of our cultural bigotry can be serious. The U.S. and Europe largely drive the budgets for food-related research around the world, which means that most spending on raising better food animals goes to studying cows, chickens and the like. Millions if not billions of people in Africa, Asia and Latin America, however, would get much more direct benefit from knowing how to improve the fauna with six legs (or more) that provide much of their protein.

Then, too, it’s not as though most of us in America haven’t ever eaten insects. Eight million of us live in New York alone, after all, and the Board of Health can’t be everywhere. The key difference is how many insects we’ve eaten, and how aware we were of it at the time.

I had volunteered to cover this event for Scientific American, ostensibly because it would be a lighthearted addition to our pages. (“C’mon!” I had argued. “It’ll be fun! Lighten up! Don’t you get it, they’ll be eating bugs!”) For me, writing about entomology would be a pleasant change of pace because my beat at the magazine mostly ran toward molecular biology—the study of stuff that was once alive but has been dissected down into pieces so incredibly small no one can see or care about them.

Beyond that, I also had a secret motivation for wanting to go. The truth is that I have always been somewhat afraid of insects. I love to look at them; I love to learn about them; their multilegged, exoskeletal ways fascinate me. But the notion of insects crawling on me, biting me, stinging me gives me the creeps. My hope was that if I went to this Entomology Society banquet and turned the tables on our arthropod pals, I would be free of this phobia forever.

(I must admit that this idea occurred to me after discovering that in his autobiography, former Nixon henchman G. Gordon Liddy explained that as a child, he had been afraid of rats, until one day he caught one, cooked it and ate it. It’s not a good sign, I know, that I was taking self-improvement tips from a man who was a probable sociopath and confirmed radio talk-show host.)

It was in this spirit that I arrived at the Explorers Club on May 20 to find that the entomology banquet had exploded into a full-blown, out-of-control media event. Approximately 80 people were attending the dinner as guests. Roughly another 250 were there as representatives of the press, doing a very good impression of a swarm of locusts, bumping into one another’s cameras, jostling the Explorers Club decorations, and stealing one another’s interviewees.

They were doing everything, in fact, except eating. That behavior was unusual, because the standard arithmetic that applies to media events is Journalists + Free Food = No More Food. In this case, however, the members of the fourth estate were upholding the principle of being mere observers, not participants.

Yet when the number of people reporting on an event so exceeds the number of people actually participating in it, it suddenly becomes much harder to find anyone to interview. In desperation, reporters on that night had started interviewing other reporters. Of course, the great advantage of interviewing another reporter instead of a real source is that the reporter will immediately give you the quotes you need rather than cluttering their statements with facts or trustworthy information.

Since I was at the banquet as both a diner and a reporter, I was ideal. Thus, a reporter from the New York Daily News hit on me for colorful copy, and I was happy to oblige. “This is a night on which you do a lot of drinking,” I quipped, “and when you get home, you floss.” She happily wrote this down, and then spent much of the rest of the evening bringing other print and radio journalists to my side, to whom I said pretty much the same thing.

But forget all that and ask the question that really matters: How do insects taste? Depending on how they’re prepared, they may not taste like much of anything. If the insects are ground up and used as an ingredient—as they were in the aforementioned mealworm ganoush and the cricket torte—they can be completely unnoticeable. (That’s probably not the most reassuring thing you could hear if you ordinarily worry about insects in your food.) When the insects were served whole, they had a bit more distinctiveness to offer, although not necessarily something too strange.

The first offerings of the night, for example, were small wicker baskets filled with assorted fried crickets and grubs. These tasted primarily of salt and oil, with a slightly nutty aftertaste, not unlike most of the snacks you’d find sitting in a bar. (I’ll grant you this assessment may reflect less on the insects than on the quality of the bars I frequent.) The fried insects were all too recognizable for what they were, but taken individually, they were not too off-putting. Baskets full of them, however, were much more disturbing: every time anyone removed a handful, the rest would shift position in a way that created the illusion the buggy pile was still alive and squirming.

One type of insect being served was in fact still alive: honeypot ants from the Southwest. The pea-sized abdomens of these ants were engorged with peach nectar. In the wild, these swollen workers act as living canteens for their colonies. Pop one in your mouth and bite down fast, and you might not notice that this gumdrop is still wriggling, though you might notice the extra lemony zest of formic acid it spat at you in its last moment.

I had less than favorable reviews for only two dishes from that evening, and I don’t think it’s a coincidence that they were the ones involving the largest insects. The first was listed as a roasted Australian kurrajong grub. Kurrajong is actually another name for the Brachychiton genus of trees and shrubs that can play host to many types of wood-borers; I think the insect in question is what is more commonly called a witchetty (or witjuti) grub, an old staple of the Aboriginal people. Here we have something that looks not unlike a pale link sausage, but with a head like a chive. In flavor, too, the witchetty grub is reminiscent of sausage, though one in which the meat has been stretched with too much mealy filler and the casing has perhaps a bit too much snap. Not to my personal taste, but not inherently awful.

However. That brings us to my Waterloo, the sautéed Thai water bug. This dish was every nightmare I might ever have had about an insect banquet rolled into one. For openers, it is a water bug, which is to say, it looks like a gigantic cockroach. Whatever it was sautéed in did nothing to disguise its fundamentally roachy nature. It was an insect, whole and uninviting, sitting on a plate, looking like it wanted to run away as much as I did.

Furthermore, because this water bug was about three inches long, it posed a problem that none of the other edible insects that night did: I would have to eat it in at least two bites. Which end to eat first? The front half looked spiky and crusty; the back half looked like it held reservoirs of goo.

I stared at the water bug, hoping it was not staring back at me, trying to decide whether I could bring myself to eat it at all. Hadn’t I dined on enough insects already? And wasn’t I already feeling a little less phobic about insects? How much personal growth did I need in one evening, after all? I had very nearly talked myself out of it—

—when I suddenly became aware of a halo of bright lights surrounding me. It was my fellow jackals of the press, circling, smelling my vulnerability. The camera crews had been hoping to catch somebody eating one of these six-legged abominations for the first time, and there sat I: so obliging, so quotable. Feeling unable to back out of the situation, I raised the bug to my lips (noticing that, dear god, it had real heft), oriented its head and thorax down my throat and bit down.

It managed to be even less appealing than I’d expected. As my teeth crunched through the brittle exoskeleton, the rapidly disarticulating body parts poked my tongue and palate. Oh my god, I thought, I’ve got a leg stuck between my teeth. Worse, the flavor was grotesque: acrid and somehow reminiscent of lighter fluid. The bug’s feel in my mouth made me think of globs of barbecue sauce that had fallen onto a grill and burned.

(What I’ve subsequently learned by reading an account of the evening in the Food Insects Newsletter is that the water bugs may have been cooked wrong. Everyone I spoke to that night—even those few entomophages there that night who normally relished the creatures—hated what had been done with them. You know the expression, “Misery loves company”? In this case, company doesn’t help.)

“How do you like it?” asked one of the reporters from the enviably insect-free crowd.

Resisting the temptation to spit the mouthful back onto the plate, I managed to mumble, “Mmm, that’s some bug”—not a memorable bon mot, but if Oscar Wilde ever ate insects while in prison, no epigrams marked that occasion either. At that moment I knew with crystalline certainty that my phobia about insects was not going away after that evening, that it might never be going away—and that if anything, my neurosis had picked up extra material to work with.

And no, the second bite was not any better.

Postscript: It really is worth reading that edition of the Food Insects Newsletter from 1992, if only to read this anecdote from "The Chef's View of the New York Banquet," which is a kicker I cannot top:

The kitchen staff swirled around her, carrying containers with masking-tape labels such as "cream cheese mealworm dip" and "bug grub." Behind the stove, a cockroach scurried up the wall. "Yike's," said Elliot, taking a swat at it with a rag. "That's not one of ours."

Busted Explanations for Karate Breaking

John Rennie

[Image: ericbreak1.jpeg]

Martial arts are my hobby and explaining science is my job, so the recent appearance of “How karate chops break concrete blocks” on io9.com naturally caught my eye. Unfortunately, not only did it fall far short of my hopes of offering a lucid explanation, it parroted misleading statements from an article on exactly this same subject that has been annoying me every time I've remembered it for 10 years. (Oh wonderful Web, is there no old error you can’t make new again?) Indulge me while I try belatedly to set the record straight.

The io9 article starts by asking how the squishy human hands of martial artists can break concrete slabs, wooden boards and other considerably harder objects. Reassuringly, it wastes no time on talk of chi and similar Eastern mysticism but instead goes right to a very loose discussion of the biomechanics of hitting efficiently and striking at vulnerable positions in the target. So far, so good, although the descriptions of what to do are probably too vague to be helpful to readers who don’t already know what they’re doing.

Then the article seems to go off the rails (emphasis added):

It's also important to strike quickly at the surface of the block. Most blows are part connective smack and part push. This delivers the most damage when fighting flesh, but helps protect concrete or wood. Concrete and wood have a good mix of rigidity and elasticity. The materials will bend, and even flex back like a rubber band would, but the limits of their malleability are much lower. Bending and snapping back can do more damage to them than it can to things that flex easier. By making the blow fast and pulling back, the striker hits the block hardest and allows the material to do the maximum amount of bending. A follow-through push will keep the material from snapping back, and snapping itself.

In my experience, and that of every other martial arts practitioner to whom I’ve ever spoken about this subject, that’s just not right.

When you break a board, or concrete, or a Louisville Slugger, or anything else routinely used these days in demonstrations of tameshiwari (breaking), you have to follow through on the strike. Indeed, advice commonly given to students learning to break is that they should aim at an imaginary target several inches beyond the actual object, for two reasons. First, doing so helps to make sure that the actual strike occurs closer to the movement’s point of peak biomechanical efficiency. Second, it helps to override our natural tendency (partly psychological, partly reflexive) to slow down ballistic movements such as punches and kicks before they reach full extension, a tendency that protects the connective tissues around our joints.
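To see why that aim point matters, here is a toy model of my own (not anything from io9 or Discover): hand speed rises from zero, peaks somewhere short of full reach, and falls back to zero at full extension, so where the target sits along that curve determines the contact speed. The 70-percent peak and the curve's shape are invented for illustration; the 11 m/s figure is the hand speed quoted further down for Ron McNair:

```python
import math

def hand_speed(x, v_max=11.0):
    """Toy strike profile: speed rises from zero, peaks at ~70% of full
    reach, and falls to zero at full extension (x = 1.0)."""
    peak = 0.7
    if x <= peak:
        return v_max * math.sin(0.5 * math.pi * x / peak)
    return v_max * math.cos(0.5 * math.pi * (x - peak) / (1.0 - peak))

# Aim at the surface and contact comes near full extension, where the
# hand is coasting to a stop; aim several inches beyond and contact
# comes close to the speed peak.
print(f"contact at 98% of reach: {hand_speed(0.98):.1f} m/s")
print(f"contact at 75% of reach: {hand_speed(0.75):.1f} m/s")
```

The numbers are made up, but the shape is the point: following through puts the target near the top of the speed curve instead of at its dying end.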

This argument is not theoretical for me. I’m not in the same martial arts galaxy as Mas Oyama, and I am anything but a breaking ace. But I have studied karate for 17 years, and during that time I’ve broken my share of boards, concrete and rocks, and helped out with demonstrations by other karateka executing far more powerful breaks. They all followed through.

The reason this faulty description of breaking jumped out at me is that I had read it 10 years ago in Discover magazine’s “The Physics of … Karate,” which is one of the two articles that the io9 piece listed as a source. (The other was an entertaining and apparently blameless piece by The Straight Dope.) The Discover piece was if anything even more confused, because it implied that boxers’ punches could not break boards (believe me, they can) and that biomechanical efficiency somehow justified its statement that a karate chop “lashes out like a cobra and then withdraws instantly.”

If you’d like more thorough and quantitative explorations of the physics of breaking, you can get them here and here [pdf]. You can also read what are regarded as a couple of classic papers on the subject:

Walker, J. D., “Karate Strikes,” American Journal of Physics, Vol. 43, No. 10, October 1975.

Feld, M. S., et al., “The Physics of Karate,” Scientific American, pp. 150–158, April 1979.

A surprisingly accurate and informative description of the science behind breaking is also available on this page (with video) for the old Newton’s Apple children’s science TV show, which states:

One key to understanding brick breaking is a basic principle of motion: The more momentum an object has, the more force it can generate. When it hit the brick, [karateka Ron] McNair's hand had reached a speed of 11 meters per second (24 miles per hour). At this speed, his hand exerted a whopping force of 3,000 newtons, or 675 pounds, on the concrete. A slab of concrete could likely support the weight of a few people weighing a total of 675 pounds (306 kilograms). But apply that amount of force concentrated into an area as small as a fist and the concrete slab will break.
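The units in that passage arrived a little mangled, but the figures themselves hang together. Here is a quick check of the conversions, plus an order-of-magnitude impulse estimate; the striking mass and contact time in the last step are my assumptions, not numbers from the show:

```python
# Sanity-checking the Newton's Apple figures quoted above.
v = 11.0                                            # hand speed, m/s
print(f"{v} m/s = {v * 3600 / 1609.344:.0f} mph")   # ~25 mph; the show rounds to 24

F = 3000.0                                          # claimed peak force, newtons
print(f"{F:.0f} N = {F / 4.44822:.0f} lbf")         # ~674 lbf, the quote's "675 pounds"
print(f"675 lb = {675 * 0.45359:.0f} kg")           # ~306 kg, matching the quote

# Rough impulse check, F ~ m * v / dt, assuming ~2 kg of effective
# striking mass (hand plus some forearm) and ~5 ms of contact time:
m, dt = 2.0, 0.005
print(f"impulse estimate: {m * v / dt:.0f} N")      # same ballpark as 3,000 N
```

That last line is the physics behind the momentum sentence: a modest mass moving fast and stopped in a few milliseconds yields thousands of newtons.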

The io9 article correctly points out that when breaking multiple boards or slabs at once for demonstrations, breakers often put small spacers (wooden pegs, pencils, stacks of coins, etc.) between them at the edges so that they do not sit solidly atop one another. Use of spacers does make breaking a given number of boards much easier (see that Straight Dope article to see how much easier). But then it continues:

Hold a piece of paper by both sides in the air and a knife, applied to the middle, will cut right through it. Put it on a smooth concrete floor, and it will be much, much harder to cut the piece of paper using the same knife.

Like everything else in life, breaking is just a primitive, degenerate form of bending. The paper can't bend, and so it doesn't give. A concrete block works the same way.

Trying to explain breaking by drawing analogies with cutting only confuses matters. And what in the world does "Like everything else in life, breaking is just a primitive, degenerate form of bending" even mean? [Update: Good news, everyone! Allan West in comments points out to me what I should have recognized in the first place: the author is quoting Bender of Futurama. Thanks, Allan; your Planet Express gift certificates are on the way.]

Breaking is filled with practices that, depending on your point of view, are either tricks for fooling observers or techniques for maximizing a strike’s visible effect. You don’t just strike with the power in your arm or leg: you organize the movement of your strike to bring in as much power from your legs, hips and upper body as possible, too. When breaking wooden boards, you use pine (not oak, not mahogany) that isn’t marred by dense knots, cut ¾ inch thick and about 12 inches on the diagonal; you hit them to break along the wood’s natural grain. (For some demonstrations, breakers have been known to bake their boards ahead of time to make them more brittle.) One good board, if held securely so that it won’t move on impact, is so easy to break that even those with no training at all can be taught to do it in under five minutes.
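If you want a number behind that five-minute claim, the standard three-point-bending formula gives one: a board supported at its edges and struck in the middle fails at roughly F = 2σbh²/(3L), where σ is the wood's breaking stress, b the width, h the thickness and L the span between supports. A back-of-the-envelope sketch, with every figure below an in-the-right-neighborhood assumption rather than a measurement:

```python
# Rough force needed to snap a held pine breaking board, modeled as a
# three-point bend: F = 2 * sigma * b * h**2 / (3 * L).
sigma = 4e6    # Pa, assumed breaking stress for pine failing along the grain
b = 0.28       # m, board width (~11 in)
h = 0.019      # m, board thickness (the 3/4 inch cited above)
L = 0.25       # m, span between the holder's supports (~10 in)

F = 2 * sigma * b * h**2 / (3 * L)
print(f"~{F:.0f} N (~{F / 4.44822:.0f} lbf) to break one board")
```

A couple of hundred pounds of force, delivered fast to a small area, is well within an untrained adult's reach, which squares with the five-minute lesson. Hit across the grain instead, or swap in oak, and σ jumps severalfold, which is exactly why demonstrators don't.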

When breaking concrete, you use slabs that are relatively narrow and long, so that the strike can hit at a distance from the supports at their edges for best leverage. With some multiple breaks—involving, say, big slabs of stacked, spaced ice—you can count on the weight of the falling, broken pieces on top to help break slabs lower in the stack. And so on.

All these practices are ultimately demonstrations of simple physics, not magical chi, but doing them well—particularly the more demanding ones—also takes strength, training and concentration. The value of tameshiwari to martial arts training is much debated; it has little or no practical relevance to fighting or whatever else people study martial arts for. Done wrong, breaking is an amazingly efficient way of messing up your limbs, potentially permanently. I don’t do it often and I don’t much miss it. But I have seen (and felt firsthand) how much successfully breaking boards can boost a student’s confidence, so who knows.

In closing, Matt, the Skeptical Teacher, demonstrated karate breaking at The Amaz!ing Meeting a couple of weeks ago in Las Vegas, and even showed that he could teach veteran skeptic Joe Nickell to break in under five minutes to prove the lack of mystical woo involved. Here's a clip of a break he did on the main stage.

http://www.youtube.com/watch?v=On-CiwL5x80