Contact Me

You are welcome to email me directly: my first name at JohnRennie.net. Or better yet, use the form at right. Please include appropriate contact information for yourself.

Thanks very much for your interest. I will respond to your message as soon as possible.

Blog posts

The Unnatural Habitat of Science Writer John Rennie

Teetering "Chinese Wall" Falls on Scienceblogs

John Rennie

The science-interested blogosphere has buzzed during the past couple of days with outpourings of rage over the decision by the SEED Media/Scienceblogs management to add to the roster a new blog on nutrition and food science sponsored by PepsiCo. The obvious irony of such an offering wasn't lost on anyone, but here's how deep it runs: I was going to joke, "Coming soon: a heart health blog from Taco Bell," but then I learned that Taco Bell's owner, Yum! Brands, is actually a spinoff from PepsiCo.

Food Frontiers isn't Sb's first dance with corporate-sponsored blogs—for example, its Collective Imagination blog was backed by G.E. What is raising hackles about this one, however, is that the blog's content will be written entirely by PepsiCo's own scientists or others invited by the company or Sb. That authorship arrangement turns the blog into a public-relations instrument rather than an independent journalistic one, at least in the eyes of many of Sb's other bloggers, who argue that it impeaches the editorial integrity of the rest of Sb's blog network. As a result, several respected Sb contributors—Brian Switek (Laelaps), David Dobbs (Neuron Culture), Scicurious (Neurotopia), Rebecca Skloot (Culture Dish) and more—have stopped blogging there, either permanently or temporarily.

Alok Jha of the Guardian has summarized the state of the fiasco to date. Sb's editors comment on the controversy here, and the Guardian has published a too-late-and-supposedly-confidential letter about it that SEED editor-in-chief Adam Bly sent to the Sb blogger community.

As someone who spent almost 15 years as the editor in chief of a science magazine, where these kinds of conflicts between editorial and advertising always threatened to rear their heads, I'm not unsympathetic to the bind that Sb found itself in. The pressure to find new sources of revenue as traditional formulas fail can be tremendous every day. Advertorials, special sections and custom-published issues are all examples of solutions that publishers have found for helping advertisers spread their messages outside of obvious ads without breaching the so-called "Chinese wall" that is supposed to keep editorial and advertising apart. In digital media, there can be particular avidity to innovate in ways that blur, if not violate, the traditional distinctions between independent editorial and advertising, if only because hyperlinks and other features of the web can so effortlessly pop users between pieces of content. Potential ad clients make demands that push ever harder at the envelope, and editors these days must therefore frequently judge whether some new idea is ethically acceptable.

Unfortunately, in this case, Sb chose poorly, and the uprising among its bloggers is the result. I don't question the motives of the editors in introducing Food Frontiers, nor even the motives of PepsiCo and the scientists it enlists to write it. The kinds of information and insights that workers in that field could offer might be genuinely welcome and informative. But none of that is the problem, which is why the seemingly reasonable wait-and-see responses by some Sb bloggers miss the point. Sb's error becomes obvious if one looks at the rules that print editors would apply to the analogous situation.

(And no, I'm by no means implying that digital media should shut up and listen to print media because it has all the answers. Far from it: print media needs to listen to digital these days more often than the reverse. Nevertheless, the ethics of journalism and the implicit covenants that publications have with their readers are topics with which traditional media have decades of experience.)

Two cornerstone principles should rule in all matters of editorial/advertising integrity. First, if a publication is genuinely journalistic, readers should never have cause for doubt that the editors have set some interest above honestly and truthfully informing them. Therefore, second, readers should always be able to distinguish immediately between editorial messages and those sponsored by advertisers.

So when advertiser-sponsored content makes its way into a publication, it should be marked "advertorial." It shouldn't be indexed as part of the table of contents. It shouldn't be written by editorial staff writers or illustrated by staff illustrators. And—of particular relevance in the Sb case—it should not ape the design of the editorial content.

The American Society of Magazine Editors maintains a set of guidelines to help editors (and publishers) stay in the best ethical graces of the profession. Not that editors and publishers don't in fact try to bend those rules, or outright ignore them. (Forbes magazine, for example, rather famously refuses to be bound by ASME's guidelines.) Nor is it always easy to determine whether a particular sponsorship opportunity crosses the line. The guidelines don't end discussion; but they are a place to start it.

(I see that Paul Raeburn at Knight Science Journalism Tracker has touched on many of these same points. [Update: more here, too.])

[Update added moments after first posting: What also makes the confusion worse in this case is something peculiar to the web: indexing by Google. Food Frontiers is indexed along with the rest of the Sb blogs by Google News, which means that whatever distinctions Sb makes within its own site for keeping it apart from the unsponsored blogs, they will be irrelevant for the outside world.]

In this case, with whatever innocent intentions, Sb violated the spirit or letter of too many of those guidelines. Then it compounded the problem by managing the inevitable controversy badly, not letting its stable of bloggers know who their new neighbor would be in advance. Sb's protestations that it wants to fix the presentation and context of Food Frontiers to general satisfaction are sincere, I think, but it's face-palmingly obvious in retrospect that some of those adjustments could have been foreseen before the blog's launch if the Sb blogger community had known more of what was to come. I wish them luck with it now, and hope the Sb blog network survives without too much attrition.

But no one elsewhere in the media landscape should feel too self-righteous about this situation. Without simple, reliable ways of drawing revenue, commercial journalistic organizations of all stripes will be faced with tough choices, and many more editors will lose sleep over whether they made the wrong compromises for the right reasons. The Chinese wall can fall on anyone.

Fighting Words: Sounds Strong or Sounds Specious?

John Rennie

Who sounds stronger: James Earl Jones or Mike Tyson? Who do you think would win in a fight? Most of us might rather have Darth Vader narrate a documentary, but the smart money says that in an actual knock-down, drag-out fight, you should bet on the guy who bit off Evander Holyfield's ear. But apparently not all evolutionary psychologists would agree.

A new paper published in the Proceedings of the Royal Society by Aaron Sell, Leda Cosmides, John Tooby and their colleagues argues that we humans have evolved a faculty for assessing people's physical strength—and indirectly, their fighting prowess—just from the sound of their voices. The idea sounds provocative and has a certain prima facie credibility, but unfortunately, my assessment is that this amounts to just one more in the long line of speciously reasoned papers that give evolutionary psychology a dubious reputation. Here's why.

First, the starting premise of the study is that because natural selection would have favored strong fighting abilities among our ancestors, any individuals who could quickly assess others' fighting prowess would have had a big advantage: depending on their inclinations, they could then either hide from the strong, pound on the weak or mate with the muy macho. For most of us still recovering from Post Traumatic Playground Disorder, this may make intuitive sense. And the authors cite a variety of papers representing "[m]ultiple converging lines of evidence," all of which might support the assumption (though their support still sounds circumstantial and debatable to me). Nevertheless, it is still just an assumption, and one of which we should be suspicious precisely because it conforms so easily to our current social experience.

Sure, if one's view of early human evolution is dominated by "nature red in tooth and claw" concepts of competition, in which Fred Flintstone has to wrestle a sabertooth and punch out Barney Rubble before dragging Wilma away by the hair, fighting ability would unquestionably be a plus. But combativeness also has its downsides. People who are good at fighting may be more likely to be in fights, which means that they may be killed or disabled sooner on average than those with more moderate skills and aggression. If early humans hunted in groups with weapons, the best hunters might not actually be the strongest: they might do better by virtue of their leadership, their strategies, their ability to maintain peace in the tribe. Not to mention that the women might be looking for something other than fighting prowess in a mate.

None of these points disqualifies physical strength or fighting ability as an important selective advantage. But they do mean that one can't simply say that strength and fighting ability automatically constitute important criteria for our ancestors' success.

Next, the researchers focus on upper body strength as a proxy for fighting ability. They offer reasons why the two correlate reasonably, and I don't have any objection to using upper body strength that way. Bear in mind, however, that this strength is only a proxy: any statement the researchers ultimately make about the relation of the voice to fighting ability is no stronger than that connection.

So then the researchers asked test subjects to rank unseen speakers by perceived strength, and found that the subjects were indeed adept at doing so, even when controlling for height and weight, and even when the voices spoke foreign languages. In a further apparent corroboration of their hypothesis, the researchers also found that people were better at assessing strength from males' voices than from females'—because, of course, it would have been so much less important to assess women's physical strength or combativeness. (Of course, if there had been no difference in the faculties for judging men's and women's strengths from the sound of their voices, that might have only established just how fundamental and important this capability must be. So....)

Yet, again, what do these results truly show? At best, they establish only that people's voices provide good cues about how others may perceive their physical strength. But the leap from there to an evolutionary just-so story about natural selection favoring people's abilities to make this discrimination is a big one. People's voices may correlate well with their perceived strength, but let's face it, people's voices correlate well with lots of characteristics that may also correlate (causally or otherwise) with fighting prowess. For example, perhaps our perceptions of others' strength also correspond to our sense of their self-confidence, or their social standing. Those may or may not be driven in part by physical strength; one therefore doesn't know for certain which of many such traits might have figured into natural selection (if any did).
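To make that confounding worry concrete, here is a toy simulation of my own (not anything from the paper): a single latent trait, standing in for something like self-confidence, drives both the vocal cues and the measured strength, and listeners hear only the voice. All variable names and effect sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# One latent trait (call it self-confidence) drives everything.
confidence = rng.normal(size=n)

# Vocal cues and measured upper-body strength each track the latent
# trait plus independent noise -- there is no direct voice-strength link.
voice_cue = 0.7 * confidence + rng.normal(size=n)
strength = 0.7 * confidence + rng.normal(size=n)

# Listeners rate "strength" using nothing but the vocal cue.
ratings = voice_cue + rng.normal(scale=0.5, size=n)

print(np.corrcoef(ratings, strength)[0, 1])
# ~0.3: a solid correlation, with no strength-assessment faculty
# anywhere in the model -- only a shared confounder.
```

The point isn't that this is what happened; it's that the data reported are equally consistent with a model containing no dedicated strength-detection faculty at all.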

I might be inclined to give this study more benefit of the doubt if it did not fall so squarely in line with many other evolutionary psychology papers—including ones from Leda Cosmides and John Tooby—that gravitate back to support for a view of human origins heavily emphasizing the overwhelming dominance of violent conflict, aggressive hunter males, more passive females and other clichés. Almost every other exploration of evolutionary biology seems to turn up surprises hidden from us in part by our cultural blinders; and yet human evolutionary psychology usually seems to shore up the cultural status quo. Doesn't that seem the least bit suspicious?

I'll repeat something I've said before: I completely want to believe in evolutionary psychology. I fully believe that someday we will have a strong, sound science of evolutionary psychology that offers real insights into human behavior. I support and encourage researchers' efforts today to start making that science a reality. But unfortunately, right now, I don't think we know enough in detail about either human psychology or human evolution to produce very persuasive theories in this area.

Posted via email from Rennie's Other Last Nerve

Marmaduke and Other Box-Office Dogs

John Rennie

The scathing review of the movie Marmaduke on io9.com begins by describing it as "so self-evidently bad that slamming it would be tautological," and then it just gets nasty. (The justification for reviewing Marmaduke on a science fiction site? "A sentient Great Dane who holds his family hostage falls well within the boundaries of speculative fiction — it's more or less like Harlan Ellison's A Boy and His Dog, but without the semen farming.") But never mind that. More intriguing was the link to a page on Box Office Mojo listing the fifty "Worst Wide Openings" since 1982.
I'm not sure what strikes me as most remarkable about this list. Obviously, it consists entirely of films that did awful business; even so, I don't think I'd ever imagined just how badly some of them had done. For example, if I think of science fiction films starring Eddie Murphy that did abysmally, The Adventures of Pluto Nash first leaps to mind. But that's not even on this list. Instead, Meet Dave—you remember, the one with Murphy playing an Eddie Murphy-shaped spaceship full of little people?—lands at number three: it opened with $5.2 million in 3,011 theaters and ultimately grossed only $11.8 million. Those are slit-your-wrist awful numbers.
On the other hand... when you consider that these are the worst movie openings in nearly 30 years, what shocks me (as a complete outsider to the movie biz) is just how much money most of these films made. No doubt most of them did lose money—probably great buckets of money. But it also seems as though if producers can manage to get a film into 3,000 theaters for its opening weekend, they are almost guaranteed to open with $10 million and to gross at least $35 million.
Again, those are clearly not earnings sufficient to offset gigantic losses. But what seems more amazing? That even something like The Bad News Bears remake could earn about $33 million? Or that Battlefield Earth couldn't even bring in $22 million?
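Just to put rough per-screen numbers on that claim, here's the arithmetic, using only the figures cited above:

```python
# Per-theater math, using only the figures cited in this post.
meet_dave_open, meet_dave_theaters = 5.2e6, 3011
print(f"Meet Dave: ${meet_dave_open / meet_dave_theaters:,.0f} per theater")
# -> about $1,727 per theater for the entire opening weekend

# The apparent floor the list suggests for any 3,000-theater opening:
floor_open, floor_theaters = 10e6, 3000
print(f"Floor: ${floor_open / floor_theaters:,.0f} per theater")
# -> about $3,333 -- even the biggest flops apparently fill some seats.
```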

Posted via email from Rennie's Other Last Nerve

“Stasists”? No. “Deniers”? Yes.

John Rennie

My previous post about the polar bear photo embarrassment, which Andy Revkin wrote about, gives me occasion to comment on Andy’s use of “stasists” for the people opposed to the arguments about climate warming. He favors that term, I gather, because it is more neutral than names like “denier,” and because he thinks it speaks to the goal common to the diverse camps of people on that side of the argument: to keep on doing things as we have been. Much though I appreciate Andy’s ongoing good-faith journalistic efforts to keep an even keel in these contentious discussions, after some consideration, I still disagree about the appropriateness of that label. (But then again, I’m no fan of the “pro-life” label for people who are objectively anti-choice, either.)

First, let’s acknowledge that the terminology for both sides stinks. James Hansen, Stephen Schneider, Rajendra Pachauri, Al Gore and others warning about the evidence for anthropogenic global warming (AGW) are obviously not “warmists” in the sense of wanting more warming, any more than Marc Morano and Bjorn Lomborg are “coolists” who want more cooling.

The problem is compounded by the range of nonexclusive reasons why people ostensibly land in the stasist/denier camp. First, there are the many who may start off with no particular opposition to the idea of AGW but who sincerely do not know or do not understand the evidence for it (I would like to think they constitute much of the U.S. public). Then there are the ones who outright deny the climate is changing; the ones who do not think human activity could affect the climate; the ones who recognize changes in the climate but insist they must be natural; the ones who concede the possibility of human influence but doubt the warming will continue; the ones who accept the fact of significant warming but doubt it will be disruptive; the ones who grant AGW could be bad but say responding to it in the future makes more sense than trying to prevent it; and the ones who simply argue that More Research Is Needed and refuse to act until their agnosticism is satisfied. And of course within each of those segments, one could split out still smaller shadings of opinion. (Baskin-Robbins doesn’t have as many flavors of ice cream as Andy’s stasists have flavors of opposition.)

Finding one name that honestly and accurately reflects all those points of view is difficult. Yet with the exception of those who can plead genuine ignorance (and who are almost never the ones publicly arguing against climate science or policy), pretty much all of those other positions involve some level of active denial of, or disagreement with, copious scientific evidence or risk-management precedent. So at a purely existential level, they define themselves to my mind as the “uninformed” and the “deniers.”

(I suppose the latter could also be called “resisters,” but frankly, the dishonesty many of them exhibit in repeatedly using debunked arguments persuades me that the overtones of “denier” are more accurate. Moreover, since I suspect that many of them are simply committed a priori to rejection of the climate science and its consequences out of their own beliefs, the term “denialist,” with its creedlike associations, also makes considerable sense.)

But here is the crux of my argument: I think it’s a mistake when characterizing these people as deniers to try to homogenize what they think about AGW or climate science because they are all over the place on those. Rather, what they all deny is any need for a pro-active response to AGW.

Isn’t that essentially Andy’s argument for calling them stasists? It’s close, but here is where I fault his terminology. First, these critics are not all calling for stasis in policies: those like Bjorn Lomborg are happy to see any number of policy responses, but only after the damage of climate change is more evident. They simply don’t want anti-AGW activism to rock the boat now. Second, I think “stasist” is Orwellianly misleading as a label for a position synonymous with acceptance of massive change. Put it this way: would you consider someone a “biodiversity stasist” if he advocated continuing to cut down rainforests at just the current rate?

As so many climate scientists and others have already said, the time for action on global warming is already upon us. Let’s be clear, then, that the division in the arguments is between those committed either to climate action activism or to climate action denial (or resistance or opposition). Forget trying to find some neutral, anodyne term in the interest of reconciliation. The weight of evidence runs against most of the deniers' positions, so don’t hesitate to use a term that puts them on the defensive.

Polar Bear Pic was Bad... But So What?

John Rennie

Andy Revkin, on his Dot Earth blog, has already done a fine job of summarizing the self-defeating gaffe of Science publishing the new letter from 255 National Academy of Sciences members, which rebukes the misleading political assaults on climate research and its investigators, with a Photoshopped image of a polar bear on an ice block. Because of this frustrating error, the attention that the authoritative scientific statement deserves is instead being diverted to the flawed rhetoric of its presentation (a mistake introduced by the publication, not by the NAS). The incident has become a perfect cameo of the larger climate-change issue: scientists speak out on the state of the research with facts and substantive arguments, and opponents jump on any small defects in what’s said to argue, honestly or otherwise, that the climate science is wrong, corrupt or both.

Of course, the irony of criticizing Science’s use of the polar bear artwork is that we forget that in the eyes of the people most incensed by it, literally no effective image would have been acceptable. I’m not arguing that the polar bear picture wasn’t a particularly bad choice: it was, because it made the critics’ job much too easy. But what images would have been above reproach? Photos of shorelines racked by hurricanes or floods? Icebergs calving off polar glaciers? As individual incidents, none of those can be pinned definitively to global warming, so the critics will always call the images sensationalizing.

How about a photo of a polar bear on a larger ice sheet? The critics don’t think polar bears are endangered, wouldn’t really care if they were, and don’t accept that global warming is the real reason for their problems. Ditto for any other climate-endangered species. Care to show photos of people whose livelihoods are jeopardized by climate change? Surely then you are ignoring the flexibility and ingenuity of human economies.

How about just a presentation of the scientific data, then? Maybe, say, a nice hockey-stick graph of rising temperatures over the past millennium? Hmm, apparently that’s not acceptable either, no matter how well vindicated its conclusions might be. Or maybe something showing the rapidly rising levels of CO2 in the atmosphere? But surely then you’ll be ignoring the fact that CO2 levels were higher during the Carboniferous, and if it was good enough for Apatosaurus, it will be good enough for us, too.

No, none of those is beyond controversy. Here’s what you need to show to keep the critics happy: Big, empty photos of the sun. Photos of scientists pulling ice cores out of the ground or otherwise engaged in bland, unintelligible, wonkish busywork. Maybe a big group photo of those 255 NAS members standing on the steps of a building in Washington. Photos of the IPCC reports (riveting!). Maybe some artwork of a thermometer creeping into the 90s with a big question mark beside it (awesome!).

Face it: to the people committed to rejecting your message, no image that helps you sell your message persuasively will ever be acceptable—because accepting it would be tantamount to conceding some point in your favor. And if the climate deniers have proven anything, it’s that they are willing to oppose the global warming issue at every possible level, from denying the fact or possibility of global warming right up through dismissing the possibility or desirability of responding to it proactively.

By the way, just as an experiment, consider if the shoe were on the other foot. Suppose 255 scientists released an official statement dismissing global warming as a sham. Are there any images they could use to illustrate that argument forcefully that would elicit an equal sense of outrage and disappointment from others on their own side? Somehow, I doubt it; these are people who have embraced “Al Gore is fat!” as a rallying cry.

Blood Simple: Mammoths, mice, malaria and hemoglobin

John Rennie

Oh, the allure of a misleading headline. When I saw this story on BBC News, “Mammoths had 'anti-freeze blood', gene study finds,” I thought for a moment that scientists had discovered those Ice Age behemoths had secreted some glycerol-like compound into their tissues to prevent freezing. Such a finding would have been genuinely astounding because, as far as I know, that trick is seen only among certain polar fish and overwintering insects; it would have been a first among mammals. In reality, the surprise is not that the mammoths’ blood resisted freezing but rather that it cleverly continued to do what blood vitally must do: transport oxygen to needy tissues, even at low temperatures.

Kevin L. Campbell of the University of Manitoba and his colleagues reached this conclusion through a nifty piece of paleogenomic molecular biology, as they reported in Nature Genetics. Their technique’s fascinating potential to help biologists learn about the physiologies of extinct creatures has already drawn considerable attention, but the mechanism behind the hardiness of the mammoths’ blood also helps to highlight a common way in which evolution innovates.

Mammoths display obvious features that must have helped them stay warm in the brutal subzero temperatures of the Pleistocene ice ages, such as long, shaggy coats and small ears. They may well have had less obvious ones, too, like the arrangement of blood vessels seen in the legs of caribou, which allows countercurrent exchange to minimize the loss of body heat while the animals stand in snow. Nevertheless, Campbell wondered whether the mammoths’ blood might have been adapted as well, because hemoglobin releases oxygen into tissues only sluggishly at low temperatures.

By extracting the hemoglobin gene from DNA in well-preserved mammoth remains and inserting it into bacteria, Campbell and Alan Cooper of the University of Adelaide were able to replicate samples of the mammoth’s hemoglobin. And sure enough, in subsequent tests, the resurrected hemoglobin proved to release oxygen much more consistently across a wide range of temperatures—even glacially low ones.
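To see why ordinary hemoglobin fails in the cold, here is a minimal sketch using the integrated van’t Hoff relation to shift a hemoglobin’s P50 (the oxygen pressure at which it is half-saturated) with temperature. The heats of oxygenation below are illustrative, textbook-scale guesses of mine, not the measured values from Campbell’s paper:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def p50_at(temp_c, p50_37c, delta_h):
    """Shift a hemoglobin's P50 (mmHg) from 37 C to temp_c using the
    integrated van't Hoff relation, assuming a constant overall heat
    of oxygenation delta_h (J/mol; negative = exothermic binding)."""
    t_ref, t = 310.15, temp_c + 273.15
    return p50_37c * math.exp((delta_h / R) * (1 / t - 1 / t_ref))

# Illustrative guesses: about -50 kJ/mol is textbook-scale for human
# hemoglobin; the mammoth protein's weaker temperature sensitivity is
# represented here by an arbitrary -20 kJ/mol.
for label, dh in [("human-like  ", -50e3), ("mammoth-like", -20e3)]:
    print(label, round(p50_at(10, 26.0, dh), 1), "mmHg at 10 C")
# human-like   ~4 mmHg: affinity soars in the cold; oxygen is barely released
# mammoth-like ~12 mmHg: release still works at near-freezing temperatures
```

The exact numbers don’t matter; the point is that a hemoglobin with weaker temperature sensitivity keeps letting go of its oxygen when the blood runs cold.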

Perhaps it sounds surprising that something as fundamental to mammalian physiology as hemoglobin chemistry would be subject to evolutionary revision. Surely the mammoths might have survived the cold just as well by evolving more hair or thicker insulation. Yet hemoglobin chemistry is actually a feature particularly well suited to modification—and one that has been modified many times throughout evolutionary history. The key is that the genes making the globin proteins have lent themselves to frequent duplication, which opens up the opportunity of variation among the copies and specialization in their activities.

Humans, for example, have several different types of globin genes (designated alpha, beta, gamma and so on) on chromosomes 11 and 16 that may be used in various combinations to manufacture variant forms of the tetrameric (four-chain) oxygen-transport protein in red blood cells. For our first 12 weeks or so in utero, our bodies make embryonic hemoglobin, then switch to fetal hemoglobin, which can remain a major component of newborns’ blood for six months. Mammalian fetuses need hemoglobin that takes up oxygen very avidly because they must steal it away from their mothers’ blood as it circulates through the placenta. So fetal hemoglobin does not respond to a chemical (2,3-bisphosphoglycerate) that reduces the oxygen affinity of adult hemoglobin.
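For a rough sense of the size of that effect, here is a sketch using the Hill equation with textbook-scale values (a P50 of about 26 mmHg for adult hemoglobin and about 19 mmHg for fetal, with a Hill coefficient near 2.7; the exact figures vary by source):

```python
def hill_saturation(p_o2, p50, n=2.7):
    """Fractional O2 saturation of hemoglobin via the Hill equation."""
    return p_o2 ** n / (p50 ** n + p_o2 ** n)

# Textbook-scale P50s: ~26 mmHg for adult hemoglobin, ~19 mmHg for
# fetal hemoglobin (which is blind to 2,3-BPG).
p_placenta = 30.0  # mmHg, a rough oxygen tension in placental blood
print(f"adult: {hill_saturation(p_placenta, 26.0):.0%}")  # ~60%
print(f"fetal: {hill_saturation(p_placenta, 19.0):.0%}")  # ~77%
# At the same low oxygen tension, the fetal protein loads far more O2 --
# which is how a fetus "steals" oxygen across the placenta.
```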

Fetal hemoglobin has actually been the salvation of many adults who suffer from sickle-cell anemia. These people make a defective form of adult hemoglobin that distorts their red blood cells into the elongated, sickle shape that gives their condition its name; these sickle cells can clump together and block blood vessels, leading to painful and frequently fatal complications. A treatment for sickle-cell anemia, however, is to give a patient hydroxyurea and recombinant erythropoietin, which stimulate the body to begin making fetal hemoglobin again, alleviating some of the problems.

Of course, sickle-cell anemia is itself the result of a mutant variation in the copies of the genes that make the beta subunit of hemoglobin. As is well known today, the genetic trait for sickle cell seems to have originally taken root in populations in sub-Saharan Africa and other parts of the world where mosquito-borne malaria is endemic: people who are carriers of the trait are somewhat more resistant to malaria infection. The prevalence of sickle-cell genes in those populations therefore may represent an adaptive, evolved response—albeit a cruel one—to the harsh burden of malaria. Indeed, the genetic blood disorders called thalassemias, which are prevalent in many Mediterranean ethnic groups, may similarly represent the result of natural selection for improved survival against malaria (although unlike sickle-cell anemia, which involves an abnormal hemoglobin, thalassemias are caused by underproduction of a normal hemoglobin).

Looking outside the human species, one finds that the globin proteins have a staggeringly wide range of forms and chemistries, representing an incredible amount of evolutionary experimentation that long preceded the development of our species’ own tidy system. Horseshoe crabs aren’t aristocrats but they are literally blue bloods, with a blood pigment that carries copper instead of iron. Sea squirts have a green blood pigment based on vanadium.

Many animals (including humans), in addition to some form of hemoglobin, also have myoglobin, a globular protein that helps muscle tissue hang onto the oxygen it needs while working. (Myoglobin is the reason that meat is red, my fellow steak lovers.) Some dopaminergic neurons and glia in the brain also seem to contain hemoglobin, possibly to ensure the brain’s own access to oxygen under suffocating conditions. And hemoglobin itself also seems to have functions quite apart from oxygen transport: its antioxidant properties and interactions with iron seem to be used by cells in the kidneys and immune system.

All this variation suggests that hemoglobin genetics might not have been such an odd target for natural selection in the evolution of mammoths: it is an extremely malleable protein with diverse capabilities, and because organisms often contain multiple copies of it, variants can creep into a population and survive long enough to be tested by selection.

In fact, another example not unlike the mammoths was reported in the scientific literature last year. Last August in the Proceedings of the National Academy of Sciences, Jay Storz of the University of Nebraska-Lincoln and an international team of collaborators described their genetic comparison of deer mice living in lowland prairies and in the mountains. The mice were identical in every respect except for just four genes—all of which boosted the oxygen capacity of the mountain mice’s hemoglobin.

Mammoths and deer mice may be at opposite ends of the mammalian size spectrum, but similar principles of evolution apply at all scales.

Principled Rightwing Absurdity

John Rennie

Nothing I could write could better capture the preposterous incoherence of current U.S. conservatism than this passage by Digby at Hullabaloo:

Next time someone in a tri-corner hat starts waving the constitution in your face, ask them about today's Senate Homeland Security hearings, where the conservatives had a complete fit at Mayor Michael Bloomberg's complaint that people on the terrorist watch list can buy any gun they like and there's nothing anyone can do about it --- while at the same time they all thoughtfully pondered whether or not we should strip them of their citizenship.

Energetically Batty

John Rennie

A recent discovery about the evolutionary origins of bats relates, at least indirectly, to a problem now threatening many bat species throughout North America with extinction.

As they reported in the Proceedings of the National Academy of Sciences, Ya-Ping Zhang of the Kunming Institute of Zoology and colleagues have found evidence that mitochondrial genes in bats, which are responsible for their metabolic use of energy, have been under heavy selection pressure since early in their evolutionary history. Such a conclusion makes considerable sense on its face because bats’ way of life is highly energy intensive.

Nevertheless, the finding also helps to sketch in some of the mysteries of the evolution of bats. Bats are hard to trace in the fossil record because they don’t fossilize well; they appear suddenly in Eocene strata from about 50 million years ago looking much as they do now, which leaves many of the steps leading up to that point obscure.

Over the past decade, several discoveries have helped to firm up the view that the ancestors of bats spent their lives leaping and hopping through vegetation before they could fly; that the evolution of their wings was associated with key changes in certain genes for digits; that their ability to echolocate seems to have emerged after their capacity for flight; and that the larger bats called Megachiroptera are indeed descendants of the smaller Microchiroptera and not the convergently evolved cousins of primates. The place of the metabolic changes in that scheme, though, was largely speculative. The demands on bats’ mitochondria might have been expected to emerge only once bats started flying, but instead it seems that their ancestors already made a living as an extra-hyperkinetic group of insectivores leaping through the trees and shrubs.

All that energy expenditure comes at some cost to the bats, however. Most obviously, they need to eat colossal numbers of insects to satisfy their energy needs. During the cold winter months when insects become more sparse, bat species that don’t migrate to warmer climes must overwinter in caves, suspended in a state of torpor with their metabolisms at a crawl.

Unexpectedly, those energy demands may also figure into the catastrophic epidemic of white nose syndrome that has been killing bats in North America in recent years. The bodies of the dead bats studied by researchers have commonly been marked by heavy accumulations of the white nose fungus, which in severe cases can damage the overwintering animals' skin and wings, leaving them unfit. Yet the observed level of infection and the timing of the bats' deaths suggest that this obvious explanation isn't adequate: in many cases, the fungus should have posed only an itchy nuisance, maybe akin to aggressive athlete’s foot in humans.

That benign comparison may break down when applied to bats in torpor, however. Bats going into hibernation carry stockpiles of fat to sustain them, but those stockpiles are often just sufficient to get them through the winter season. Moreover, the bats do not consume those fat reserves at a consistently slow pace: every two or three weeks throughout their hibernation, the bats wake out of torpor, excrete, groom themselves, settle into genuine sleep for a while, then slow down their metabolisms again. These periods of activity seem to be essential to hibernating mammals, perhaps because they help to reinvigorate their immune systems. Eighty percent or more of the energy that the bats consume during their hibernation is spent during these brief intervals.

One theory therefore being investigated by researchers such as Craig Willis at the University of Winnipeg is that the fungus makes the animals rouse from torpor more often to groom themselves, and in the process makes them burn through their crucial fat reserves too quickly. Long before winter is over and insects are plentiful again, the bats starve.
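Here’s a toy energy budget showing how that hypothesis works arithmetically. Every parameter below is an invented, illustrative number rather than a measured value; the point is simply that because arousals dominate the budget, shortening the arousal interval wrecks it:

```python
def days_fat_lasts(fat_kj, arousal_every_n_days,
                   torpor_kj_per_day=0.2, arousal_cost_kj=5.0):
    """Days until a hibernating bat's fat reserve runs out. All
    parameters are invented, illustrative numbers, not measurements."""
    days = 0
    while fat_kj > 0:
        days += 1
        fat_kj -= torpor_kj_per_day           # cheap, slow torpor burn
        if days % arousal_every_n_days == 0:
            fat_kj -= arousal_cost_kj         # each arousal is hugely costly
    return days

print(days_fat_lasts(100, 15))  # ~195 days: squeaks through a long winter
print(days_fat_lasts(100, 5))   # ~85 days: starves with months still to go
```

Triple the arousal frequency and the same fat store lasts less than half as long, which is exactly the shape of the white nose hypothesis.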

North American bat species may be more vulnerable to white nose fungus than their European counterparts are because they roost in far larger colonies and so may transmit the fungus more widely throughout their population. It’s entirely possible, however, that the smaller European colonies are in fact an evolutionary consequence of white nose fungus (or some equivalent affliction) selecting against larger gatherings in the distant past.

This is only my own speculation, but if a highly amped metabolism has long been an adaptive feature of bats and their immediate ancestors, then perhaps for as long as these creatures have been using torpor to survive the cold months, something like white nose fungus has been cropping back colony sizes periodically. Unfortunately, given the difficulty of finding good bat fossils, verifying such a hypothesis may never really be feasible. But who’s to say what other deductions might yet be possible from the DNA record?

(For the information about white nose syndrome and its possible mechanisms, I'm indebted to science writer Erik Ortlip, now of Environmental Health News, who wrote about this subject while he was my student in the Science, Health and Environmental Reporting Program at the Arthur L. Carter Journalism Institute of New York University. Thanks, Erik!)

Gaaah!!! Nature! Bicephalous edition

John Rennie

Via SFgate:

Two heads are not necessarily better than one: A double-headed bobtail lizard born in Perth, Australia, has problems crawling because its hindlegs get conflicting signals from each brain. Also the bigger head keeps trying to eat the smaller one.

Update: By the way... "Bicephalous"? "Dicephalous?" Sources I've consulted seem to accept both, and if anything accept the former as more common, but the latter seems more consistent in its use of a Greek prefix. Thoughts, my little wordbugs?

Why Stephen Hawking is Wrong

John Rennie

(If this post had a subhead, it would be: Dr. Strangehawking, or How I Learned to Stop Worrying and Love the Aliens.)

The Discovery Channel’s new series Into the Universe with Stephen Hawking debuted last night with an episode about Aliens. I’ll plead guilty to not having seen it—the clips appearing on Discovery’s web site seemed full of both CGI goodness and speculation relatively unencumbered by annoying facts, so I’m not sure what to make of it. But one of the claims that Hawking apparently makes in the program has drawn considerable media attention, as in this BBC story:

Aliens almost certainly exist but humans should avoid making contact, Professor Stephen Hawking has warned.

In a series for the Discovery Channel the renowned astrophysicist said it was "perfectly rational" to assume intelligent life exists elsewhere.

But he warned that aliens might simply raid Earth for resources, then move on.

Prof Hawking thinks that, rather than actively trying to communicate with extra-terrestrials, humans should do everything possible to avoid contact.

He explained: "We only have to look at ourselves to see how intelligent life might develop into something we wouldn't want to meet."

Hawking’s concern seems to be the Independence Day scenario: that spacefaring species might roam the universe in gigantic motherships, pillaging planets for their resource wealth and exterminating those worlds’ inhabitants in the process. (So for the rumored sequel to Independence Day in the works, does this mean Hawking could take Jeff Goldblum’s role? Or Brent Spiner’s?)

Naturally, Stephen Hawking’s speculations on most matters are probably far more likely to be right than are mine, but in this case I’ll argue that he’s most probably wrong about alien conquests of this sort being a meaningful danger. First, though, I have to make two asides.

  • If we’re supposed to start worrying about dangers from space, I refuse to give one second’s attention to the possibly dubious threat from aliens until after humanity starts taking seriously the statistical certainty that an asteroid will eventually cause a catastrophe on earth again unless we avert it.
  • Back in the ‘70s, Carl Sagan argued in The Cosmic Connection, as other scientist-authors did elsewhere, that any alien civilization advanced enough to contact and visit us would surely have evolved past its own barbaric, warlike phase and would therefore meet us in peace. That view may seem naïve in retrospect, but given how little the available facts have changed, the current bent toward a more paranoid outlook probably says something about how society has changed in recent decades.

Most biologists and astronomers would probably agree with Hawking’s analysis that, insofar as we can make informed guesses about the odds, alien life is probably out there: the Milky Way alone probably brims with at least tens of billions of planets and unguessable numbers of moons; niches conducive to life’s evolution are likely to be legion; and the galaxy’s nearly 14-billion-year age offers plenty of time for civilizations to rise (and fall). In fact, the numbers so lopsidedly favor extraterrestrial life that the greater conundrum is the Fermi paradox: why haven’t we already seen clear evidence of alien intelligence? But put that question away for now.
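To see both how lopsided and how rubbery those numbers are, here’s a toy Drake-style tally; every parameter in it is a guess of mine, which is precisely the trouble with such estimates:

```python
# Every parameter here is an illustrative guess -- which is exactly
# the problem with estimates of this kind.
stars            = 2e11   # stars in the Milky Way, give or take
planets_per_star = 1.0    # "tens of billions of planets," conservatively
f_habitable      = 0.1    # fraction offering a niche conducive to life
f_life           = 0.1    # fraction of those where life actually arises
f_intelligent    = 0.01   # fraction of those that evolve intelligence
f_civilized      = 0.1    # fraction of those that build detectable tech
f_alive_now      = 1e-4   # fraction of galactic history a civ is detectable

n = (stars * planets_per_star * f_habitable * f_life
     * f_intelligent * f_civilized * f_alive_now)
print(f"{n:,.0f} detectable civilizations at any given moment")
# ~200 with these guesses; nudge any two fractions down tenfold and the
# galaxy looks empty. The Fermi paradox lives in those exponents.
```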

Let’s also assume that Sagan was wrong and that an advanced alien civilization can be hostile to us, either through active malevolence and desire for conquest or through the utter disregard we would show for an anthill on a lot zoned for a shopping mall. Let’s completely stipulate the existence of Hawking’s resource-hungry aliens. When would they be trouble?

If the hypothetical aliens are motivated purely by malice and must on principle exterminate us if they know about us—Daleks, ahoy!—then by definition they are a threat. But that argument begs the question: it just posits the existence of the baleful aliens we’re wondering about. They’re not impossible, but we know nothing about their probability a priori.

On the other hand, if these aliens are motivated by a need for resources, we can ask, “What resources do we Earthlings have that they would want so much?” And in those terms, it’s not obvious we have much to offer.

Water? The Oort cloud is lousy with the stuff. Earth’s oceans of water probably came from cometary bombardments during the early days of the solar system. Why wouldn’t they go to the original aquifer?

Minerals? University of Arizona planetary scientist John S. Lewis has estimated [pdf] that just one asteroid about a kilometer wide would contain about 30 million tons of nickel, 1.5 million tons of metal cobalt and 7,500 tons of platinum. The solar system may have a million asteroids of roughly that size. One good-sized metallic asteroid could contain dozens of times as much metal as has ever been mined out of the earth. True, the asteroids are dispersed across a huge volume of interplanetary space—but for an alien civilization capable of getting to our solar system, finding, capturing and mining an asteroid should be elementary. And remember, all this mineral wealth is instantly available to them without the trouble of pulling it all back out of Earth’s gravity well.
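As a sanity check on Lewis’s numbers, here’s the back-of-the-envelope version, assuming a roughly chondritic body; the density and abundance figures are textbook-scale assumptions of mine, not values from his paper:

```python
import math

# Sanity check on the figures above, assuming a ~1-km-wide body with
# an ordinary-chondrite-like density and metal abundances. All the
# composition numbers are textbook-scale assumptions, not values
# taken from Lewis's paper.
radius_m = 500.0
density_kg_m3 = 3300.0
mass_tons = (4 / 3) * math.pi * radius_m ** 3 * density_kg_m3 / 1000.0

for metal, mass_fraction in [("nickel", 1.8e-2),
                             ("cobalt", 9.0e-4),
                             ("platinum", 4.0e-6)]:
    print(f"{metal}: {mass_tons * mass_fraction:,.0f} tons")
# nickel ~31 million tons, cobalt ~1.6 million tons, platinum ~6,900 tons:
# all within shouting distance of the figures quoted above.
```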

Uranium? The inner solar system may be richer in uranium than the asteroids and outer planets are. But that would give the aliens the options of going to Mercury, Mars or our moon, all of which are smaller, have lower gravity and wouldn’t present them with a pest eradication problem, as earth would.

Would the aliens want to conquer earth so they could eat our biomass? That’s hard to imagine. I’m not at all sure that truly alien life would be able to metabolize terrestrial meat or vegetation: their biology could be based on different amino acids, sugars with different stereochemistry or countless other subtle differences.

Would they want humans as slave labor? Surely they would have the ability to build robots or other machines (or genetically engineer new organisms, if you prefer) that could work for them much more efficiently and under a far wider range of environmental conditions.

That seems to leave the possibility that the aliens might want earth just for its real estate value. Might they be driven by uncontrollable population pressures? Maybe, but it seems like special pleading to imagine that the advanced aliens would see conquering other planets as a simpler, better solution than either reining in their reproduction or just building more of those giant spaceships.

Yet once again, even if we grant hypothetically that the aliens would want earth for its land (or for its other resources), I think it only highlights where Hawking is most incorrect: in recommending that “humans should do everything possible to avoid contact.” If the aliens want earth for its resources, contact with us will be irrelevant to them. The aliens won’t bother to listen for radio signals or other evidence of intelligence in the universe; they will scan the cosmos for earthlike planets around yellow suns, ones with atmospheres that show evidence of photosynthesis and oxygen production, and so on. After all, primitive as we humans are, we’re already on the verge of building telescopes and other instruments capable of finding other earths. The aliens will have detected earth long before our own communications technology existed; if they want our resources, they will have targeted our planet no matter what we do.
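For a sense of how undemanding that detection job is, consider the arithmetic of the transit method we’re already using ourselves, which comes down to the ratio of two disk areas:

```python
# An Earth-size planet crossing the face of a Sun-like star blocks a
# fraction of its light equal to the ratio of the two disks' areas.
r_earth_km = 6_371.0
r_sun_km = 696_000.0
depth = (r_earth_km / r_sun_km) ** 2
print(f"transit depth: {depth:.1e} (~{depth * 1e6:.0f} parts per million)")
# ~8.4e-5, an ~84 ppm dip -- tiny, but exactly what space photometry
# missions in the Kepler class are built to catch.
```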

So perversely, Hawking’s reasoning leads me to almost the opposite conclusion: we should definitely be trying to contact any aliens within the sound of our radio voice. Because if those resource-hungry, planet-poaching alien invaders are out there, we may want to line up some other alien allies in a hurry.

Update: Phil Plait and Sean Carroll have their own reactions to Hawking's argument. Oh, and also Ethan Siegel at Starts with a Bang.

Update, Part Deux: This story has more legs than a Denebian sandspider, my friends. Here's still more divergent commentary on Hawking and the aliens from Chad Orzel, PZ Myers, and ERV. I'd agree with the possibilities of alien contact having disruptive or destructive effects on our species in various incidental ways, but those differ mightily from the misplaced concern that maybe Mars Needs Coal.