3D printing might seem poised to realize the replicator economy of Star Trek: at virtually the touch of a button, people could have printers whip up anything they might desire. But even if the technology of 3D printing continues to evolve rapidly, there are important limitations on how thoroughly it will replace good old-fashioned manufacturing.
Methane hydrates represent a hugely abundant energy source that could help power the global economy as it shifts away from dirtier coal and oil. That is, the hydrates could become all of those things if engineers and scientists can develop a cost-competitive way to use them.
Irony doesn't come much easier than this. Saturday's New York Times featured an article on "The Overconfidence Problem in Forecasting," about the almost universal tendency for people to think their assessments and decisions are more correct than they really are. The article closes smartly with this quote from Mark Twain:
“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”
Very tidy. Except there's no good evidence that Mark Twain ever wrote or said those words. (And neither did humorist Will Rogers.)
The actual author? It may be Charles Dudley Warner, an editorial writer for the Hartford Courant who was a friend of Mark Twain and who might have been paraphrasing him... or someone else... unless he was simply piping an imaginary quote for better copy. We may never know.
Bob Kalsey examined the roots of these fatherless words in 2008, and pointed out that similar sentiments were voiced by Confucius, Socrates and Satchel Paige, among others.
Over at Neurotic Physiology—one of the spiffy new Scientopia blogs, as you surely already know, right, pardner?—Scicurious offers a helpful primer, in text and diagrams, on the basics of neurotransmission. In closing, she remarks:
What boggles Sci’s mind is the tiny scale on which this is happening (the order of microns, a micron is 0.000001m), and the SPEED. This happens FAST. Every movement of your fingers requires THOUSANDS of these signals. Every new fact you learn requires thousands more. Heck, every word you are looking at, just the ACT of LOOKING and visual signals coming into your brain. Millions of signals, all over the brain, per second. And out of each tiny signal, tiny things change, and those tiny changes determine what patterns are encoded and what are not. Those patterns can determine something like what things you see are remembered or not. And so, those millions of tiny signals will determine how you do on your calculus test, whether you swerve your car away in time to miss the stop sign, and whether you eat that piece of cake. If that’s not mind-boggling, what IS?!
Indeed so. The dance of molecule-size entities and their integration into the beginning or end of a neural signal happens so dizzyingly fast it defies the imagination. And yet what happens in between a neuron's receipt of a signal and its own release of one—the propagation of an action potential along the length of a neuron's axon—can be incredibly slow by comparison. Witness a wonderful description by Johns Hopkins neuroscientist David Linden, which science writer JR Minkel calls "The most striking science analogy I've ever heard." I won't steal the thunder of JR's brief post by quoting Linden's comment, but read it and you'll see: that's slow!
Remember, too, that in Linden's thought experiment, the giant's transatlantic nerves are presumably myelinated (because I'm assuming that even planet-spanning giants still count as mammals). That is, her nerves are segmentally wrapped in fatty myelin tissue, which electrically insulates them and has the advantage of accelerating the propagation speed of action potentials.
Most invertebrates have unmyelinated axons: in them, action potentials move as a smooth, unbroken wave of electrical activity along the axon's length. In the myelinated axons of mammals and other vertebrates—which apparently have more need of fast, energy-efficient neurons—the action potential jumps along between the nodes separating myelinated stretches of membrane: the depolarization of membrane at one node triggers depolarization at the next node, and so on. Because of this jumpy mode of transmission (or saltatory conduction, to use the technical term), action potentials race along in myelinated neurons at speeds commonly between 10 and 120 meters per second, whereas unmyelinated neurons often manage only between 5 and 25 meters per second.
Roughly speaking, myelination seems to increase the propagation speed of action potentials about tenfold. The speed of an action potential also increases with the diameter of an axon, however, which is why neurons that need to conduct signals very rapidly tend to be fatter. (Myelinless invertebrates can therefore compensate for the inefficiency of their neurons by making them thicker.) I suppose that someone who wants to be a killjoy about Linden's great giant analogy could question whether his calculation of the neural transmission speeds took into account how much wider the giant's neurons should be, too: presumably, they should be orders of magnitude faster than those of normal-size humans.
So I'll throw out these two questions for any enterprising readers who would like to calculate the answers:
- If the width of the nerves in Linden's giant scaled up with their length, how much faster than normal human nerves should they be, and how might that affect the giant's reaction time?
- Suppose the giant was not a fee-fi-fo-fum human giant but an ultramega-giant squid with unmyelinated neurons? How fast would the nerve signals travel in it?
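For anyone who wants a head start on those two questions, here is a rough numerical sketch. The distances, baseline speeds and scaling rules are my own assumptions, not Linden's: I take a transatlantic nerve of roughly 5,000 km against a one-meter human axon, and use the common physiological approximations that conduction velocity scales roughly linearly with axon diameter in myelinated fibers and with the square root of diameter in unmyelinated ones.

```python
# Back-of-envelope sketch for the giant-nerve questions above.
# All specific figures here are illustrative assumptions, not Linden's.

NERVE_LENGTH_M = 5_000_000.0   # ~5,000 km transatlantic nerve (assumed)
HUMAN_AXON_M = 1.0             # ~1 m human axon (assumed)
SCALE = NERVE_LENGTH_M / HUMAN_AXON_M  # 5-million-fold scale-up

V_MYELINATED = 100.0    # m/s, fast myelinated fiber at normal size
V_UNMYELINATED = 25.0   # m/s, squid-giant-axon ballpark at normal size

# Question 1: myelinated giant, diameter scaled up with length
# (velocity taken as proportional to diameter).
v_giant_myel = V_MYELINATED * SCALE
t_giant_myel = NERVE_LENGTH_M / v_giant_myel  # seconds

# Note: this naive linear extrapolation gives 5e8 m/s, faster than
# light -- a hint that the scaling law breaks down long before
# planetary sizes, which is itself part of the killjoy's point.

# Question 2: unmyelinated ultramega-squid, velocity scaling with
# the square root of diameter.
v_giant_unmyel = V_UNMYELINATED * SCALE ** 0.5
t_giant_unmyel = NERVE_LENGTH_M / v_giant_unmyel  # seconds

print(f"myelinated giant:   {v_giant_myel:.2e} m/s, transit {t_giant_myel:.3f} s")
print(f"unmyelinated squid: {v_giant_unmyel:.0f} m/s, transit {t_giant_unmyel:.0f} s")
```

Run as-is, the diameter-scaled myelinated giant comes out to a transit of about 10 milliseconds (via that physically impossible faster-than-light velocity), while the square-root-scaled unmyelinated squid-giant needs on the order of a minute and a half per signal.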
Update (added 8/24): Noah Gray, senior editor at the Nature Publishing Group, notes that Sci accidentally overstated the size of the synaptic cleft: it's actually on the order of 20 nanometers on average, not microns. Point taken, and thanks for the correction! Sci may well have fixed that by now in her original post, but I append it here for anyone who encounters the statement in this one first. Further amendment to my update: Actually, on reflection, in her reference to microns, Sci may have been encompassing events beyond the synaptic cleft itself, extending into the presynaptic and postsynaptic neurons (for example, the movement of the vesicles of neurotransmitter). So rather than referring to this as a mistake, I'll thank Noah for the point about the size of the cleft and leave it to Sci to clarify or not, as she wishes.
Next time someone seems skeptical of the idea that anthropogenic climate change represents not just an environmental threat, or a threat to economies, or even a human catastrophe, but an actual threat to global security and political stability... point to this post on Pakistan's horrific floods by Robert Reich at Salon.com:
Flooding there has already stranded 20 million people, more than 10 percent of the population. A fifth of the nation is underwater. More than 3.5 million children are in imminent danger of contracting cholera and acute diarrhea; millions more are in danger of starving if they don’t get help soon. More than 1,500 have already been killed by the floods.
This is a human disaster.
It’s also a frightening opening for the Taliban.
If you’re not moved by the scale of the disaster and its aftermath, consider that our future security is inextricably bound up with the future for Pakistan. Of 175 million Pakistanis, some 100 million are under age 25. In the years ahead they’ll either opt for gainful employment or, in its absence, may choose Islamic extremism.
We are already in a war for their hearts and minds, as well as those of young people throughout the Muslim world.
Right now, Islamic insurgents are using the chaos as an opportunity, attacking police posts in Pakistan’s northwest while police have been occupied in rescue and relief work. Meanwhile, lacking help and losing hope, many Pakistanis are becoming increasingly hostile toward President Asif Ali Zardari.
And, of course, Pakistan has the bomb.
Oh, yes, I know, scientists can't pin Pakistan's floods definitively on global warming. But global warming increases the odds of such disruptive events in Pakistan and elsewhere, and the dangers only multiply as the warming increases. If climate change only makes such disasters a little more probable, consider the repercussions. And consider how the unwillingness of the U.S. and the rest of the industrialized world to curb their greenhouse emissions significantly will sit with people in the countries most likely to bear the brunt of the climate effects.
Update (added 9 a.m.): For a more domestic, constructive and cogent post on a similar theme, go read JR Minkel's "Addressing climate change is about preserving freedom." JR is of course ignoring America's most important freedoms: to drive massive gashogs and leave the garage lights on all night. But maybe he has some kind of crazy hippie point.
Kentucky's Creationist Museum, notwithstanding its stated goal of reaching out to those beyond the Young Earth Biblical creationist community, can make many nonbelievers uncomfortable, according to a study by Bernadette Barton of Morehead State University, as presented Sunday at the American Sociological Association meeting and reported by LiveScience. As she described, ex-fundamentalists, skeptics, gays and others in the groups that she brought on field trips to the museum reported feeling uncomfortable there, fearing that if their beliefs or orientations were revealed, they would be ejected or otherwise persecuted.
This pressure is a form of "compulsory Christianity" that is common in a region known for its fundamentalism, Barton said. People who don't subscribe to fundamentalism often report the need to hide their thoughts for fear of being judged or snubbed. At one point, Barton reported in her paper, a guard with a dog pointedly circled a student twice without saying anything. When he left, a museum patron approached the student and said, "The reason he did that is because of the way you're dressed. We know you're not religious; you just don't fit in." (The student was wearing leggings and a long shirt, Barton writes.)
The pressures were particularly tough for gay members of the group, thanks to exhibits discussing the sinfulness of homosexuality and same-sex marriage. A lesbian couple became paranoid about being near or touching one another, afraid they would be "found out," Barton writes.
I can sympathize with the discomfort of Barton's student group, because it is never pleasant to be surrounded by people (not to mention guards with dogs) who regard everything you represent to be deluded and sinful. Within the bounds of lawful civil liberties, of course, the Creationist Museum, as a private institution, has the right to convey whatever messages and attract whatever clientele it wishes. Most of us in the normal course of our lives would simply avoid places so hostile to us—if we have that choice. (Of course, many people in regions dominated by Christian fundamentalist culture don't have that choice.) The Creationist Museum may fail at outreach, but that's hardly a surprise, because no one who has been there could think it is sincerely meant to convert unbelievers: it exists solely to whip up the faith of the already Christian base.
I know this, and something of how uncomfortable the Creationist Museum can be, because I went there late in 2008. That Christmas, my wife and I were visiting her family in southeastern Indiana, and because the Creationist Museum was only an hour's drive away, some of us decided to make an expedition there.
The potential for trouble seemed real. I had argued against creationism on television and radio and had written a widely distributed article for Scientific American with the gentle title "15 Answers to Creationist Nonsense." Larry, my father-in-law, is not only an avowed and combative atheist but seems to have taken it as a personal goal to try to bring down the Catholic Church during his lifetime as one step toward the total elimination of all religion. On our drive there, I imagined various scenarios in which either or both of us could be drawn into some messy confrontation.
No fireworks occurred, however. The museum was professionally and slickly assembled—not what I'd consider state of the art for science museums because of its heavy reliance on diorama-style displays but perfectly respectable. I tried to be discreet about snapping photos, both because I wasn't sure of the museum's policy on photography and because I didn't want to seem rude about gawking at the hokum. Horrible distortions of science were everywhere, but trying to argue about them with no one in particular would have felt obnoxious. (I'd have been better justified than a creationist loon railing at the Museum of Natural History, but probably no more persuasive to anyone.) Larry shook his head a lot and laughed under his breath as we toured the exhibits, but at no time did he try to pull a plank out of Noah's ark or knock over one of the human mannequins shown cavorting with dinosaurs. Meanwhile, the guards and other staff we encountered seemed friendly and welcoming to everyone there that day.
None of the amiability we experienced, however, mitigates the fact that the museum's message is scientifically insane and morally repulsive to most of us with sensibilities shaped by the Enlightenment. Even in the absence of aggressive acts by guards, dogs or other visitors, you can't be at ease in a place that says you will be consigned to eternal torment if you fail to embrace its creed. Never mind the high-minded burble about outreach: if you are skeptical, gay, non-Christian or otherwise outside the target audience, this museum does not want you there. The museum exists to let fundamentalist Christians dramatically experience their own faith through a reenactment of the Bible as a literal document, from creation through their own personal salvation, while giving them allegedly "scientific" reasons to treat the mythology as fact.
Just yesterday, Chris Anderson and Michael Wolff's article "The Web is Dead. Long Live the Internet" was officially posted on Wired.com. In it, they argue that the rise of apps for smartphones and iPads, RSS feeds, proprietary platforms like the Xbox and so on signals the end of the Web as the center of most people's online lives. In many ways, the piece reads as a loose vindication of Wired's notorious "PUSH!" cover story from 1997, which also argued for the end of browsers' relevance. But honestly, between Alexis Madrigal's beautiful rebuttal at TheAtlantic.com, Rob Beschizza's devastating graphics on Boing Boing (reworking Anderson and Wolff's own choice of data), and other quick rebuttals springing up, has any ambitious piece of Internet-related punditry died a faster, more ignominious death? It seems as though the plausibility of this idea has been drained away even before issues of the paper magazine could have reached subscribers.
I don't think Anderson and Wolff's argument is entirely without merit, but the somewhat more nuanced version of it that seems more resilient is one that Farhad Manjoo endorsed in a couple of columns for Slate early this year, "Computers Should Be More Like Toasters" and "I Love the iPad" (both concurrent with the debut of Apple's iPad, which I don't see as even remotely coincidental). Manjoo was arguing for a simpler, more appliance-level interface for computers rather than the death of something as rich and vital as the Web still is. In effect, unlike Anderson and Wolff, Manjoo avoided the trap of arguing for the historically disproven "zero sum game" model of newer technologies driving older ones to extinction, which Madrigal debunks. Even if interfaces and operating systems become more app-like, the browser-mediated Web may continue to be a crucial part of most users' lives.
There's no reason to think the Web as we know it won't eventually die; most things do. And apps seem destined to play a more ubiquitous role in all our lives for some time to come; terrific. It seems doubtful those two propositions are causally linked, however.
Update (added 8/21): Far more sophisticated discussion of the points raised in the Wired article, including additional smart rebuttals, is available in this series of exchanges between Chris Anderson, Tim O'Reilly and John Batelle—served up by Wired.com itself, to its credit.
Ray Kurzweil, the justly lauded inventor and machine intelligence pioneer, has been predicting that humans will eventually upload their minds into computers for so long that I think his original audience wondered whether a computer was a type of fancy abacus. It simply isn’t news for him to say it anymore, and since nothing substantive has happened recently to make that goal any more imminent, there’s just no good excuse for Wired to still be running articles like this:
Reverse-engineering the human brain so we can simulate it using computers may be just two decades away, says Ray Kurzweil, artificial intelligence expert and author of the best-selling book The Singularity is Near.
It would be the first step toward creating machines that are more powerful than the human brain. These supercomputers could be networked into a cloud computing architecture to amplify their processing capabilities. Meanwhile, algorithms that power them could get more intelligent. Together these could create the ultimate machine that can help us handle the challenges of the future, says Kurzweil.
This article doesn’t explicitly refer to Kurzweil’s inclusion of uploading human consciousness into computers as part of his personal plan for achieving immortality. That’s good, because the idea has already been repeatedly and bloodily drubbed—by writer John Pavlus and by Glenn Zorpette, executive editor of IEEE Spectrum, to take just two recent examples. (Here are audio and a transcription of a conversation between Zorpette, writer John Horgan and Scientific American’s Steve Mirsky that further kicks the dog. And here's a link to Spectrum's terrific 2008 special report that puts the idea of the Singularity in perspective.)
Instead, the Wired piece restricts itself to the technological challenge of building a computer capable of simulating a thinking, human brain. As usual, Kurzweil rationalizes this accomplishment by 2030 by pointing to exponential advances in technology, as famously embodied by Moore’s Law, and this bit of biological reductionism:
A supercomputer capable of running a software simulation of the human brain doesn’t exist yet. Researchers would require a machine with a computational capacity of at least 36.8 petaflops and a memory capacity of 3.2 petabytes ….
Sejnowski says he agrees with Kurzweil’s assessment that about a million lines of code may be enough to simulate the human brain.
Here’s how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.
About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.
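Kurzweil's chain of numbers, as quoted, is easy to reproduce. The only figure I've had to supply myself is an assumed average of 25 bytes per line of code, which is what makes his 25 million bytes come out to "a million lines"; note too that two bits per base actually gives 750 million bytes, which he rounds to "about 800 million."

```python
# Reproducing Kurzweil's back-of-envelope arithmetic, as quoted in
# the Wired piece. The bytes-per-line figure is my assumption.

BASE_PAIRS = 3_000_000_000
BITS_PER_BASE = 2                        # four possible bases -> 2 bits each

bits = BASE_PAIRS * BITS_PER_BASE        # 6 billion bits
bytes_uncompressed = bits // 8           # 750 million bytes ("about 800 million")
bytes_compressed = 50_000_000            # his lossless-compression claim
bytes_for_brain = bytes_compressed // 2  # "about half of that is the brain"

BYTES_PER_LINE = 25                      # assumed average line length
lines_of_code = bytes_for_brain // BYTES_PER_LINE

print(bytes_uncompressed)  # 750000000
print(lines_of_code)       # 1000000
```

The arithmetic itself is trivially consistent, which is exactly why it's worth spelling out: every step after the base-pair count is an unargued assumption about what the numbers mean.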
First, quantitative estimates of the information processing and storage capacities of the brain are all suspect for the simple reason that no one yet understands how nervous systems work. Science has detailed information about neural signaling, and technologies such as fMRI and optogenetics are yielding better information all the time about how the brain’s circuitry produces thoughts, memories and behaviors, but these still fall far short of telling us how brains do anything of interest. Models that treat neurons like transistors and action potentials like digital signals may be too deficient for the job.
But let’s stipulate that some numerical estimate is correct, because mental activities do have to come from physical processes somehow, and those can be quantified and modeled. What about Kurzweil’s premise that “The design of the brain is in the genome”?
In short, no. I was gearing up to explain why that statement is wrong, but then discovered that PZ Myers had done a far better job of refuting it than I could. Read it all for the full force of the rebuttal, but here’s a taste that captures the essence of what’s utterly off kilter:
Its design is not encoded in the genome: what's in the genome is a collection of molecular tools wrapped up in bits of conditional logic, the regulatory part of the genome, that makes cells responsive to interactions with a complex environment. The brain unfolds during development, by means of essential cell:cell interactions, of which we understand only a tiny fraction. The end result is a brain that is much, much more than simply the sum of the nucleotides that encode a few thousand proteins. [Kurzweil] has to simulate all of development from his codebase in order to generate a brain simulator, and he isn't even aware of the magnitude of that problem.
We cannot derive the brain from the protein sequences underlying it; the sequences are insufficient, as well, because the nature of their expression is dependent on the environment and the history of a few hundred billion cells, each plugging along interdependently. We haven't even solved the sequence-to-protein-folding problem, which is an essential first step to executing Kurzweil's clueless algorithm. And we have absolutely no way to calculate in principle all the possible interactions and functions of a single protein with the tens of thousands of other proteins in the cell!
Lay Kurzweil’s error alongside the others at the feet of biology’s most flawed metaphor: that DNA is the blueprint for life.
What this episode ought to call into question for reporters and editors—and yet I doubt that it will—is how reliable or credible Kurzweil’s technological predictions are. Others have evaluated his track record in the past, but I’ll have more to say on it later. For now, in closing I’ll simply borrow this final barb from John Pavlus’s wonderfully named Guns and Elmo site (he’s also responsible for the Rapture of the Nerds image I used as an opener):
How to Make a Singularity
Step 1: “I wonder if brains are just like computers?”
Step 2: Add peta-thingies/giga-whatzits; say “Moore’s Law!” a lot at conferences
Step 3: ??????
Step 4: SINGULARITY!!!11!one
Added later (5:10 pm): I should note that Kurzweil acknowledges that his numeric extrapolations of engineering capabilities omit the fact that even “a perfect simulation of the human brain or cortex won’t do anything unless it is infused with knowledge and trained.” Translation: we’ll have the hardware, but we won’t necessarily have the software. And I guess his statement that “Our work on the brain and understanding the mind is at the cutting edge of the singularity” is his way of saying that creating the right software will be hard.
No doubt his admission is supposed to make me as a reader feel that Kurzweil is only being forthcoming and honest, but in fact it might be the most infuriating part of the article. Computers without the appropriate software might as well be snow globes. As a technologist, Kurzweil knows that better than most of us. So he should also know that neuroscientists’ still primitive understanding of how the brain solves problems, stores and recalls memories, generates consciousness or performs any of the other feats that make it interesting largely moots his how-fast-will-supercomputers-be argument. And yet he makes it anyway.
Previously, I slightly differed with David Crotty’s good post about why open blogging networks might be incompatible with the business models of established publishing brands, particularly for scientific brands, for which credibility is king. David had diagnosed correctly the very real sources of conflict, I thought, but those problems should only become unmanageable with networks whose pure, principled openness went beyond anything publishers seemed interested in embracing anyway. The more important consideration, in the eyes of the bloggers and the increasingly digital-savvy audience, will be how the brand handles the problems that will inevitably arise—in the same way that how publications manage corrections and mistakes already becomes part of their reputation. Or to put it another way: the openness of a blogging network doesn’t imperil a brand’s editorial value so much as it helps to define it. (Would that I had thought to phrase it so pithily before!)
Nevertheless, I do think blogging networks potentially pose at least two other types of problems that could be more seriously at odds with commercial brands’ plans. But neither is exclusive to blogging networks. Rather, they are business problems common to many digital publishing models—the presence of a blogging network just amplifies them.
1) Problems with lack of exclusivity. Nearly all commercial publishers’ digital business models are wobbly in the knees, part of which means that the writers and other creators of digital content typically don’t get paid well, if at all. In the wake of the Scienceblogs/PepsiCo mess, when many Sciblings were pouring out their hearts, even some of the well-known bloggers acknowledged that their earnings were meager—maybe enough to pay for a nice dinner every month.
Nonexistent wages don’t seem to discourage science bloggers, who do it mostly for passion, not pay. Certainly it isn’t holding up the formation of new blogging networks like Scientopia, which seemingly have no sources of revenue and where everyone works for free.
But as Bora Z. has discussed more than once in his recent (and amazing) dissections of the science blogging world, precisely because these networks can’t pay their bloggers, they don’t ask for exclusivity. The bloggers are free to write for other sites, and bloggers may even cross-post the same or similar content on multiple sites. It’s only fair, after all: they’re working for free. [Update: To clarify, the prolific Ed Yong of Not Exactly Rocket Science informs me that both Scienceblogs and Discover do ask for some degree of exclusivity, which only makes sense. My point isn't that exclusivity is totally absent from the world of science blogging, only that the blogging networks that can't reward contributors with cash seem to ask for little or no exclusivity in return at this point. Perhaps that may change.]
You have to love the communitarian rising-web-raises-all-boats spirit of this, and it can be good for the bloggers themselves, but most branding managers, I think, would find that a miserable arrangement. If I publish SuperCoolScience.com and realize that not only do all my bloggers appear elsewhere but so does their best content, what is my blog network doing for me? What unique value proposition does my network have that justifies my taking on whatever overhead keeps it running? And if I don’t have any control over what the bloggers are writing or linking to, I can’t even use them to steer additional traffic my way. It would be easier to put up a blog roll or RSS feeds of the same bloggers on other sites and let the Internet be my network.
This, to my mind, is a real unsolved problem for all the blogging networks: how do the desired synergies of pulling together these individual talents materialize? Blog carnivals, shared memes and the like are all great for a certain kind of community building, but they don’t seem to be enough to build a business proposition around.
All of us online throw around the word community to the point that it’s debased. Too often, what we call communities are really just big open parties: people arrive, hang out with their friends, gossip about those jerks in the corner and split before the drinks run out and somebody asks for help cleaning up. People in genuine communities have more of a shared destiny—they don’t just hang out together, they build something unique.
And that uniqueness is part of what business managers, investors, advertisers and the like are counting on seeing. None of this needs to be a problem for publishers whose models don’t require a measurable financial return on their networks. A good stable of bloggers, even nonexclusive ones, may bring a useful cachet, and the expenses may be so insignificant that it can be run as a loss leader. But for it to make sense to discuss “blogging networks” and serious “business models” in the same sentence, these nascent communities need to periodically put up a barn or something.
2) The corrupting influence of advertising. What brought this to mind was the part of David’s critique of the Scienceblogs/PepsiCo fiasco where he observed that “the bloggers were not the customers, the bloggers were the product.”
He’s right, the bloggers are the product. Or rather, a product. A mistake that many online enthusiasts make is thinking that the real customers are the universe of eyeballs reading blog posts. They’re not; at least not for commercial websites driven primarily by ad revenues. Those advertisers are the true customers because they are paying for access to that audience. In those terms, the site’s visitors are its major product. The bloggers and their posts are intermediary products—bait raised to chum the waters.
That’s not a flattering view of commercial publishing, particularly for any of us involved in creating editorial content. Yet it’s worth considering, especially in light of the old maxim, “Who pays the piper calls the tune.” If your business model depends on drawing advertisers, you can only stay in business by offering content that serves those advertisers’ interests. The editorial needn’t parrot whatever the advertisers want said, because doing so ultimately may not serve their interests, either. But if the advertisers see the content as contrary or irrelevant to their interests, they may see little reason to support it.
Nothing should be surprising about that, and the longtime survival of commercial media proves that this system can work (with occasional ugly lapses). But to the extent that open blog networks or even individual blogs are unpredictable, inconsistent or intermittently antagonistic to chunks of their audience, advertisers will be leery of them. Some advertisers may have the stomach for placing their ads in such environments, but most don’t.
For commercial publishers looking to pull together their own networks of blogs and knowing they don’t have the resources (or the inclination) to police their offerings ahead of time, the trick will be to assemble responsible, autonomous bloggers who collectively deliver a product compatible with the larger brand. They need to avoid the problem that Kent Anderson of The Scholarly Kitchen calls “filter failure.” [Update: Bora points out to me that the term "filter failure" originated with Clay Shirky.]
I imagine it’s not unlike the situation for parents whose teenage kids want to borrow the car. If the foundation for trust is there, granting that liberty can make life better all around. If it’s not, expect some sleepless nights.
Inspired by David Crotty's post at the Scholarly Kitchen, and prompted by a tweet from the indomitable blogfather Bora Z., here are my thoughts. For the most part, I agree with the individual points and criticisms that David raises. Whether I agree with his bottom-line conclusion that open networks are incompatible with established brands, and maybe most especially with brands built on scientific credibility, depends on the purity of one’s definition of open.
Unquestionably, leaving a troop of bloggers to their own scruples while publishing under your banner is fraught with risk, but as problems go, it’s neither unprecedented nor unmanageable in publishing. In fact, I’d say the open blogging network problem is really just a special case of the larger challenge of joining with fast-paced, out-linking (and poorly paying) online publishing culture. Some of the best prescriptions seem to be what David is suggesting or implying, so perhaps any disagreement I have with him is really over definitions rather than views.
David is on the nose about some of the headaches that unsupervised bloggers can pose to publishing brands. He mentions, for example, a post by neuroscientist R. Douglas Fields, blogging for Scientific American, that too credulously covered a supposed link between bedsprings and cancer. I’ll mention another: psychology blogger Jesse Bering intemperately responded to another blogger who took offense at a column he wrote. (Don’t bother looking for that response now, though, because I believe SciAm has removed it, and Bering may have subsequently apologized for it. [fix added later; see comments.]) Also, though there may not be much to agree with in Virginia Heffernan’s notorious New York Times article about Scienceblogs, it’s valid to suggest that the impression some of the more fiery posts make on visitors may not be what Seed Media had originally intended. (Whether that’s actually a problem for Seed Media is for it to say.)
Do those problems undermine the rationale for publishers to back open blogging networks? Let’s face it: even for highly credible, careful publishers, errors of fact and judgment occasionally find their way into print. The reputation of a publishing brand depends not just on how few of those mistakes it makes but on how it handles those that do. Hence, errata and correction pages. With the rise of breakneck publishing on the 24/7 web, lots of publishers have had to accept that some of the meticulous approaches to writing, editing and fact-checking that they used in the past are too slow, and that the advantages of saying something fast often outweigh those of more measured alternatives.
What makes this scary system manageable is a combination of technology and the online culture. Mistakes and controversies that go up online can be flagged and debated in comments, and fixed or deleted as judged appropriate. And experienced online audiences recognize that such discussion and changes occur and may accept them without necessarily losing their respect for the associated brand.
Handling rambunctious columnists isn’t a new challenge. Editors in print and elsewhere have always sweated over how much to intrude on what columnists write. A reason that you hire a columnist is not just that he or she is good but that he or she is reliably good with a minimum of supervision. As an editor, you realize that your columnists may sometimes take positions that the publication as a whole wouldn’t stand beside; you also realize that some of your audience will hold the publication responsible anyway. How and when you step in is part of what defines your editorial identity, but it also reflects how well you trust your audience to recognize and value the diversity of views you are presenting.
Scienceblogs isn’t an anarchically pure open network. It invited certain science bloggers, then let them go with essentially no supervision thereafter. In effect, it did its quality control in advance by choosing whom to invite. Discover does this much more selectively with its far smaller stable of first-rate bloggers, to excellent effect.
Some science publishing brands may be served only by closed networks of staff bloggers, whose every word is parsed and fact-checked by other editors before it goes online. On the face of it, though, such a scheme sounds like it would lack the necessary nimbleness to thrive.
The better middle ground, which I expect we’ll continue to see more of, lies in essentially open networks in which publishers choose bloggers who seem to embrace a common, compatible ethos or perspective. That’s what the newly formed Scientopia.org, for example, seems to have done, while preserving a good diversity of interests and perspectives.
So I don’t think that networks of bloggers are truly problematic for established commercial brands for any of these reasons. The real challenges lie elsewhere—and I’ll get to those in a separate post.