This is what we do here: we comment on all that is awe-inspiring, passing judgment where it seems appropriate on what seems hopeful, perilous, beautiful, or doomed.
Irony doesn’t come much easier than this. Saturday’s New York Times featured an article on “The Overconfidence Problem in Forecasting,” about the almost universal tendency for people to think their assessments and decisions are more correct than they really are. The article closes smartly with this quote from Mark Twain:
“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”
Very tidy. Except there’s no good evidence that Mark Twain ever wrote or said those words. (And neither did humorist Will Rogers.)
The actual author? It may be Charles Dudley Warner, an editorial writer for the Hartford Courant who was a friend of Mark Twain and who might have been paraphrasing him… or someone else… unless he simply invented the quote for better copy. We may never know.
Bob Kalsey examined the roots of these fatherless words in 2008, and pointed out that similar sentiments were voiced by Confucius, Socrates and Satchel Paige, among others.
Ray Kurzweil, the justly lauded inventor and machine intelligence pioneer, has been predicting that humans will eventually upload their minds into computers for so long that I think his original audience wondered whether a computer was a type of fancy abacus. It simply isn’t news for him to say it anymore, and since nothing substantive has happened recently to make that goal any more imminent, there’s just no good excuse for Wired to still be running articles like this:
Reverse-engineering the human brain so we can simulate it using computers may be just two decades away, says Ray Kurzweil, artificial intelligence expert and author of the best-selling book The Singularity is Near.
It would be the first step toward creating machines that are more powerful than the human brain. These supercomputers could be networked into a cloud computing architecture to amplify their processing capabilities. Meanwhile, algorithms that power them could get more intelligent. Together these could create the ultimate machine that can help us handle the challenges of the future, says Kurzweil.
This article doesn’t explicitly refer to Kurzweil’s inclusion of uploading human consciousness into computers as part of his personal plan for achieving immortality. That’s good, because the idea has already been repeatedly and bloodily drubbed—by writer John Pavlus and by Glenn Zorpette, executive editor of IEEE Spectrum, to take just two recent examples. (Here are audio and a transcription of a conversation between Zorpette, writer John Horgan and Scientific American’s Steve Mirsky that further kicks the dog. And here’s a link to Spectrum‘s terrific 2008 special report that puts the idea of the Singularity in perspective.)
Instead, the Wired piece restricts itself to the technological challenge of building a computer capable of simulating a thinking, human brain. As usual, Kurzweil rationalizes this accomplishment by 2030 by pointing to exponential advances in technology, as famously embodied by Moore’s Law, and this bit of biological reductionism:
Previously, I slightly differed with David Crotty’s good post about why open blogging networks might be incompatible with the business models of established publishing brands, particularly scientific brands, for which credibility is king. David had correctly diagnosed the very real sources of conflict, I thought, but those problems should become unmanageable only with networks whose pure, principled openness went beyond anything publishers seemed interested in embracing anyway. The more important consideration, in the eyes of bloggers and an increasingly digital-savvy audience, will be how the brand handles the problems that inevitably arise—in the same way that how publications manage corrections and mistakes already becomes part of their reputation.
Or to put it another way: the openness of a blogging network doesn’t imperil a brand’s editorial value so much as it helps to define it. (Would that I had thought to phrase it so pithily before!)
Nevertheless, I do think blogging networks potentially pose at least two other types of problems that could be more seriously at odds with commercial brands’ plans. But neither is exclusive to blogging networks. Rather, they are business problems common to many digital publishing models—the presence of a blogging network just amplifies them.
For the most part, I agree with the individual points and criticisms that David raises. Whether I agree with his bottom-line conclusion that open networks are incompatible with established brands, and maybe most especially with brands built on scientific credibility, depends on the purity of one’s definition of open.
Unquestionably, leaving a troop of bloggers to their own scruples while publishing under your banner is fraught with risk, but as problems go, it’s neither unprecedented nor unmanageable in publishing. In fact, I’d say the open blogging network problem is really just a special case of the larger challenge of joining with fast-paced, out-linking (and poorly paying) online publishing culture. Some of the best prescriptions seem to be what David is suggesting or implying, so perhaps any disagreement I have with him is really over definitions rather than views.
JR Minkel, who blogs as only he can over at A Fistful of Science, recently brought to my attention this Paul Adams article for Popular Science (and indirectly, this news story in the Guardian) about the underappreciated importance of insects as a food source for many people around the world. That prompted me to dig out this recollection of my own foray into eating insects, which I wrote up years ago.
Memoirs of an Entomophage
My reputation in some circles as a person who eats bugs has been blown out of proportion. Yes, I have knowingly and voluntarily eaten insects, but I wish people wouldn’t pluck out that historical detail to epitomize me (“You remember, I’ve told you about John—he’s the bug-eater!”). It was so out of character for me. As a boy, I was fastidious to the point of annoying priggishness; other children would probably have enjoyed making me eat insects had the idea occurred to them, but I wouldn’t have chosen to do so myself. Bug eating was something I matured into, and performed as a professional duty, even a public service.
Here’s how it happened. Back in 1992, the New York Entomological Society turned 100 years old, and decided to celebrate with a banquet at the map-and-hunting-trophy bedecked headquarters of the Explorers Club on East 70th Street. Yearning for attention, the Society’s leaders had the inspiration to put insects not only on the agenda but also on the menu. For hors d’oeuvres, you could try the mini fontina bruschetta with mealworm ganoush, or perhaps the wax worm fritters with plum sauce. Would you care for beetle bread with your potatoes, or are you saving room for the chocolate cricket torte? Waiter, could I get more mango dip for my water bug?
Mind you, eating insects is not so bizarre and alien a concept in most of the world. According to Gene DeFoliart, the editor of the Food Insects Newsletter (that’s right, they have a newsletter), societies outside of Europe and North America routinely eat at least some insects, sometimes because insects are the closest things to livestock that are available. Most of the world does not share our squeamishness about eating things with antennae. Moreover, the consequences of our cultural bigotry can be serious. The U.S. and Europe largely drive the budgets for food-related research around the world, which means that most spending on raising better food animals goes to studying cows, chickens and the like. Millions if not billions of people in Africa, Asia and Latin America, however, would get much more direct benefit from knowing how to improve the fauna with six legs (or more) that provide much of their protein.
Then, too, it’s not as though most of us in America haven’t ever eaten insects. Eight million of us live in New York alone, after all, and the Board of Health can’t be everywhere. The key difference is how many insects we’ve eaten, and how aware we were of it at the time.
Martial arts are my hobby and explaining science is my job, so the recent appearance of “How karate chops break concrete blocks” on io9.com naturally caught my eye. Unfortunately, not only did it fall far short of my hopes for a lucid explanation, it parroted misleading statements from an article on exactly this same subject that has annoyed me every time I’ve remembered it over the past 10 years. (Oh wonderful Web, is there no old error you can’t make new again?) Indulge me while I try belatedly to set the record straight.
The io9 article starts by asking how the squishy human hands of martial artists can break concrete slabs, wooden boards and other considerably harder objects. Reassuringly, it wastes no time on talk of chi and similar Eastern mysticism but instead goes right to a very loose discussion of the biomechanics of hitting efficiently and striking at vulnerable points on the target. So far, so good, although the descriptions of what to do are probably too vague to be helpful to readers who don’t already know what they’re doing.
Then the article seems to go off the rails (emphasis added):
Unemployed people have time on their hands, you say? Sure, if by “time” you mean the ability to travel backward through time.
I would have thought it was impossible, too, until esteemed economist Arthur Laffer set me straight with his recent op-ed in the Wall Street Journal. Laffer was arguing that it is economically counterproductive to raise unemployment benefits during economic hard times—say, now, for example—because it only creates disincentives for people to work:
Imagine what the unemployment rate would look like if unemployment benefits were universally $150,000 per year. My guess is we’d have a heck of a lot more unemployment. Common sense and personal experience indicate higher unemployment benefits will make unemployment less unattractive and thereby increase unemployment even in the Great Recession.
Can’t argue with that. True, unemployment benefits aren’t actually $150,000 a year—about three times the median household income in 2008. They’re closer to just $300 a week, or $15,600 a year. This hypothetical argument, which supposes the exact opposite of reality (that one could on average make much more money by being unemployed), is nevertheless irrefutably compelling.
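For the record, the arithmetic above is easy to check. A minimal sketch in Python (the ~$50,300 figure for 2008 median household income is an outside Census approximation, not something from Laffer’s op-ed):

```python
# Sanity-check the benefits arithmetic.
weekly_benefit = 300                   # typical weekly unemployment benefit
actual_annual = weekly_benefit * 52    # what benefits really pay per year
print(actual_annual)                   # 15600

hypothetical_annual = 150_000          # Laffer's thought-experiment figure
median_income_2008 = 50_300            # approx. U.S. median household income, an assumption
print(round(hypothetical_annual / median_income_2008, 1))  # 3.0
```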
What really persuaded me, though, was this graph that Laffer and the WSJ supplied to bolster the argument that “since the 1970s there’s been a close correlation between increased unemployment benefits and an increase in the unemployment rate.”
The correlation is indeed close—with increases in unemployment benefits lagging spikes in unemployment by a year or so. This can only mean one thing. Those unemployed layabouts are using some of their $150,000-a-year incomes to lounge around in Hot Tub Time Machines, jump back in time and get themselves laid off so that they can soak up all that sweet, sweet government gravy.
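For what it’s worth, that kind of lag is easy to see in a correlation. Here’s a minimal sketch with made-up numbers (nothing here comes from Laffer’s actual data): a series that simply copies another series one period later correlates perfectly only after you shift it back by one period.

```python
# Toy illustration of a lagged correlation. Series B "responds" to
# series A one period later, so B matches A best when shifted back.
A = [1, 5, 2, 1, 6, 2, 1, 4]          # stand-in for unemployment spikes
B = [0] + A[:-1]                       # benefits follow a year later

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

same_year = corr(A, B)                 # compare the series as-is
shifted   = corr(A[:-1], B[1:])        # compare A(t) with B(t+1)
print(shifted > same_year)             # True: the lagged match is tighter
```

The causal story, of course, is a separate question from the correlation.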
Laffer is apparently not alone in his profession in believing that time travel is a major influence on the U.S. economy. Nobel laureate economist Edward C. Prescott recently stated at the Society for Economic Dynamics meeting in Montreal that Obama caused the current recession, which is a neat trick because the recession started in December 2007.
In summary, then, the keys to time travel are: (1) Lose your job. (2) Study economics.
Physicist Michael Faraday’s journal from 1849 provided the epigram currently perched on the welcome page of my personal site, and I suspect it may already be familiar to most readers of popular science:
“Nothing is too wonderful to be true if it be consistent with the laws of nature, and in such things as these, experiment is the best test of such consistency.”
The first clause has developed an epigrammatic life of its own; science writers often roll it out when they want to convey appreciative awe for some brilliant intricacy or unexpected beauty of the natural world. That was certainly the intention of the late, great polymathic team of Philip and Phylis Morrison when they introduced me to Faraday’s quote during the 1990s while planning the reinvention of their long-running book reviews as a monthly essay column, “Wonders,” for Scientific American. (Originally, they wanted the title of the column to be “Nothing is Too Wonderful to Be True,” until I gently pointed out to them that such a lengthy phrase might not even fit across the top of the page in the new design.)
Far be it from me to knock anyone’s sense of natural wonder, or the power of science to uncover glories that inspire it. Yet I notice that most casual uses of the quotation leave out “…and in such things as these, experiment is the best test of such consistency.” And in so doing, I think, they are unfortunately omitting the most important part of Faraday’s reflection.
Faraday is, after all, not just cheering for us to marvel at nature. He is cautioning us to test our most marvelous hypotheses through rigorous experiment to see if they hold true and consistent with the rest of physical reality. The universe’s inventiveness can far surpass anything we might imagine, but we therefore should not let either our own incredulity or rapture at the amazing possibilities lead us astray.
The first half of Faraday’s quote makes it poetry. The second half makes it science. The union of the two yields the richest human experience from engaging the universe with all our faculties. As such, I couldn’t think of a more fitting sentiment to take as a slogan.
And with this opening solemnity out of the way, away we go….