About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Saturday, May 28, 2011

Michael’s Picks

by Michael De Dora
* For the first time in Gallup’s tracking of same-sex marriage, the polling organization has found a majority of Americans (53 percent) support marriage equality.
* Miranda Celeste Hale has examined the U.S. Conference of Catholic Bishops’ recent report on priest sex abuse, and found a host of flaws in its methodology, data, and conclusions.
* A recent poll shows that nearly half of Americans (49 percent) identify as pro-choice — yet in the same poll, 51 percent said they believe abortion is morally wrong.
* Freakonomics author Steven Levitt says “the primary determinant of where I stand with respect to government interference in activities comes down to the answer to a simple question: How would I feel if my daughter were engaged in that activity?”
* Kevin Drum correctly notes that Levitt’s argument isn’t very good, but argues that “the daughter test” is the way most people think about morality. 
* Wisconsin State Superintendent Tony Evers told Gov. Scott Walker this week that Walker’s plan to expand the state’s voucher program is “morally wrong.”
* The Vatican is holding a conference this weekend on the morality and effectiveness of using condoms to prevent HIV/AIDS.  
* The Economist discusses Sen. Chuck Grassley’s (R-Iowa) report that finds “The Constitution does not require the government to exempt churches from federal income taxation or from filing tax and information returns.”

Monday, May 23, 2011

Lena's Picks

by Lena Groeger
* Quirky professors and annual hacks — could MIT be the “beacon of inspiration” for creating a brighter future around science and technology? 
* Don’t worry, it’s not the end of the world. What happens when the rapture doesn’t come.
* The Stone is back! Looking forward to this fantastic NY Times philosophy series.
* “Discovering that hunter-gatherers had constructed Göbekli Tepe was like finding that someone had built a 747 in a basement with an X-Acto knife.” What this ancient site tells us about the role of religion in the rise of civilization. 
* The forgotten sanctuaries of learning: John Wilford on the golden age of Arabic science.
* The wonders of modern technology have brought us food that can grow quicker, last longer, taste better, and look better. Ironic that we would need a robot to test its safety.
* How do education, economics and religion fit together? Very colorfully, in this NY Times chart.
* We needn’t be so afraid of memory loss.
* An Islamic studies scholar outlines the challenges of explaining Islam to the public in a post 9/11 world. 
* Synesthesia is a condition of mixed sensations — so that you might taste the number four or hear blue. This bizarrely surreal video attempts to capture the experience.

Friday, May 20, 2011

Massimo's Picks

by Massimo Pigliucci
* Mom says to daughter that she will be left behind and go to hell this Saturday. I wonder about their Sunday morning conversation...
* If you read this and you don't at least consider starting a revolution there may be something seriously wrong with you.
* Sleep deprivation can make you unethical.
* Podcasts as Socratic dialogues.
* The new religion of experimental monotheism, where belief in god comes with statistical confidence intervals...
* Why brains are not like computers.
* Philosophy Talk on beliefs gone wild, how the human mind can be filled to the brim with all kinds of falsehoods.
* Google has an in-house philosopher, tending to their moral operating system.
* My recent talk on the philosophical and biological aspects of the concept of race.
* My Amazon review of Science Fiction and Philosophy.
* The poor quality of undergraduate education in the United States.
* The Synthese / Intelligent Design controversy makes it to the New York Times.
* The Academy Shrugged: Charles Koch buys economics department at FSU and destroys academic freedom.
* Very nice piece on nature vs nurture by one of the most influential scholars in my early career: Richard Lewontin.

Wednesday, May 18, 2011

Who dunnit? The not-so-insignificant quirks of language

by Lena Groeger
When it comes to cognitive processes like memory, judgment and decision-making, humans are subject to all sorts of biases and seemingly trivial influences. Now, add one more to that list: peculiar habits of language.
Several studies in the past year have hinted at the many subtle ways in which the language you speak can play a role in how you remember events, make judgments of blame and responsibility, and dole out punishment. Specifically, psychologists and linguists have looked at how different languages construct agency, and the implications that follow.
First, let’s take a look at how speakers of different languages actually describe actions and outcomes in which an “agent” is involved. English speakers typically use agentive expressions to describe accidents: “I broke the vase.” Non-agentive expressions, like “mistakes were made,” often sound evasive. Spanish speakers, on the other hand, typically describe those same accidents as passive occurrences: “se me rompió el florero,” or translated literally: “the vase broke itself to me.” Neither Spanish nor English speakers are locked into only one way of saying things, but these general patterns of language often make certain expressions sound more natural.
To demonstrate these patterns, psychologists Lera Boroditsky and Caitlin Fausey had English and Spanish speakers watch videos of various events in which a man interacts with an object. In some cases, the event is clearly intentional — he picks up a pencil, deliberately snaps it in half, and then smiles contentedly. In other cases, it is clearly an accident — he is in the midst of writing when the pencil breaks and he throws his hands up in surprise. After watching these videos, subjects were asked to describe what had just happened.
When describing intentional events, English and Spanish speakers used agentive expressions like “he broke the pencil” equally. But when describing accidental events, English speakers used agentive expressions much more often than Spanish speakers. So an accidental event would be described in English as “he popped the balloon,” but in Spanish as “the balloon popped.” Same event, different description, based entirely on the language of the subject.
Mere description is one thing — memory is quite another. To test whether language would play a role in how well subjects could remember agents, Boroditsky conducted a second study. She had subjects watch videos of events featuring an actor in a blue shirt or a different actor in a yellow shirt. Later, they saw the same event performed by a third actor, and had to recall who (blue or yellow) had performed the original. English and Spanish speakers remembered who performed the intentional events equally well, but not so with the accidental events. In those cases, Spanish speakers had a much harder time remembering who did it. Spanish speakers didn’t have worse memory overall — the discrepancy only showed up with accidental events. In other words, memories about who did what seem to be influenced by how much emphasis a language places on the who.
In both the previous examples, subjects produced their own descriptions of events. But what happens, as it often does in the real world, when those descriptions are provided by others? A few studies suggest that descriptions can have a profound influence on another cognitive process in which agency is of utmost importance: judgments of guilt or blame.
Remember the Justin Timberlake and Janet Jackson “wardrobe malfunction” of 2004? In an amusing study, subjects (all English speakers this time) read one of two versions of a description of the event, containing either the phrase “he tore the bodice” or “the bodice tore.” People who read the first version blamed Timberlake more and fined him 53% more heavily than those who read the second version. This was true even when, in addition to reading a written description, subjects watched a video of the incident. In other words, even after witnessing the tearing with their own eyes, subjects’ judgments of blame and punishment were dependent on the phrasing used to describe it.
Which raises an interesting question. Could it be that speakers of different languages dole out more or less severe punishments depending on the frequency of agentive expressions in their language? The research isn’t there yet, but it remains an intriguing possibility.
These findings are not just entertaining factoids about language use. They suggest that patterns in language might actually shape how people construe and reason about events. And that has real-world consequences, particularly in legal contexts. The specific language used in police reports, legal statements, court testimony, and public discourse is full of descriptions that influence not only verdicts of guilt or innocence but also the sentencing process.
Regardless of how the nuances of language shape our judgments and memory, there is one very practical instance in which description of agency makes a huge difference: in translation. Linguist Luna Filipovic describes a case in California in which a Spanish-speaking suspect was accused of manslaughter. He told the interrogator: “se me cayó,” which translates literally: “to me it happened that she fell.” It was translated for the court as "I dropped her." Do these two phrases really mean the same thing? Would they mean the same thing to a juror? Not so clear.
What is clear is how susceptible we are to habits of expression or twists of translation. And we’re only just beginning to understand the consequences.

Monday, May 16, 2011

Podcast double teaser: what is philosophy of science good for, and why should we care about teaching the humanities?

by Massimo Pigliucci
Julia and I are about to tape two episodes of the Rationally Speaking podcast, the first on the topic of philosophy of science, the second on the somewhat related subject of recent assaults on the teaching of humanities in American (and British) universities.
There is of course much to be said about philosophy of science, a topic which we have touched on before and will undoubtedly touch on again. Still, a good point of departure for this discussion is a recent interview with Alex Rosenberg, author of Philosophy of Science: A Contemporary Introduction, published by Routledge. I don't necessarily subscribe to all of Rosenberg's specific views, naturally, but he is a prominent philosopher of science, and he is addressing the sort of questions we will debate during the show.
These questions include: what is philosophy of science about? Should philsci matter to scholars in other disciplines, particularly scientists? Why is there a certain degree of animosity between philosophers and scientists? For instance, Richard Feynman famously said that philsci is as relevant to scientists as ornithology is to birds, but Daniel Dennett quipped that there is no such thing as philosophy-free science, only science whose philosophical baggage is taken on-board unexamined. And more: how does philsci relate to philosophy more broadly? Which philosophers of science have had the most impact during the past century, and why? (Here is where my own views will diverge sharply from Rosenberg's.) And what are the current areas of investigation in philsci?
Next we will turn to what is rapidly becoming a war on the humanities in many universities, fueled at least in part by the increasingly widespread attitude that higher education should be treated as a business, and that programs that bring in money (in the form of high tuition from students or external grants) should be prioritized, with the rest put on the chopping block.
A recent example is the closing of several language departments at SUNY-Albany, which has been roundly criticized, among others by molecular biologist Gregory A. Petsko on his blog (witty, as well as incisive). This comes at a time of general crisis in academia, when entire departments can literally be bought by outsiders with overt political agendas, and when people begin to seriously question whether a degree for which one spends tens or hundreds of thousands of dollars is actually worth the price tag.
So, what is the point of teaching languages, literature, history or philosophy? Can we seriously have universities that focus only on science and marketable skills? Is the ideal of a liberal education an antiquated leftover of bygone eras, or a necessary foundation for any open democratic society? Chime in, and then download the episodes!

Friday, May 13, 2011

Jonathan Haidt does it again, unfortunately

by Massimo Pigliucci
I have criticized social psychologist Jonathan Haidt before, specifically for what I think is his badly researched and argued contention that the Academy discriminates against conservatives (I rather think it is many conservatives who are not attracted to the academy — with all that open inquiry and low salaries). On the other hand, I do like his more nuanced research on the different sets of moral criteria assumed by liberals and conservatives, though even there he has a tendency to step over from “is” to “ought” in the sort of seamless way that rightly annoyed David Hume.
And now he has done it again. In a flabbergasting editorial published in the New York Times after the news of bin Laden’s death came out, Haidt once more begins with interesting science — a mix of (as we shall see, a bit sloppy) evolutionary biology and sociology — and ends up in moral philosophical territory, where he predictably blunders.
Before I tell you what he wrote and where I think he went wrong, let me make clear my own position on the complicated issue of bin Laden’s killing. First, I rejoiced, as any decent human being, I think, ought to do on that occasion. Second, I did not “celebrate,” i.e., go to a party, shout in the streets, drink beer, or sing God Bless America. Third, I do think the US did the right thing, all things considered. Fourth, however, the US did indeed act in defiance of international law and used its usual double standard based on American exceptionalism (just imagine what would have happened if another country had conducted a commando raid on American soil to kill an international criminal who had somehow escaped the FBI’s attention...). As I hope you can see, I hold complex and perhaps even partially contradictory views on this, which I think are appropriate to the complexity of the situation itself.
Okay, now here is Haidt. He starts out by wondering why so many people were critical of, even disturbed by, the street celebrations that spontaneously erupted in the US after the news of bin Laden’s death. And he says (emphasis mine):
“Why are so many Americans reluctant to join the party? As a social psychologist I believe that one major reason is that some people are thinking about this national event using the same moral intuitions they’d use for a standard criminal case. For example, they ask us to imagine whether it would be appropriate for two parents to celebrate the execution, by lethal injection, of the man who murdered their daughter. Of course the parents would be entitled to feel relief and perhaps even private joy. But if they threw a party at the prison gates, popping Champagne corks as the syringe went in, that would be a celebration of death and vengeance, not justice. And is that not what we saw last Sunday night when young revelers, some drinking beer, converged on Times Square and the White House?”
To which very reasonable question he astoundingly answers: “No, it is not”! And why not? Because according to Haidt “you can’t just scale up your ideas about morality at the individual level and apply them to groups and nations.” One wonders whether that “can’t” is a principle of logic, a scientific law, or what, because I thought that actually the idea that what is decent for an individual to do is also decent for a group of individuals to do is one of the cornerstones of what we like to call civilization.
But Haidt has different ideas, informed by his (mis)understanding of evolutionary theory. He proceeds to tell his readers that humans evolved by a two-step process: individual selection for selfishness and group selection for cooperativeness, just like “bees, ants and termites.” First off, the hypothesis that group selection had anything to do with human evolution is just that, a (controversial) hypothesis, far from having been established (pace my good colleague David Sloan Wilson). Second, bees, ants and termites did not evolve their social behavior by group selection, but by a different mechanism known as kin selection, which is actually closer in nature to individual selection (because it acts on “inclusive fitness,” i.e. the fitness you enjoy by means of propagating your genes not just on your own, but also by way of your relatives’ survival and reproduction). This, incidentally, agrees with the obvious observation that human cooperation and societal structure are nothing like those of eusocial insects.
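(A side note of mine, not anything in Haidt’s op-ed: the standard formalism behind kin selection is Hamilton’s rule, which says that a gene for altruistic behavior is favored by selection when

$rb > c$

where $r$ is the genetic relatedness between actor and recipient, $b$ the reproductive benefit to the recipient, and $c$ the reproductive cost to the actor. The unusually high relatedness within insect colonies is what makes bees, ants and termites the textbook case, and it is exactly the condition that human societies of mostly unrelated individuals do not meet.)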
Haidt then moves on to territory that is more familiar to him, and where the actual insightful contribution of his op-ed piece is more clearly visible. He tells us that sociologists since Émile Durkheim have written about different levels of social sentiments. At a lower level we show affection and respect for individuals, but we also engage in group-level, “collective” emotions (an oxymoron, really, since emotions are by definition experienced by individuals, not groups, but let’s let that slide), which, according to Durkheim and sociologists since, explain a variety of human phenomena from team sports to warfare.
Here is how Haidt elaborates on the dynamics of collective emotions: “One such emotion [Durkheim] called ‘collective effervescence’: the passion and ecstasy that is found in tribal religious rituals when communities come together to sing, dance around a fire and dissolve the boundaries that separate them from each other. The spontaneous celebrations of last week were straight out of Durkheim.”
I’m sure they were. But were they a good thing? Haidt too asks this question, and that’s where things go badly again, as he steps from sociology (what is) to moral philosophy (what ought to be), and makes a predictable blunder. He distinguishes between nationalism and patriotism, arguing that the former is bad (because it leads to hostility toward other countries), while the latter is good (because...? he doesn’t really say). You can think of the difference as waging aggressive war against another nation vs celebrating your team’s winning the World Cup. Clearly, the first one is bad, the second one is morally neutral (which is not at all the same as saying that it is morally good, by the way).
Now, how do we know that last week’s street celebrations were a matter of patriotism and not nationalism? We don’t, actually, but Haidt performs a nice sleight of hand and tells us that research has shown that the coming together of people after the attacks of 9/11 (for instance, in donating blood for the victims) was motivated more by the former than the latter. I completely believe that, but I don’t see how it licenses the extension of the same findings to the new situation. Is blood donation on the same moral level as shouting and drinking beer?
Most damning of all, Haidt concludes his piece by writing: “This is why I believe that last week’s celebrations were good and healthy. America achieved its goal — bravely and decisively — after 10 painful years. People who love their country sought out one another to share collective effervescence.” Besides the already noted fact that the “brave and decisive” action was, while justified, a bit marred by hypocrisy and the flouting of international law, it seems to me that the people who rejoiced without celebrating were showing patriotism and compassion for the victims of 9/11, while those who were chanting “USA, USA” in the streets while holding beer cans were engaging in the most obvious and deplorable type of nationalism — again, that behavior is appropriate after winning the World Cup, not after killing someone.
Perhaps the best criticism of Haidt’s piece came from one of his own readers, Grigori Guitchounts (interestingly, a neuroscientist), from Cambridge, MA, who wrote: “Just because something is natural doesn’t mean that it is morally acceptable. This is obvious when it comes to an issue like sexual predation: men may have strong sexual urges, but most of those cannot be acted on in a morally defensible way. Science can guide our morality, but it does not determine it. Morality must be determined by philosophy rather than facts alone. We can choose whether we want to celebrate the killing of a monster, but no science will ever justify that decision.” Amen to that.

Wednesday, May 11, 2011

Lena's Picks

by Lena Groeger
* “The distinction between organisms and their environments remains deeply embedded in our consciousness.” Evelyn Fox Keller tries to dispel the nature/nurture dichotomy.
* Rationally Speaking’s Julia Galef on life’s big questions (and what The Hitchhiker’s Guide has to say about ‘em).
* The (impossible?) endeavor to simulate the human brain…
* One of my favorite philosophers turned 300 this past weekend. A quote and a story remembering David Hume.
* On his life, his book, and the Large Hadron Collider: an interview with Stephen Hawking.
* If you’re going to vote, vote well. Otherwise, you’re doing it wrong. So says Jason Brennan in his new book, The Ethics of Voting. 
* It’s pretty relaxing in the armchair. This survey of careers with the lowest levels of stress puts philosophers in 7th place. And that means… what exactly?
* Accuracy is not what it used to be (at least, it shouldn’t be). On improving accuracy in the news.
* “Technology can be, but is not always, the answer. Ideas about nature matter.” Alexis Madrigal on his new book about green technology and the ideas that drive policy and products.
* And this one just for fun: the 2011 guide to making people feel old. Toy Story was really that long ago?!

Tuesday, May 10, 2011

Michael’s Picks

by Michael De Dora
* Rumor has it President Obama will issue an executive order that would require companies bidding for federal contracts to disclose political contributions now secret under the Citizens United ruling.
* The U.S. Court of Appeals has lifted the ban on federal funding for stem cell research. 
* How surprising: research shows atheists are decent people — perhaps even more ethical than the religious. 
* Patricia Churchland talks with the Boston Globe about her new book, Braintrust: What Neuroscience Tells Us about Morality. This one is on my to-read list.
* An interview with primatologist and ethologist Frans de Waal on the biological basis of morality.
* A New Jersey Transit worker who had been fired for burning a Quran has been given his job back, plus damages and back pay.
* Benjamin Nelson, on the blog Talking Philosophy, gives three reasons why John Stuart Mill’s utilitarianism should be taken more seriously.

Sunday, May 08, 2011

Barbara Bradley Hagerty does it again, unfortunately

by Massimo Pigliucci
I’m really getting irritated with NPR reporter Barbara Bradley Hagerty. Recently I wrote about her inane piece on miracles, after which I found out that she wrote a book entitled Fingerprints of God: The Search for the Science of Spirituality (oh boy), and that — to no one’s surprise — she has been awarded a fellowship by the infamous Templeton Foundation as part of their Journalism Programme in Science & Religion. And now she’s done it again. On Saturday, May 7, she broadcast an incredibly uncritical and uninformative piece on people who are predicting the end of the world (coming soon: May 21!). Let me give you a taste of the piece, then I’ll comment on why this is the sort of garbage I expect from Faux News, not NPR.
Hagerty starts out by featuring two poor deluded fellows from New Jersey, Brian Haubert, a 33-year-old actuary, and Kevin Brown, the owner of a nutrition and wellness business. We find them handing out pamphlets and trying to convince people that Judgment Day is around the corner. Their prediction is pretty specific:
“[On May 21] starting in the Pacific Rim at around the 6 p.m. local time hour, in each time zone, there will be a great earthquake, such as has never been in the history of the Earth.” This will result in the “rapture” of “true” Christians, while the rest of us will wait behind for another 153 days, after which “the entire universe and planet Earth will be destroyed forever.” All right, then, time to convert and pack, not necessarily in that order.
How do these people know any of this? Naturally, because it says so in the Bible, even though theologians and most other (apparently “not true”) Christians disagree. Hagerty puts it this way, talking about Haubert: “Noah's flood to May 21, 2011, is exactly 7,000 years. Revelations like this have changed his life.” Well, he ought to know, he is an actuary, must be good with numbers.
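(For the record, here is the arithmetic behind that “revelation.” Camping dates Noah’s flood to 4990 BC, and since there is no year zero between 1 BC and 1 AD:

$4990 + 2011 - 1 = 7000$ years.

Why the flood should fall in 4990 BC, and why exactly 7,000 years must then elapse, are of course left as exercises in faith.)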
Of course, doomsday predictions aren’t new (and guess what? They have all failed!), but apparently the latest craze was inspired by Harold Camping, the 89-year-old founder of the Family Radio network (because as is well known, non-fundies really really hate families). Hagerty reports — without comment, of course — that scores of people have given up their jobs and their families to follow this crazy son of a gun, who’ll likely be dead before too many of his followers can get really upset at the bullshit he pulled on them.
Hagerty features one such poor soul, 27-year-old Adrienne Martinez, who listened to Family Radio with her husband and decided to quit everything and just wait out the end of the world in a rented place in Florida. And here’s the kicker: they budgeted things so that their money will run out on May 21, despite the fact that they have a 2-year-old daughter and that Adrienne is pregnant. Oh well, the baby is due in June, after the Rapture (do they have good health care in heaven? One can’t help but wonder).
Now, as it turns out (surprise, surprise!) Family Radio’s Camping had already predicted the end of the world, for September 6, 1994. As you might recall, it didn’t happen. His excuse? Well, he hadn’t managed to finish reading Jeremiah, because, you know, it’s a big book (and one that to a large extent is concerned precisely with the end of time). Wait, he didn’t do his fracking homework and still went on the air and told the world to prepare for the end? Can we sue him for theological malpractice?
That’s it, that’s all Hagerty says in her piece. Now, should journalists not cover end-of-time stories? Or stories about miracles? Of course they should. But they should also put them in perspective: explain to their readers or listeners that most sane people don’t actually believe these things. That faith in miracles undermines trust in science and medicine, and that faith in the coming end of the world can seriously hamper your retirement plans, not to mention the college savings for your children. (Here’s a free idea for Hagerty for a follow-up story: go back and interview the same people on May 22. That ought to be, ahem, informative.)
Superstition hurts and kills; it is no joke. And serious journalism should at least try to put these human stories in a proper (i.e., sane) perspective. But I suspect if you do that, you won’t be getting much funding from the Templeton Foundation, nor would you be able to sell books about your “spiritual evolution.” Barbara, NPR, please, you really owe us something better than this.

Friday, May 06, 2011

Razoring Ockham’s razor


by Massimo Pigliucci
Scientists, philosophers and skeptics alike are familiar with the idea of Ockham’s razor, an epistemological principle formulated in a number of ways by the English Franciscan friar and scholastic philosopher William of Ockham (1288-1348). Here is one version of it, from the pen of its originator:
Frustra fit per plura quod potest fieri per pauciora. [It is futile to do with more things that which can be done with fewer] (Summa Totius Logicae)
Philosophers often refer to this as the principle of economy, while scientists tend to call it parsimony. Skeptics invoke it every time they wish to dismiss out of hand claims of unusual phenomena (after all, to invoke the “unusual” is by definition unparsimonious, so there).
There is a problem with all of this, however, of which I was reminded recently while reading an old paper by my colleague Elliott Sober, one of the most prominent contemporary philosophers of biology. Sober’s article is provocatively entitled “Let’s razor Ockham’s razor” and it is available for download from his web site.
Let me begin by reassuring you that Sober didn’t throw the razor in the trash. However, he cut it down to size, so to speak. The obvious question to ask about Ockham’s razor is: why? On what basis are we justified in thinking that, as a matter of general practice, the simplest hypothesis is the most likely one to be true? Setting aside the surprisingly difficult task of operationally defining “simpler” in the context of scientific hypotheses (it can be done, but only in certain domains, and it ain’t straightforward), there doesn’t seem to be any particular logical or metaphysical reason to believe that the universe is as simple as it could be.
Indeed, we know it’s not. The history of science is replete with examples of simpler (“more elegant,” if you are aesthetically inclined) hypotheses that had to yield to more clumsy and complicated ones. The Keplerian idea of elliptical planetary orbits is demonstrably more complicated than the Copernican one of circular orbits (because it takes more parameters to define an ellipse than a circle), and yet, planets do in fact run around the gravitational center of the solar system in ellipses, not circles.
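To see the parameter counting explicitly (a back-of-the-envelope note of mine, not an argument from Sober’s paper): a circle in the plane is fixed by three numbers,

$(x - x_0)^2 + (y - y_0)^2 = r^2$, i.e. the center $(x_0, y_0)$ plus the radius $r$,

while a general ellipse needs five: the center, the two semi-axes $a$ and $b$, and an orientation angle $\theta$. The ellipse costs two extra parameters, and yet it is the hypothesis nature vindicated.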
Lee Smolin (in his delightful The Trouble with Physics) gives us a good history of 20th century physics, replete with a veritable cemetery of hypotheses that people thought “must” have been right because they were so simple and beautiful, and yet turned out to be wrong because the data stubbornly contradicted them.
In Sober’s paper you will find a discussion of two uses of Ockham’s razor in biology: George Williams’ famous critique of group selection, and “cladistic” phylogenetic analyses. In the first case, Williams argued that individual- or gene-level selective explanations are preferable to group-selective explanations because they are more parsimonious. In the second case, modern systematists use parsimony to reconstruct the most likely phylogenetic relationships among species, assuming that a smaller number of independent evolutionary changes is more likely than a larger number (a sketch of how this counting works follows below).
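To make “smaller number of independent evolutionary changes” concrete, here is a minimal sketch (mine, not Sober’s) of the standard Fitch parsimony count for a single character on a rooted binary tree; the toy taxa and character states are invented for illustration:

# Minimal sketch of Fitch's parsimony count for one character on a rooted
# binary tree. Leaves are taxon names; internal nodes are (left, right) pairs.

def fitch(node, states):
    """Return (candidate_states, min_change_count) for the subtree at node."""
    if isinstance(node, str):               # leaf: a single observed state
        return {states[node]}, 0
    left, right = node
    lstates, lchanges = fitch(left, states)
    rstates, rchanges = fitch(right, states)
    shared = lstates & rstates
    if shared:                              # children can agree: no extra change
        return shared, lchanges + rchanges
    return lstates | rstates, lchanges + rchanges + 1  # one inferred change

tree = (("bat", "mouse"), ("bird", "crocodile"))
states = {"bat": "wings", "mouse": "none", "bird": "wings", "crocodile": "none"}
print(fitch(tree, states)[1])   # -> 2: two independent origins of wings

Regroup the same data as (("bat", "bird"), ("mouse", "crocodile")) and the count drops to one change, so parsimony prefers that tree; and since bat and bird wings really did evolve independently, this toy case is also a handy reminder of the non-parsimonious evolutionary paths mentioned in the next paragraph.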
Part of the problem is that we do have examples of both group selection (not many, but they are there), and of non-parsimonious evolutionary paths, which means that at best Ockham’s razor can be used as a first approximation heuristic, not as a sound principle of scientific inference.
And it gets worse before it gets better. Sober cites Aristotle, who chided Plato for hypostatizing The Good. You see, Plato was always running around asking what makes for a Good Musician, or a Good General. By using the word Good in all these inquiries, he came to believe that all these activities have something fundamental in common, that there is a general concept of Good that gets instantiated in being a good musician, general, etc. But that, of course, is nonsense on stilts, since what makes for a good musician has nothing whatsoever to do with what makes for a good general.
Analogously, suggests Sober, the various uses of Ockham’s razor have no metaphysical or logical universal principle in common — despite what many scientists, skeptics and even philosophers seem to think. Williams was correct, group selection is less likely than individual selection (though not impossible), and the cladists are correct too that parsimony is usually a good way to evaluate competing phylogenetic hypotheses. But the two cases (and many others) do not share any universal property.
What’s going on, then? Sober’s solution is to invoke the famous Duhem thesis.** Pierre Duhem suggested in 1908 that, as Sober puts it: “it is wrong to think that hypothesis H makes predictions about observation O; it is the conjunction of H&A [where A is a set of auxiliary hypotheses] that issues in testable consequences.”
This means that, for instance, when astronomer Arthur Eddington “tested” Einstein’s General Theory of Relativity during a famous 1919 total eclipse of the Sun — by showing that the Sun’s gravitational mass was indeed deflecting starlight by exactly the amount predicted by Einstein — he was not, strictly speaking, doing any such thing. Eddington was testing Einstein’s theory given a set of auxiliary hypotheses, a set that included independent estimates of the mass of the Sun, the laws of optics that allowed the telescopes to work, the precision of measurement of stellar positions, and even the technical processing of the resulting photographs. Had Eddington failed to confirm the prediction, this would not (necessarily) have spelled the death of Einstein’s theory (since confirmed in many other ways). The failure could have resulted from the failure of any of the auxiliary hypotheses instead.
This is both why there is no such thing as a “crucial” experiment in science (you always need to repeat them under a variety of conditions), and why naive Popperian falsificationism is wrong (you can never falsify a hypothesis directly, only the H&A complex can be falsified).
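In bare propositional terms (my gloss on the point, not Sober’s notation), this is just modus tollens aimed at a conjunction:

$[(H \land A) \rightarrow O] \land \neg O \;\vdash\; \neg(H \land A), \quad \text{i.e.} \quad \neg H \lor \neg A$

A failed observation falsifies only the conjunction; logic alone cannot tell you whether to blame the hypothesis H or one of the auxiliaries bundled in A.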
What does this have to do with Ockham’s razor? The Duhem thesis explains why Sober is right, I think, in maintaining that the razor works (when it does) given certain background assumptions that are bound to be discipline- and problem-specific. So, for instance, Williams’ reasoning about group selection isn’t correct because of some generic logical property of parsimony (as Williams himself apparently thought), but because — given the sorts of things that living organisms and populations are, how natural selection works, and a host of other biological details — it is indeed much more likely than not that individual and not group selective explanations will do the work in most specific instances. But that set of biological reasons is quite different from the set that cladists use in justifying their use of parsimony to reconstruct organismal phylogenies. And needless to say, neither of these two sets of auxiliary assumptions has anything to do with the instances of successful deployment of the razor by physicists, for example.
So, Ockham’s razor is a sharp but not universal tool, and needs to be wielded with the proper care due to the specific circumstances. For skeptics, this means that one cannot eliminate flying saucers a priori just because they are an explanation less likely to be correct than, say, a meteor passing by (indeed, I go into some detail on precisely this sort of embarrassing armchair skepticism in Chapter 3 of Nonsense on Stilts). There is no shortcut for a serious investigation of the world, including the spelling out of our auxiliary, and often unexplored, hypotheses and assumptions.
—-
** Contra popular opinion even among philosophers, this is not the same as the Duhem-Quine thesis, which is a conflation of two separate but related theses, one advanced by Duhem (discussed here) and one — later on — by Quine (to be set aside for another discussion).

Wednesday, May 04, 2011

Talking to the media, a cautionary tale

by Massimo Pigliucci
So, a few days ago I and other members of New York City Skeptics, including Julia, were approached by an affable young journalist named Jonathan Liu, who writes for the New York Observer. He asked if he could join one of our meetup discussion groups — which happened to focus on the question of whether there is something special and unique about humans when compared to other animals — as well as our Drinking Skeptically event. He also followed up with both Julia and me asking us a number of questions via email about skepticism and our personal take on it.
We were all pretty pleased with the experience until the article actually came out. What follows is a brief analysis of Mr. Liu’s writing, to give you a flavor of how a journalist can easily distort things to suit whatever agenda he has, for whatever reason he happens to have it.
Liu starts out by writing: “It's a well-trod truism of folk science that you can’t prove a negative. But can you build a popular movement — or at least a well-received dinner party — around one?”
Well, it may be a truism of folk science, but it is wrong. There are plenty of situations where proving a negative is very easy. Not only do logic and mathematics abound with proofs of the impossibility of X (where X can be a conjecture, theorem or whatever), but there are a number of empirical negatives that are also easily provable. For instance, if I claim that I do not have a million dollars in my bank account, it is child's play to verify my (negative) statement in a matter of minutes.
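A textbook case, to make the point concrete (my example, not one from the exchange with Liu): the ancient proof that $\sqrt{2}$ is irrational is precisely a proved negative. There are no integers $p, q$ with $(p/q)^2 = 2$: assume such a pair exists in lowest terms; then $p^2 = 2q^2$ forces $p$ to be even, say $p = 2k$, whence $q^2 = 2k^2$ forces $q$ to be even as well, contradicting lowest terms.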
But never mind that. Contra Liu, the skeptical movement isn’t built around proving negatives. It is built around the positive value of critical thinking (which you would think journalists would make their own), and the simple Humean idea that “a wise man proportions his belief to the evidence.”
The reporter then mentioned that I was sitting in the middle of the dinner table, at the “Da Vincian midpoint” (as in The Last Supper), while managing to get my name misspelled throughout the article (Pigulucci), despite our email correspondence, which included the correct spelling. I guess the Observer is short-staffed in the editorial and fact-checking departments these days.
After having spent an inordinate amount of time complaining about the neighborhood of the restaurant (Kips Bay, midtown east Manhattan), and commenting on a cast I was wearing because of a recent surgery (and even on the exact type of pain killers I was using) — all clearly relevant to the issue at hand — Liu described me as approaching the Platonic ideal, “or at least the Wachowskian archetype,”** of a modern epistemologist of science. Okay, I can live with that.
Liu must have been desperately searching for mythological figures to analogize me with, because immediately afterward I was compared to Jesus (!!), who apparently used to gesticulate like an Italian (Liu did not disclose the source of this phenomenal piece of historical information, probably reserving it for another sensational article in the Observer).
Why the parallel with Jesus? Because apparently the people participating at the dinner discussion were (almost) my “disciples,” and Pigulucci [sic] “finds the idea of God and believers in God — not to mention homeopathy — insipid, violently ignorant and begging for forcible conversion.” Actually, I find those ideas anything but insipid, since they affect the lives of millions; they are indeed the result of ignorance, but the phrase “violently ignorant” is a category mistake (ignorance is not the sort of thing that can be violent, though it can lead to violent actions). As for forcibly converting homeopaths, or Christians, I haven’t the foggiest notion of where Mr. Liu got that one from. Perhaps Jesus told him.
The Observer piece then continues by labeling New York City Skeptics as a cult. Now a cult is often defined as “a relatively small group of people having religious beliefs or practices regarded by others as strange or sinister.” Hmm, let’s see. Well, NYCS is indeed a small group, and it probably isn’t impossible to find someone somewhere who considers our activities “strange” (though “sinister” would be pushing it). At least as strange as New Yorkers might find a group of people getting together for dinner and talking about things they are interested in — that is, not at all. But “having religious beliefs”? By what sort of distorted conception of religious belief does what Mr. Liu observed that night qualify as such? We are not told, though inquiring minds (apparently not those of Liu’s editors) wish to know.
For Liu “Skepticism starts with the feeling of being under siege by the nonthinking. It becomes Skepticism with the faith that there must be people out there who think like you do — that is, who think.” Well, that’s actually close to the mark, except that we like to think that we go by evidence not faith. But just as my spirits (metaphorically speaking) were beginning to lift a bit, I learned from Mr. Liu that skepticism has recently turned “[in]to something like a distinct, aggressive and almost messianic mentality.” Distinct, yes. Aggressive, maybe, though nothing compared to the aggressiveness of fundamentalists and homeopaths. Messianic? Here we go again with the projected Jesus complex!
Finally, perhaps remembering that the Observer allegedly caters to New Yorkers interested in the cultural activities of their city, Liu ends by asking (rhetorically): “how often does an enlightened New Yorker really have to come up against the messy particulars of superstition unless he’s somewhat titillated by seeking them out?”
Had he done his homework, he would have found out the answer quite readily: until the very same week of the meetup, New Yorkers had been treated to an inane message of the anti-vaccination movement, displayed in full color on the CBS billboard in Times Square. But that’s a fact that was much less interesting to Mr. Liu than the type of earring I wear (a black diamond, if you need to know).
——
** I had to look this up. I assume he meant it as in the Wachowski Brothers’ use of archetypal figures in the movie The Matrix. Right...

Tuesday, May 03, 2011

A pluralist approach to ethics

by Michael De Dora
The history of Western moral philosophy includes numerous attempts to ground ethics in one rational principle, standard, or rule. This narrative stretches back 2,500 years to the Greeks, who were interested mainly in virtue ethics and the moral character of the person. The modern era has seen two major additions. In 1785, Immanuel Kant introduced the categorical imperative: act only under the assumption that what you do could be made into a universal law. And in 1789, Jeremy Bentham proposed utilitarianism: work toward the greatest happiness of the greatest number of people (the “utility” principle).
These attempts, while worthy, have failed, if only because moral philosophers have tough standards — and for good reason. Each proposal has been thoroughly deconstructed and shown to have flaws. Many people now think projects to build a reasonable and coherent moral system are doomed. Still, most secular and religious people reject the alternative of moral relativism, and have spent much ink criticizing it (among my favorite books on the topic is Moral Relativism by Steven Lukes). The most recent and controversial work in this area comes from Sam Harris. In The Moral Landscape, Harris argues for a morality based on (a science of) well-being and flourishing, rather than religious dogma.
Harris’ book has drawn much criticism, most of it focused on his claim that science can determine human values. I do not wish to consider that here. Instead, I am interested in another oft-heard criticism of Harris’ book, which is that words like “well-being” and “flourishing” are too general to form any relevant basis for morality. This criticism has some force to it, as these certainly are somewhat vague terms. But what if “well-being” and “flourishing” were to be used only as a starting point for a moral framework? These concepts would still put us on a better grounding than religious faith. But they cannot stand alone. Nor do they need to.
The idea I would like to propose in this essay is that while each ethical system discussed so far has its shortcomings, put together they form a solid possibility. One system might not be able to do the job required, but we can assemble a mature moral outlook containing parts drawn from different systems put forth by philosophers over the centuries (plus some biology, but that’s Massimo’s area). The following is a rough sketch of what I think a decent pluralist approach to ethics might look like.
The most basic claim is the one made by modern utilitarians and virtue ethicists: that morality ought to function to increase the well-being (the state of being happy, healthy, or prosperous) and flourishing (to grow, to thrive) of conscious creatures and societies. Of course, this is open to some interpretation. What does well-being mean? What does it entail? Who gets what? In its purest form, you might say, it is still the mob rule of early utilitarian writing. But what if we fleshed out the framework with a couple of additional moral concepts? I would propose at least these three:
1. The harm principle bases our ethical considerations on other beings’ capacity for higher-level subjective experience. Human beings (and some animals) have the potential — and desire — to experience deep pleasure and happiness while seeking to avoid pain and suffering. We have the obligation, then, to afford creatures with these capacities, desires and relations a certain level of respect. They also have other emotional and social interests: for instance, friends and families concerned with their health and enjoyment. These actors also deserve consideration.
2. If we have a moral obligation to act a certain way toward someone, that should be reflected in law. Rights theory is the idea that there are certain rights worth granting to people with very few, if any, caveats. Many of these rights were spelled out in the founding documents of this country, the Declaration of Independence (which admittedly has no legal pull) and the Constitution (which does). They have been defended in a long history of U.S. Supreme Court rulings. They have also been expanded on in the U.N.’s 1948 Universal Declaration of Human Rights and in the founding documents of other countries around the world. To name a few, they include: freedom of belief, speech and expression, due process, equal treatment, health care, and education.
3. While we ought to consider our broader moral efforts, and focus on our obligations to others, it is also important to attend to our own quality as moral agents. A vital part of fostering a respectable pluralist moral framework is to encourage virtues, and cultivate moral character. A short list of these virtues would include prudence, justice, wisdom, honesty, compassion, and courage. One should study these virtues, strive to put them into practice, and work to be a better human being, as Aristotle advised us to do.
As you have likely noticed, this pluralist approach does not include all moral theories (I mean, did you really expect me to bring up divine command theory?). The most notable omission is consequentialism, insofar as it is distinct from utilitarianism. I’ve previously written about that here. As it relates to this essay: I think we should indeed try to imagine, and then achieve, the sort of outcomes we want. But consequences often do not match one’s original intent. Furthermore, our judgment of the consequences of acts depends on our prior conceptions of harm, rights and virtue.
Still, some say that irreconcilable tensions can arise between the different conceptions of ethics, which may mean that ethical pluralism is doomed. At least two examples are discussed in chapter 8 of Michael Sandel’s Justice, as they relate to the relationship between justice and law (an unavoidable issue when it comes to ethics). The first example highlights a potential tension between the utilitarian and virtue views. Aristotle believed that law was meant to inculcate moral virtue and make good citizens. What then about well-being? Aren’t these two different approaches to ethics? Not exactly. Law can be about instilling moral worth as a necessary step toward increasing our well-being, the overarching function of morality. The second example pits virtue ethics against rights theory. Aristotle believed the purpose of law was more to form an upstanding populace, and less to create rules and rights. But it goes both ways: forming good citizens with upstanding character requires a certain level of protection of rights; and cultivating virtue is a good way to make sure we support the correct rights.
A third potential tension has been mentioned in criticisms of Harris’ book. It is between utility and rights theory. Imagine if scientific data proved that slavery leads to greater societal flourishing. Would slavery then be moral? No. The point of a pluralist approach is that you do not rely on a single universal rule. Slavery might increase the collective well-being, but it would do so by limiting an essential right. And you don’t take away people’s rights just because the majority would be happier that way.
Indeed, each aspect of a pluralist ethical approach is intricately tied to other aspects. The way to increase well-being and flourishing is to feel obligated to accord conscious beings a certain level of respect and rights, to not cause harm to them without very good reason, and to actively work toward building moral character so that these promises are fulfilled.
I think most people already are ethical pluralists. Life and society are complex to navigate, and one cannot rely on a single idea for guidance. It is probably accurate to say that people lean more toward one theory, rather than practicing it to the exclusion of all others. Of course, this only describes the fact that people think about morality in a pluralistic way. But the outlined approach is supported by sound reasoning — that is, unless you are ready to entirely dismiss 2,500 years of Western moral philosophy.