About Rationally Speaking

Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Sunday, April 29, 2012

Report from the Consilience conference, part III

by Massimo Pigliucci

[This is a report from the consilience conference held at the University of Missouri-St. Louis. Part I is here, part II here]

Last day of the consilience conference! We began with David Sloan Wilson, with whom I had just had a spirited (and constructive) discussion about how to measure individual and group cultural selection quantitatively (he admitted it hasn’t been done, yet...). His topic was “Using evolution to improve the quality of life.” According to David, evolutionary principles can be used to improve our quality of life at the level of cities and neighborhoods. His Evolution Institute is a think tank with the explicit goal of connecting evolutionary ideas to public policy.

Wilson sees symbolic thought as an inheritance system, which is necessary to get a theory of cultural evolution off the ground if one excludes fuzzy concepts like memes (which he apparently is inclined to do). He proposes the idea of a “symbotype”-phenotype relationship similar to the standard genotype-phenotype relationship in evolutionary biology. (Though it has to be noted that genotype-phenotype mapping is one of the most difficult problems facing evolutionary biologists, and I doubt that it’s going to be any easier to operationalize the concept of symbotype-phenotype mapping.)

David’s example was a study of prosociality in Binghamton, NY neighborhoods. He geo-mapped individuals who had been scored on a measure of prosociality, and then ran statistical analyses exploring the social correlates of prosociality across the territory. Prosociality turned out to vary over very small spatial scales. The survey found a strong correlation between the prosociality of individuals and that of their social environment (i.e., the more socially supportive the environment, the more prosocial people are). Multiple forms of social support contribute to individual prosociality, and the behavioral changes of adolescents who move within the city are taken by Wilson to demonstrate phenotypic plasticity. (I actually worked on phenotypic plasticity as it is understood in biology, and I think this is a somewhat metaphorical use of the term.)

This was all very interesting, and even useful from a practical (policy) perspective. But it is social science, the results are unlikely to surprise a social scientist, and the e-word did not add anything at all, as far as I could see, to the whole picture. Evolution has to do with fitness-related variation and inheritance. There were no measures of fitness in the data, and it’s hard to see in what sense an individual changing behavior from one year to another (e.g., moving to a different neighborhood) counts as “inheritance.” But maybe I missed something crucial, somewhere.

Next we moved to Barbara Oakley, on cold-blooded kindness: insights into pathological altruism. This is a situation in which, although the underlying motivation is to help others, the altruistic behavior has irrational and substantially negative consequences for both the recipient and the self. An example presented by Oakley was of a woman who married a drug addict and then killed him in self-defense. (More on this below.)

Oakley comes to this as an engineer, and she seeks analogies between engineering and social science principles. For instance, she takes the idea that local decreases in entropy must be offset by a broader increase in entropy in the environment (which is a fundamental principle of physics) and translates it into the idea that even good deeds can carry negative consequences in the human realm. This, honestly, seems to be a stretch, and not a novel insight, given that both social scientists and philosophers have explored the idea of consequentialism in detail, and without needing to reference entropy.

(I’m beginning to think that it would have been nice to have actual social scientists, historians and assorted humanists at this conference, just to see how they would have reacted to biology-based criticism of their fields. Another time, maybe.)

Back to the story of the woman who killed her husband. Apparently, it wasn’t self-defense at all; it was premeditated (he was shot in the back, and she had pre-dug his grave). She was also a sadomasochist who had complained in the past that her husband refused to hurt her. Oakley contends that the fact that reporters for the National Enquirer (where she originally heard of the story) and others felt sympathy for the woman and bought into her side of the story (though apparently neither the prosecutor nor the jury did) reflects a tendency toward excessive (pathological) altruism. Again, that seems a stretch. First, if we were given the actual facts, instead of the National Enquirer version, I bet very few people would have felt compassion for the woman. Second, this case needs to be understood against a cultural background — which Oakley did refer to — of a number of stories of battered women who do act, truly, in self-defense. None of the above, as interesting as it was, had much to do with consilience, as far as I could tell.

Next: Jonathan Gottschall on the storytelling animal, and how stories make us human. We all know that we like fiction and stories, but we are not aware of just how much. Storytelling is about someone else, in a sense, taking over emotional control of your self for a while. We don’t leave storytelling behind when we go to sleep: dreams, whatever actual physiological function they have, are fragments of stories that the brain tells itself. And then there is daydreaming, in which we apparently spend a large chunk of our day (this includes, however, rethinking past actions and situations, or imagining how to handle likely future ones).

The left hemisphere of the brain is known to be a storyteller, in charge of making sense of everything that happens to us. The downside is that when it doesn’t have reliable information the left “interpreter” will make up stories anyway. Classical experiments show that even simple moving geometrical shapes on a screen are interpreted by many people as agents interacting with each other, driven by motives. (To be fair, in the example Gottschall showed on screen, the shapes were moving around in an obviously non-random fashion. Would people make up stories if the movements were random?)

Fictional stories have surprisingly wide-ranging effects, as for instance in the case of the (alleged) role of Uncle Tom’s Cabin in the events that led to the American Civil War. Or D.W. Griffith’s The Birth of a Nation, credited with (temporarily) reviving the then moribund Ku Klux Klan.

Why storytelling? It could be the result of sexual selection, or a byproduct of the way the human mind works, and there are other possibilities. Gottschall seems to favor a pluralist answer, with multiple causes for the origin of storytelling propensities in humans. The obvious question is how one would go about testing these hypotheses.

Fiction has a “universal grammar”: character + predicament + attempted solution. Right, and not really surprising. Of course this tells you close to nothing about individual stories and how they vary with culture and time, but point taken. If storytelling has a function, it may be a sort of “flight simulator” of the mind, through which we practice for life (do we need to practice possible encounters with zombies?). There is some preliminary evidence, apparently, that people who enter into fictional simulation more often do better in real life.

The first speaker of the afternoon was Henry Harpending, on kinship within populations: whole genomes as green beards. The green beards refer to Dawkins’ (hypothetical) example of people with green beards being inclined to help others sporting the same trait, a process undermined by how easy it is to fake a green beard. This is obviously relevant to the idea of kin selection-mediated altruism and how it is vulnerable to cheaters. Harpending asked how much evidence we have that mechanisms of kin recognition (to counter cheaters) have evolved in humans. Not much, apparently.

Harpending went on to compare two versions, from research in the ‘70s, of “Mr. Natural”: on the one hand the cooperative and peaceful bushmen of the Kalahari desert; on the other hand the Yanomamo of the Amazon, fierce and aggressive. The question, naturally, is how two such different cultures can both represent “Mr. Natural.” The most recent take is that there is no such thing as Mr. Natural, and that people’s ways of living change rapidly from time to time and culture to culture.

After that excursion, Harpending returned to kinship, and how these days we can actually measure it via Single Nucleotide Polymorphisms (SNPs), an increasingly large database of which is becoming available for humans through projects like 23andMe (a commercial personal genomics enterprise). The result seems to be that there is not enough dispersion of kinship values in, say, the modern French or Japanese populations to make it worthwhile to deploy cryptic kinship detection mechanisms. However, this does not hold for small human populations, like the Surui of Brazil. In that case there is a significant spread of kinship coefficients.
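
To make the notion of “dispersion of kinship values” concrete, here is a minimal sketch, my own toy illustration rather than Harpending’s actual method or data, of a common SNP-based relatedness estimator. The genotype matrix, sample size, and resulting numbers are all invented:

```python
# Toy sketch (not Harpending's analysis): estimate pairwise kinship from
# a small, made-up SNP genotype matrix and look at the spread of the
# estimates. Genotypes count copies (0/1/2) of the minor allele.
def pairwise_kinship(genotypes):
    """GCTA-style relatedness: sum((gi-2p)(gj-2p)) / sum(2p(1-p))."""
    n = len(genotypes)      # individuals
    m = len(genotypes[0])   # SNPs
    # estimated allele frequency at each SNP
    p = [sum(row[k] for row in genotypes) / (2 * n) for k in range(m)]
    den = sum(2 * p[k] * (1 - p[k]) for k in range(m))
    coeffs = []
    for i in range(n):
        for j in range(i + 1, n):
            num = sum((genotypes[i][k] - 2 * p[k]) * (genotypes[j][k] - 2 * p[k])
                      for k in range(m))
            coeffs.append(num / den)
    return coeffs

# Hypothetical mini-sample: 4 individuals x 5 SNPs
g = [[0, 1, 2, 1, 0],
     [0, 1, 2, 0, 0],
     [2, 0, 0, 1, 2],
     [1, 1, 1, 1, 1]]
ks = pairwise_kinship(g)
mean = sum(ks) / len(ks)
spread = (sum((k - mean) ** 2 for k in ks) / len(ks)) ** 0.5  # dispersion
```

In a large panmictic population the spread of such coefficients collapses toward zero; in a small, structured one it stays wide, which is the contrast Harpending was pointing at.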

Again, while much of this was interesting, it wasn’t at all clear to me what it had to do, if anything, with the theme of the conference. It isn’t unusual for people to be invited to a conference with a central theme, show up and then talk about whatever struck their fancy most recently. But there seemed to be a particularly high occurrence of this at the consilience conference.

The next speaker was Pascal Boyer, on “the dark matter of human history, the perils of cognition blindness.” Social science “that matters” needs to address questions like why people engage in warfare, why there is religion, and so on. Boyer is explicitly using the term consilience as synonymous with integration of disciplines, which is, again, different from E.O. Wilson’s use of the term.

Boyer uses an analogy with dark matter in physics to signify that there is quite a bit in social science that does not meet the eye, having to do with the neuro-cognitive processes underlying conscious mental states, motivations, social interactions, etc. Parts of human nature are not accessible to conscious inspection, and cognitive awareness of them requires effort. Notwithstanding this, intuitive (or “folk”) sociology takes an intentional stance toward groups, attributing states (memories, beliefs) to them, so that their behavior can be interpreted as goal-directed.

For Boyer intuitive sociology may be adaptive in the context of our social evolution, as it lets us trace what happens around us to the intentions of other people. However, intuitive sociology fails when it is applied to, for instance, understanding the economy (“folk economics”). An example, apparently, is the idea of rent controls. They make sense from the point of view of folk economics, because the landlord and the lodger are treated as intentional agents; they don’t make sense for economists, because offers depend on preferences and the availability of means in the relevant population. (This seems to entirely ignore the fact that rent control measures are usually implemented not to solve an economic problem, but to minimize the negative social side effects of purely economic “solutions.”)

Boyer also criticized political science for having an empirical basis but no theory, resulting either in formal modeling that does not make contact with empirical reality or in the study of political preferences as brute facts.

After a long bit on warfare and ethnic conflict, where he stressed the same contrast between “folk” and more sophisticated theories of what goes on, Boyer concluded by stating that there is no such thing as religion. We expect religions to have doctrines, beliefs, personnel (priests, shamans, etc.), and domains of competence, such as survival after death, morality, and so on. But in reality, in most cases at the tribal level — argues Boyer — there is no doctrine at all, the personnel is varied and ad hoc, and there is no unified domain of religion. It seems to me that here, as in other examples during the talk, there is a confusion between the origin of a given phenomenon on the one hand and its development and maintenance on the other. Religions may have originated without the characteristics of the modern stuff, but this neither licenses the bold claim that religion “doesn’t exist,” nor does it imply that things like modern religions’ doctrines, beliefs, personnel, etc. don’t need to be understood on their own terms. Boyer is correct, however, in separating the issue of religion as a type of social organization typical of some human societies from supernaturalism, which seems to be a human universal. A consequence he derives is that it makes no sense to think of religion as an adaptation, as it is far too much of a latecomer in human evolution.

The take home message is that the social sciences have been disappointing because they have not addressed the big questions, leading to no cumulative progress. (The latter, I think, is a bit uncharitable.) Things went wrong, on this view, because of the lack of vertical integration, in this case a reduction to neuroscience. Again, Boyer seems to be making a couple of mistakes common at this conference. First, integration and reduction are different things. Second, reduction does not eliminate the higher level phenomena, it only helps explain them. So, I think, social science should still focus on high level targets, while also integrating as much as possible notions from other disciplines, including but not limited to neuroscience.

And last: Patricia Churchland on how the mind makes morals. Darwin (together with Hume and Aristotle) thought that our moral sense is the result of social instincts, habits and reason. Churchland’s basic hypothesis is that sociability is of value for social mammals and it evolved by natural selection; its neural “hub” is the hormone oxytocin (involved in attachment, bonding); this is augmented by the reward system in the brain; and the whole thing is elaborated by prefrontal structures in the brain.

Attachment and trust are the platform for moral values. Social problem solving is mediated by the enlarged prefrontal cortex, which overrides, represses, calculates and plans. Refreshingly, Churchland isn’t trying to “reduce” culture to neurobiology, she is after the much more sensible goal of understanding the neural structures that make it possible for us to have culture to begin with.

She cites Eleanor Rosch’s work on the “radial” structure of concepts, with a prototype at the center and fuzzy boundaries. (This is similar to family resemblance in Wittgenstein, which a philosopher like Churchland should have noted.) Social categories are also radial, including the category of “moral.” Interestingly, artificial neural networks “learn” to categorize by way of prototypical structures and fuzzy boundaries. The idea, of course, is that the brain is relevantly similar to neural networks, and likely learns in a similar fashion (which is interesting, but let’s not forget that the brain — unlike neural networks — comes with a great deal of genetic-developmental prewiring).
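
The prototype-plus-fuzzy-boundary idea can be sketched in a few lines. This is a toy of my own devising, not Rosch’s or Churchland’s formalism; the two-dimensional feature vectors and the exponential decay are arbitrary illustrative choices:

```python
# Toy sketch of radial categories: each category is summarized by the
# mean of its examples (the prototype), and membership falls off
# smoothly with distance from that prototype, so boundaries are fuzzy
# rather than sharp.
import math

def prototype(examples):
    """Mean vector of a category's examples."""
    dims = len(examples[0])
    return [sum(e[d] for e in examples) / len(examples) for d in range(dims)]

def membership(item, proto, scale=1.0):
    """Graded membership in (0, 1]: 1 at the prototype, decaying with distance."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(item, proto)))
    return math.exp(-dist / scale)

# Hypothetical 2-D feature space (say, two behavioral ratings)
helpful = prototype([[0.9, 0.8], [0.8, 0.9], [1.0, 0.7]])
selfish = prototype([[0.1, 0.2], [0.2, 0.1], [0.0, 0.3]])

item = [0.5, 0.5]   # a borderline case: partial member of both categories
m_help = membership(item, helpful)
m_self = membership(item, selfish)
```

The borderline item gets non-zero membership in both categories, which is the point: there is no sharp edge, only distance from a prototype.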

One final comment on the entire conference: I got the impression that a number of participants did not actually read Wilson’s Consilience, at least not recently (several have admitted as much to me). Many (though not all) seemed to be convinced that the book promotes a positive and multi-directional exchange between disciplines, particularly crossing the science-humanities divide. It does nothing of the sort. It is a clear attempt at a reductionist program of subsuming the humanities into the sciences, and particularly biology, though it isn’t obvious why Wilson didn’t go all the way and subsume biology itself into physics. Perhaps because he’s a biologist?

Interesting footnote: one of the conference attendees heard that I was blogging about it, and objected to it, on two grounds: first, I am bringing to an outside forum discussions and opinions that were not meant for that forum; second, I get to editorialize and comment about what was said at the conference, while the other participants can’t.

My response is that conferences of this type are public events (registration was open to everyone), and that bringing at least a flavor of the proceedings to a wider audience is a good thing (at least one other participant was tweeting about it, by the way). As for commenting, well, naturally I am the one writing this, so you are getting my take on it. Presumably, the intelligent reader is aware of this and will take it into account in forming her own judgment. Moreover, once my thoughts are out in the blogosphere, other participants can either comment on them directly or use other forums to respond to and/or expand upon them.

Still, the question does raise interesting issues concerning the ethics of blogging from academic conferences (or in other situations), and I’d be interested in hearing people’s thoughts on this.

Friday, April 27, 2012

Report from the Consilience conference, part II

by Massimo Pigliucci

[This is a report from the consilience conference held at the University of Missouri-St. Louis. Part I is here]

The second day of the consilience conference started with Peter Turchin on cliodynamics (after Clio, the muse of history), or what he thinks may become a science of history. Peter argues that historians are incorrect in thinking of history as part of the humanities, and certainly wrong in having abandoned the search for overarching explanations of historical events. The author’s suggestion is that historians actually do deploy general frameworks for explanations, but do so implicitly, which screens such general frameworks from critical and quantitative analysis.

Peter proceeded to examine one example: the fall of the Roman empire. A German historian at some point counted 210 different explanations for that historical event. (Hmm, that seems hard to believe, I would like to see in what sense these are actually “different,” there just doesn’t seem to be that much room in logical space.) The suggestion is that this proliferation is due to the fact that historians are unable to eliminate hypotheses that don’t work, thereby leading to an ever increasing proliferation of conjectures not followed — to use Popper’s phrase — by any refutation.

Turchin went on to show some interesting data quantifying gridlock in the American Congress over the past several decades (e.g., number of judges confirmed, number of filibusters, etc.). The data clearly show a huge increase in polarization after 1970. He points out an apparent cyclical process of party polarization between 1800 and today, suggesting that if it is cycling, there may be an underlying unifying explanation for it. I think the data are fascinating, but the time series is far too short to pinpoint cycles, and historians (and, more pertinently, political scientists) would probably be able to argue that the factors explaining the ups and downs in political polarization were different for different time periods (e.g., the Civil War was significantly different from the current corporate-ideological wars). It also strikes me as strange to argue that historians don’t make use of quantitative data when they are available and pertinent, and it seems to me that a likely problem with the cliodynamics approach is that the quantitative data (unique historical sequences) often, though not always, will underdetermine possible explanations. The same thing happens in other quantitative social sciences, for instance economics. Again, this doesn’t make a quantitative approach useless, but it also imposes strong constraints on its utility.

Turchin proposed another example, this time based on a much longer time series, quantifying social instability in France between 800 and 1700. The graph shows a number of peaks and valleys. Peter sees four “waves” of instability: the Carolingian break-up, the Early Medieval crisis, the Late Medieval crisis, and the religious wars of the 17th century (he presented a similar graph for ancient Rome, with three periods of instability). While he talks about recurring patterns, I am reminded of similar graphs by paleontologists tracking periods of mass extinction. For a while, it was fashionable to look for periodicities and a single underlying explanation (like a recurring small star companion of the Sun), but they didn’t work out, and we now think that each mass extinction has its own ecological and/or astronomical explanation.
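
For what it’s worth, the first step in any such analysis, picking out candidate “waves” from a noisy series, is straightforward to sketch. These are toy numbers, not Turchin’s data, and the threshold is an arbitrary choice:

```python
# Toy sketch: flag "waves" of instability as local maxima that stand
# above a threshold -- the kind of preprocessing step needed before any
# talk of cycles or periodicity can even begin.
def peaks(series, threshold):
    """Indices of interior local maxima strictly above threshold."""
    return [i for i in range(1, len(series) - 1)
            if series[i] > threshold
            and series[i] > series[i - 1]
            and series[i] > series[i + 1]]

# Hypothetical instability index, one value per half-century
instability = [1, 2, 5, 3, 1, 2, 6, 4, 2, 1, 3, 7, 2]
wave_centers = peaks(instability, threshold=4)   # -> [2, 6, 11]
```

Of course, finding peaks is the easy part; showing that their spacing reflects a single recurring mechanism rather than a run of unrelated crises is exactly where, as with the mass-extinction literature, such programs tend to founder.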

Next we had Christopher Boehm, on social selection and the notorious free rider problem. George Williams set the stage for the problem when he defined free riders as cheaters who defeat altruists within a group. Boehm is interested in a particular kind of free rider, the bully. Bullies are found in any social dominance hierarchy, from chimpanzees to modern humans. He compared 150 modern hunter-gatherer societies that are similar in structure to those of Pleistocene humans. Of these, 49 have data coded for sociopolitical variables.

These societies have a home base with centralized meat provisioning, there is male-female division of labor, and they cooperate altruistically to share meat. Importantly, subordinate band members form coalitions to hold down bullies, resulting in an egalitarian society. The group acts by consensus in sanctioning deviants, with sanctions ranging from social pressure and criticism to shaming, ostracism, ejection and even capital punishment.

The key to moralistic social control is gossip, which permits private evaluation of deviants and allows consensus about a deviant to develop, which leads to group action. The targets of social control are primarily bullies, thieves and cheats, and people engaging in sexually unacceptable behavior. Interestingly, bullies are executed in these societies far more often than cheaters or sexual transgressors. This, according to Boehm, indicates that bullies are the primary type of free rider in the mind of Pleistocene-like hunter-gatherers. Only about 9% of sanctions are irreversible (death, permanent expulsion), with reversible sanctions allowing deviants to reform with only a partial loss of fitness.

Boehm considered six theories of altruism:

1. Mutualism (you scratch my back, I scratch yours): the payoff is so immediate that no cheating is possible, so it raises no free rider problem.
2. Reciprocal altruism (long-term mutual back scratching): subject to the free rider problem.
3. Group selection (between-group effects trump within-group effects): subject to free riding.
4. Piggybacking n. 1 (misplaced nepotism; bonding and generosity transferred to non-kin as a maladaptive effect): susceptible to some free riding.
5. Piggybacking n. 2 (cultural learning is good for you, but one of the things your culture teaches you is to be altruistic, and if you listen to it, you’ll lose out): subject to deceptive free riding, for instance by bullying.
6. People’s choice as an agency of selection: this can take place by reputation or by collective sanctioning.

I’m not sure in which sense these count as “theories”; they seem to me rather to be different scenarios, actualized to different degrees in different societies.

The third speaker of the day was Joseph Carroll (the organizer of the conference), on Graphing Jane Austen and the evolutionary basis of literary meaning. He set up a web questionnaire on 2000 characters from 202 British novels of the 19th century, and he sent requests for comments to a number of literary scholars encouraging them and their students to provide feedback. This approach strikes me as analogous to, say, “experimental philosophy’s” surveys of people’s attitudes toward issues like free will or consciousness. I find the latter to be somewhat interesting psychological surveys of how people think about philosophical issues; I do not think they have anything to do with philosophy, though. (Here, of course, a difference is that Carroll and his collaborators asked professionals and their students, not lay people.)

The point was to gain insight into the ethos of individual novels (based on how people perceived the different characters), and then to be able to generalize to the entire Victorian period, once enough novels had been so analyzed. The underlying reasons for the study were to demonstrate that major components of literary meaning can be reduced to the elements of human nature (reduced? Built upon, perhaps?); to test empirically the hypothesis that agonistic structure reflects evolved social psychology (evolved here means biologically, since the author deferred to evolutionary psychology); and to generate new empirical knowledge and begin a process of knowledge acquisition in literary study.

(Full disclosure, the author took pot shots at my own, somewhat skeptical, presentation from the first day, incidentally deeply mischaracterizing it. Oh well.)

The model of human nature used in the study included motives (impulses, instincts, goals of action), mate selection, personality differences, and emotions (of the readers, not the characters). A factor analysis highlighted “dominance” (quest for power, wealth and prestige, and a tendency not to help non-kin) and “nurture” (negatively correlated with short-term mating and positively with helping offspring and kin). This is beginning to look like the famous scene in Dead Poets Society in which Robin Williams graphs the characteristics of a poem only to show his students how silly that exercise is, missing the point of literature. But Carroll was very serious about this. (Another analogy might help: most word processors can collect data on the structure of your writing, such as average sentence length, average word length, and so on. I seriously doubt, however, that a good English teacher needs that sort of quantification to tell you whether your essay is worth a crap.)
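
To illustrate what a factor like “dominance” rests on, here is a toy computation, with ratings I invented rather than Carroll’s data, of the kind of negative correlation that would load power-seeking and helping-non-kin onto opposite ends of one axis:

```python
# Toy sketch: the raw material of a factor analysis is a correlation
# matrix over rated traits. Here, two invented trait ratings for six
# hypothetical characters correlate strongly negatively, the pattern
# that a "dominance" factor would summarize.
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical reader ratings for six characters (scale 1-7)
power_seeking  = [6, 7, 5, 2, 1, 2]
helping_nonkin = [2, 1, 3, 6, 7, 5]
r = pearson(power_seeking, helping_nonkin)   # strongly negative
```

A factor analysis then does little more than find the axes that best summarize many such pairwise correlations at once, which is worth keeping in mind when judging how much interpretive weight the resulting “factors” can bear.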

Mate selection was described by other factors, including extrinsic attributes (power, prestige, wealth) and intrinsic qualities (reliability, kindness, intelligence). Not at all surprisingly, male protagonists are interested in the physical attractiveness of prospective mates, not in their intrinsic or extrinsic qualities, while the reverse is true for female protagonists. In terms of the big five personality traits, the good guys and gals score high on positive attributes and low on negative ones, while the reverse is true for the bad guys and gals (again, surprise, surprise!).

The data apparently disprove the following claims: (i) meaning in literary texts is undecidable; (ii) the novels center thematically on a power struggle between the sexes; (iii) the novels merely push pleasure buttons. Of course, the first “theory” is postmodern, and it is not difficult to argue that it is crap; the third one is by Steven Pinker, who is not a literary critic. The second would deserve more discussion, but Carroll went too quickly through the data apparently showing that the novels do not display a power struggle between the sexes. He prefers an interpretation according to which the novels instead show a struggle against cheaters and bullies. Surely both elements are present?

Finally Carroll offered a few remarks trying to tie his results to recent views in evolutionary psychology, according to which gossiping is important, and members of egalitarian societies are preoccupied with keeping bullies and cheats under control. Ironically, I think, this type of approach typifies a sort of mirror image of postmodernism-deconstructionism: according to the latter, the meaning of a text is forever fluid and undecidable; according to the former, Pleistocene biology pretty much determines what is going on in literature. The multifarious and ever changing dynamics of human culture that make Jane Austen and other authors such a pleasure to read get lost in both cases.

At the risk of being a bit simplistic here, the first question that comes to mind is whether one needed to do all this work to arrive at the conclusion that novels are written about the dramatic aspects of human foibles. Moreover, how does this sort of approach explain the differences between, say, Victorian and late 20th century novels, or between novels in the Western canon and Japanese ones? Presumably “human nature” is the same across times and cultures (at least according to evolutionary psychologists), so what gives? Now, of course one can legitimately engage in the kind of data gathering that Carroll and his colleagues carried out. And one can obtain stats and graphs as a result. But I honestly didn’t learn anything new about Victorian novels throughout the presentation, not to mention that I doubt the memory of those stats and graphs will make it any more pleasurable to read Jane Austen from now on (unlike, say, a perceptive review by a good literary critic).

After lunch, we resumed with Michael Rose, on Darwinian evolution of free will and spiritual experience. His approach to these issues is based on evolutionary game theory as it pertains to behavioral strategies for social life. This consists in exploring quantitatively the dependence of fitness on strategies that specify an animal’s behavior in case of conflict.

Michael stressed that human intelligence must have evolved (and is currently maintained) by intense directional selection for increased brain size, otherwise natural selection would shrink our brains quickly. That is, there is a high cost to being smart. He directed a pretty good jab at other speakers by pointing out that detailed reconstructions of what early humans were doing in the Pleistocene’s savannas amount to little more than “science fiction” (ouch).

Hypotheses to explain human intelligence include the use of technology, but it turns out that the tools employed by early humans (already equipped with large brains) were not much more sophisticated than those used by chimpanzees (which don’t need big brains). The currently fashionable hypothesis is the Machiavellian one: that we evolved intelligence to deal with complex social situations. Again, though, early human societies were not that different from those of the Australopithecines, which did not need large brains to deal with their societies.

Another possibility is that selection favored a combination of tool use (particularly weapons) and social intelligence, which can trigger an arms race leading to rapid and sustained evolution for larger and larger brains. Michael suggested that some (few) human behaviors, such as incest, can be analyzed by the methods used to study animal behavior. But that doesn’t get you very far from pop evolutionary psychology (hence the usefulness of game theory as an alternative approach). Incidentally, what he calls “free will” isn’t the metaphysical concept, but rather the social science idea that the human mind is not (much) constrained by past genetic evolution: that it’s a general purpose computer capable of making decisions that sometimes violate fitness requirements.

Rose’s suggestion is that this feeling of free will hides the fact that our computer brain does have anatomical compartments with specific functions, such as the prefrontal cortex, which calculate (subconsciously) the possible fitness outcomes of our actions. Evidence for this comes from neurobiological studies of what happens when the Darwinian fitness calculator is damaged: we get a host of psychopathic behaviors (1-2% of the population, committing 40-50% of total crimes). These individuals have a high mortality rate, and given their high incarceration rates, this behavior comes with a high fitness cost.
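
A quick back-of-the-envelope check on the figures quoted in the talk (taking the midpoints of the ranges, which is my assumption, not Rose’s calculation):

```python
# If roughly 1.5% of the population commits roughly 45% of crimes, how
# much more crime-prone is that group per capita than everyone else?
pop_share   = 0.015   # midpoint of the 1-2% figure
crime_share = 0.45    # midpoint of the 40-50% figure

rate_group = crime_share / pop_share              # relative per-capita rate, group
rate_rest  = (1 - crime_share) / (1 - pop_share)  # relative per-capita rate, rest
ratio = rate_group / rate_rest                    # roughly 50-fold overrepresentation
```

If the quoted percentages are right, the implied per-capita overrepresentation is on the order of fifty-fold, which makes the claimed fitness cost of the damaged “calculator” at least arithmetically plausible.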

Michael then moved on to spiritual experiences. The neurobiological evidence suggests that we have this “second executive function” that subconsciously monitors our fitness functions (and is occasionally subservient to our conscious self). This second executive function is the source of the occasional awareness of some Other interfering in our lives. From there the step is (allegedly) brief to having spiritual experiences and hence formulating the idea of gods. But, one wonders, how frequently do people actually have spiritual experiences? Enough to justify widespread belief in god?

Next to last was David Linden, on what notions from neurophysiology are useful if one wishes to connect evolution and culture. (Hmpf, another talk without slides.) Neurons are slow and unreliable, and yet they make possible “clever us.” This was achieved by enormously expanding the number of neurons during hominid evolution, something that had to happen without any radical structural redesign. Indeed, only the rough structure of the brain is specified genetically; the rest is the result of wiring induced by experience, beginning early on in utero. The rest of this talk was an informative, if didactic, overview of neurophysiology with comments on how the way the brain works affects our behavior in the world.

Finally, we get to Brian Boyd with a talk on “experiments and experience.” Art is like science in that it experiments with experience, yet literature doesn’t aim at building models, but rather at engaging people with the human experience. Evolution and cognition may “bear on” literature by way of application of general principles like cost-benefit analysis and pattern detection. Boyd talked about different levels of explanation, from the details of a particular work to general principles applying to human nature and derived from social evolution. The idea is that evolution bears on the broader levels of explanation, while attention to specific cultural-historical and even psychological conditions becomes more relevant at narrower levels of explanation.

Thursday, April 26, 2012

Report from the Consilience conference, part I

by Massimo Pigliucci

I am in St. Louis these days, where the University of Missouri has organized a three-day, 19-speaker conference on “consilience,” or the unity of knowledge, in the somewhat idiosyncratic interpretation of E.O. Wilson in his popular 1998 book. Indeed, the proceedings started with a keynote by Wilson himself, as sharp and as sprightly as ever. The danger with this sort of conference is that it either becomes a predictable and somewhat uncritical celebration of a central figure or idea (in this case, Wilson), or evolves into a hodgepodge of loosely (if at all) related talks with only a vague central theme. Nevertheless, I agreed to speak here because I thought the conference was a good idea and because my fellow speakers are likely to provide interesting food for thought on a broad array of issues. So, let’s get started.

Wilson’s keynote covered a lot of known material, from the evolution of eusociality (it’s rare, and yet has huge consequences for the species that crosses the threshold from partial to eu-sociality) to the basic steps of the evolution of humans (bipedalism, larger brains, etc.). His overarching theme, however, was that there are three fundamental questions that neither religion nor philosophy can answer, and that science has begun to tackle: Where do we come from? What are we? Where are we going? Somehow Wilson thinks this is equivalent to asking about the meaning of life, though I submit that’s a bit of an unjustified leap (surely what meaning we construct or attribute to our life can be informed by those broad questions, but the meaning of human life being local and personal, those questions hardly take center stage and are more likely simply interesting background).

The crux of Wilson’s talk was that human eusociality appeared as a result of group selection, and that individual and group selection are constantly in unstable tension, which explains why it is in the nature of humanity to always struggle between what he called (borrowing from David Sloan Wilson, no relation) “sinful” (i.e., selfish) and “virtuous” (i.e., cooperative) drives. Wilson also added a provocative note — on which he did not elaborate — about a paper he co-authored with two mathematicians (published in November 2010 in Nature) where he argues that the concept of inclusive fitness (and hence of kin selection) is mathematically incoherent. Which pretty much would demolish, if true, an established explanation for the evolution of altruism and leave the field entirely to a newly resurrected group selection.

I cannot comment on the kin selection issue (anyone?), and I am generally sympathetic to the idea of multi-level selection. But Wilson provided no evidence or particular reason to accept the idea that group selection played a crucial role in the evolution of human eusociality. Furthermore, it seems obvious to me that to label some behaviors as good or bad (virtuous or sinful) on the basis of which selection mechanism (allegedly) evolved them is a flagrant violation of the ought/is divide (which, I know, is not impermeable according to people like Quine, but I am a Humean on this...).

Besides, even if we take Wilson’s highly speculative scenario at face value, we are immediately confronted by the problem — of which Wilson himself seemed aware — that group selected “virtue” only applies to members of the in-group, thus generating a variety of nasty inter-group behaviors, including xenophobia and, of course, war. And here, I think, is where biology reaches a dead end, providing no answer to some of the broadest human problems, which are, somewhat ironically, handed back to the humanities (including philosophy, political science, literature, history, and possibly even religion!). A colossal failure of consilience? (Incidentally, Wilson left the conference immediately after his talk. I understand, given his age, but I thought it was a bit rude, considering that the whole event is supposed to be a commentary on his work. Oh well, I guess he won’t see my critical analysis of his take on consilience.)

The second talk of the first day was by John Hawks, on behavioral implications of ancient genomes. Lots of interesting stuff here on the comparative genomics of humans, Neanderthals, other hominids and the broad array of contemporary primates. I do not have much to say about this talk, however, because — although packed with fascinating suggestions about Neanderthals in particular — it seemed to me to have little to do with the theme of consilience (see the second danger mentioned above for this kind of conference). Indeed, the speaker must have been aware of it, since his only reference to the humanities was a half-joking remark to the effect that fiction writers need to drop their stereotype of Neandys as dumb beasts, because of all the evidence of their smarts. Given that the total number of novels featuring Neanderthals is pretty minuscule, I doubt this will have much of an impact on English Lit classes...

Next up was Dan McAdams, with a talk entitled “From actor to agent to author: human evolution and the development of personality.” His starting point is that human nature is an evolved psychological design, with personality psychologists being interested in the variations on the basic design. Of course there are a lot of hidden assumptions here (is there such a thing as human nature? Is it the result of genetic evolution, culture, or both?).

To make sense of variation in human personality McAdams invokes three nested layers of understanding (from the inner to the outermost): the individual as a social actor (you know, as in “All the world’s a stage...” and so forth), the individual as motivated agent (i.e., engaged in goal-oriented social behaviors), and the individual as an autobiographical author (we weave “personal myths” about our lives). Social acting is connected to the Big Five personality traits, and begins very early on. Motivated agency appears in children 7-9 years old. Autobiographical authorship begins in one’s 20s and 30s.

The talk struck me as very interesting, but again with precious little to do with consilience. While McAdams did mention the word evolution a few times, those mentions were speculative and largely irrelevant to the main points: he gave a good talk about the psychology of personhood, but evolutionary biology provided only a distant background condition. And, I would add, this is precisely the way it should be.

Ellen Dissanayake talked about “markmaking” as a human behavior. The author focused on early non-representational marks on rocks, which apparently constitute more than 99% of known Pleistocene rock “art,” thus making the famous cave paintings of animals look like anomalies.

Dissanayake considers a number of possible proximate explanations for rock marks, including accidental byproduct of other activities, utilitarian and/or communicative functions (record keeping, didactic, territory marking, etc.), and doodling for pleasure. She is not happy with any of these on the ground that they do not apply universally. But one could reasonably ask why one expects a universal explanation for such a wide variety of human artifacts.

The author then suggests a connection between Paleolithic rock art and modern aboriginal tribes’ body painting, on the basis of the similarity of (some of) the patterns. The commonality would be “ritual use” (a somewhat fuzzy category of human behavior, it must be said). This strikes me as fanciful to say the least, and Dissanayake herself quipped that a “nice” feature of this suggestion is that it is virtually impossible to test, so that she can’t be proven wrong. Okay, then!

What about the ultimate causes of rock art? Possibilities include sexual selection, display of prestige within the group, provision of “cognitive order” (induced by the geometries of the marks), reduction of stress, group selection for social order and group unification. Needless to say, there is absolutely no way to discriminate empirically among any of these.

Toward the end of her talk, Dissanayake suggested that humans may have had a behavioral disposition to what she calls “artify” that actually preceded symbolic art. Artification would be an evolved behavioral predisposition to make ordinary reality extraordinary or special. Fascinating, but why would we have this artification tendency? And how do we know this to be the case? As usual with evolutionary psychology: it’s easy to make stories up, and exceedingly hard to test them scientifically.

We then moved to Herbert Gintis, and the evolution of morality. Gintis began with a cartoon model of the so-called Standard Social Sciences Model and the blank slate approach to human culture, which he (rightly) dismissed. He then — a bit simplistically — mentioned that “the” model in philosophy is the Hobbesian model of war of all against all, moving to Dawkins’ idea of morality as the result of a culture that rebels against selfish genes, and finally arriving at the economists’ assumption that selfish (“rational self interest”) behavior is at the basis of all human transactions.

Instead of all of the above, the author prefers a gene-culture co-evolution approach, along the lines of the now classical studies by Feldman, Cavalli-Sforza and others. Human morality, then, is seen as the product of a dynamic process in the course of which humans transform culture, and culture makes new behaviors fitness-enhancing.

Gintis suggests that morality emerged as a contribution to social harmony and efficiency once hominids had destroyed the basis of the standard primate dominance hierarchy. The latter was undermined, allegedly, by the invention of weapons, by which weaker individuals could kill the alpha male from a distance, or even in his sleep. This is certainly quite speculative, but it does agree with a generally emerging view of basic morality being the result of evolution of highly integrated social behavior in small primate groups. The problem, as usual (and as readily acknowledged by the author) is that it is hard to test the proposition according to which, for instance, leadership by persuasion replaced leadership by brute force.

Gintis understands that persuasion was certainly not enough, and that early morality had to be based on evolved pro-social sentiments. That is, there had to be a strong emotional component to the process. He maintains that these elements explain both the evolution of language (persuasion) and of moral sentiments. Presumably, the combination of the two gave us morality as we understand it today.

I am generally sympathetic to gene-culture co-evolution models, which certainly beat the crap out of simplistic evolutionary psychological hypotheses or of equally vacuous “memetics.” However, it seems to me that this approach can explain only the very beginnings of human traits like language and morality. Pretty soon the speed of cultural change far outpaces that of genetic evolution. This doesn’t mean that our genetic makeup is not important (as a background condition to all we do), but it seems to imply that we need a second-order theory of cultural evolution per se, and so far I haven’t seen much of a serious candidate for this.

The interesting evidence put forth by Gintis concerns honesty and game theory. As is well known, the only straightforward biological explanations of “altruism” are kin selection (favoring individuals with whom you share a substantial chunk of your genes) and reciprocal altruism (tit-for-tat). Notoriously, neither of these behaviors actually brings about genuine altruism, because it always comes down to your own (genetic, long-term) self-interest. But Gintis’ evidence shows that when people play one-round (i.e., not iterated) games involving fairness and honesty — even when played anonymously — a significant portion of subjects behave honestly (though that percentage goes down if the cost of the honest behavior increases sufficiently). The idea is that, just like Aristotle would have said (though Gintis didn’t mention him), people behave morally for the simple reason that they think (and feel) it is the right thing to do.
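The puzzle here can be made concrete with a toy prisoner's dilemma. The sketch below is purely illustrative (the payoff numbers and strategies are my assumptions, not Gintis's data): in a one-shot game defection strictly dominates, so reciprocal altruism predicts no honesty there, whereas in iterated play tit-for-tat sustains cooperation — which is exactly why one-shot cooperation by real subjects is the interesting finding.

```python
# Illustrative prisoner's dilemma, one-shot vs. iterated.
# Standard payoff ordering T > R > P > S; the numbers are conventional, not empirical.
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation (R)
    ("C", "D"): 0,  # sucker's payoff (S)
    ("D", "C"): 5,  # temptation to defect (T)
    ("D", "D"): 1,  # mutual defection (P)
}

def play(strategy_a, strategy_b, rounds):
    """Return total payoffs for two strategies over repeated rounds."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's history
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opp_history):
    return "C" if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return "D"

def always_cooperate(opp_history):
    return "C"

# One round: defecting against a cooperator pays best, so pure
# self-interest predicts defection -- yet real subjects often cooperate.
print(play(always_defect, always_cooperate, 1))  # (5, 0)

# Iterated: reciprocity (tit-for-tat) sustains cooperation,
# which is why tit-for-tat explains repeated, not one-shot, honesty.
print(play(tit_for_tat, tit_for_tat, 10))        # (30, 30)
print(play(tit_for_tat, always_defect, 10))      # (9, 14)
```

The point of the sketch is only structural: nothing in the payoff logic rewards honesty in an anonymous single round, so the observed honest behavior has to come from somewhere else — evolved moral sentiments, on Gintis's account.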

Next to last for the day, we had Robert Frank on “The Darwin Economy: competition and the common good.” (And he was the only speaker without slides! He thinks that’s more engaging; I think it makes it much easier to lose concentration and track of what’s going on. But that’s another conversation.) According to Frank, economists eventually will recognize Darwin, not Adam Smith, as the founder of their discipline. Smith was much less of a true believer in the virtues of the free marketplace than modern libertarians and many economists consider him to have been. Frank agrees that markets fail frequently, but he thinks that is for reasons different from those maintained by Smith. He also, apparently, doesn’t buy much into the tenets of behavioral economics.

The reason the invisible hand doesn’t work, according to Frank, is because of Darwin’s central insight that there is a tension between individual and group interests. An example is the fact that, historically, NHL (hockey) players never wore helmets, even though they all thought helmets were a good idea. The problem is that without a helmet the player sees better, thus gaining an edge on the others. So if even one player took the helmet off, pretty much all the others would follow, in a hockey version of the tragedy of the commons. Helmets were finally instituted because of a league-wide rule that was voted on nearly unanimously by the players themselves. This, incidentally, makes a perfect anti-libertarian argument... (Which was made by John Stuart Mill in the 19th century.)

The talk went on for quite a while with example after example of absurdities caused by market-enabled runaway competition among people who end up damaging society as well as their own long-term flourishing.

Though Frank makes very good points, it seems to me that there may be a couple of fallacies at work here. First of all, once again we see the desire for a totalizing explanation, even though it is perfectly reasonable to think that Adam Smith, the behavioral economists, and Darwin all have gotten pieces of the puzzle right. Second, these aren’t even independent explanations, since the human behavior repertoire evolved (in part) by Darwinian mechanisms, and markets are the result of cultural evolution that is affected in turn by the range of human behaviors.

Dulcis in fundo (so to speak), it was my turn. I will publish a separate essay on my talk, so I’ll give just the gist here. It is a criticism of Wilson-style consilience (in the sense of “unity of knowledge”), which I think is a reductionist approach and has actually little to do with consilience in the original (and still most widely used in philosophy) meaning of the term, elaborated by William Whewell and referring to a type of induction known as inference to the best explanation. My basic theses are that: i) “Knowledge” is a heterogeneous category that does not admit of Wilson-type consilience; ii) Applying the type of knowledge emerging from the natural sciences to (some) other domains is a category mistake and ought to be avoided; and iii) Wilson-type consilience is actually a scientistic and anti-intellectual enterprise.

I gave a few examples of where Wilson goes wrong with his consilience (i.e., let’s reduce humanities to biology) program. For instance, Wilson is fond of what he calls “epigenetic rules,” which he defines as “the regularities of sensory perception and mental development that animate and channel the acquisition of culture.” I don’t know that any biologist has ever measured any such entities, which are just as vague as another of Wilson’s favorites, memes. On the latter, I’ll let my colleague Jerry Coyne comment: “[Memetics is] completely tautological, unable to explain why a meme spreads except by asserting, post facto, that it had qualities enabling it to spread.” Well said, Jerry.

Wilson is also fond of the Enlightenment and of its 20th century philosophical offspring, logical positivism. He hopes that positivism will be back, especially once neurobiology tells us more about how human beings reason. The latter is an obvious non sequitur, on which I think I don’t need to comment further. As for logical positivism, Wilson — who is openly dismissive of philosophy — apparently has never read Putnam or Quine, or a host of other critics of positivism. Incidentally, the demise of logical positivism is a good example of how philosophy makes progress: people find faults in certain views or arguments, and when these can no longer be patched or repaired they are abandoned and the field moves on.

In the end, I found evolutionary biologist Allen Orr’s critique of the 1998 Wilson book on consilience to be right on target. Among other things Orr says: “The real reason Wilson favors his consilient scenario isn’t because he finds it more plausible but because he finds it more attractive. For as he admits near the start of his book, consilience isn’t science, it is a philosophy, a metaphysical view that he obviously finds both beautiful and deeply satisfying. The irony, of course, is that Wilson’s own science of evolution gives every reason for questioning this metaphysic, every reason, that is, for doubting whether our brains — jury-rigged and riddled with blindspots — are the stuff from which certain knowledge and seamless consilience can be obtained.” Yup.

Rationally Speaking podcast: Live at NECSS, David Kyle Johnson on the Simulation Argument

In this special live episode recorded at the 2012 Northeast Conference on Science and Skepticism, Massimo and Julia discuss the "simulation argument" -- the case that it's roughly 20% likely that we live in a computer simulation -- and the surprising implications that argument has for religion.

Their guest is philosopher David Kyle Johnson, who is professor of philosophy at King's College and author of the blog "Plato on Pop" for Psychology Today, and who hosts his own podcast at philosophyandpopculture.com.

Elaborating on an article he recently published in the journal Philo, Johnson lays out the simulation argument and his own insight into how it might solve the age-old Problem of Evil (i.e., "How is it possible that an all-powerful, all-knowing, and good God could allow evil to occur in the world?"). As usual, Massimo and Julia have plenty of questions and comments!

Wednesday, April 25, 2012

Lawrence Krauss: another physicist with an anti-philosophy complex

by Massimo Pigliucci

I don’t know what’s the matter with physicists these days. It used to be that they were an intellectually sophisticated bunch, with the likes of Einstein and Bohr doing not only brilliant scientific research, but also interested in, respectful of, and conversant with other branches of knowledge, particularly philosophy. These days it is much more likely to encounter physicists like Steven Weinberg or Stephen Hawking, who merrily go about dismissing philosophy for the wrong reasons, and quite obviously out of a combination of profound ignorance and hubris (the two often go together, as I’m sure Plato would happily point out). The latest such bore is Lawrence Krauss, of Arizona State University.

I have been ignoring Krauss’ nonsense about philosophy for a while, even though it had occasionally appeared on my Twitter or G+ radars. But the other day my friend Michael De Dora pointed me to this interview Krauss just did with The Atlantic, and now I feel obliged to comment, for the little good that it may do. And before I continue, kudos to Ross Andersen, who conducted the interview, for pressing Krauss on several of his non sequiturs. Let’s take a look, shall we?

Krauss is proud (if a bit coy) of the fact that Richard Dawkins referred to his latest book, entitled “A Universe from Nothing: Why There is Something Rather Than Nothing,” as comparable to Darwin’s “Origin of Species,” on the grounds that it upends the “last trump card of the theologian.” Well, leave it to Dawkins to engage in that sort of silly hyperbolic rhetoric. (Dawkins still appears to be convinced that religion will be defeated by rationality alone. Were that the case, David Hume would have sufficed.) The fact is, Krauss’s book is aimed at a general audience, popularizing other people’s (as well as his own) work, and is not the kind of revelation of novel scientific findings that Darwin put out in his opus, and that makes all the difference.

Krauss’s volume was much praised when it came out in January, but more recently has been slammed by David Albert in the New York Times:

“The particular, eternally persisting, elementary physical stuff of the world, according to the standard presentations of relativistic quantum field theories, consists (unsurprisingly) of relativistic quantum fields... they have nothing whatsoever to say on the subject of where those fields came from, or of why the world should have consisted of the particular kinds of fields it does, or of why it should have consisted of fields at all, or of why there should have been a world in the first place. Period. Case closed. End of story.”

That’s harsh, and Krauss understandably doesn’t like what Albert wrote. Still, I wonder if Krauss is justified in referring to Albert as a “moronic philosopher,” considering that the latter is not only a highly respected philosopher of physics at Columbia University, but also holds a PhD in theoretical physics. I didn’t think Rockefeller University (where Albert got his degree) gave out PhDs to morons, but I could be wrong.

Nonetheless, let’s get to the core of Krauss’ attack on philosophy. He says: “Every time there's a leap in physics, it encroaches on these areas that philosophers have carefully sequestered away to themselves, and so then you have this natural resentment on the part of philosophers.” This clearly shows two things: first, that Krauss does not understand what the business of philosophy is (it is not to advance science, as I explain here); second, that Krauss doesn’t mind playing armchair psychologist, despite the dearth of evidence for his pop psychological “explanation.” Okay, others can play the same game too, so I’m going to put forth the hypothesis that the reason physicists such as Weinberg, Hawking and Krauss keep bashing philosophy is because they suffer from an intellectual version of the Oedipus Complex (you know, philosophy was the mother of science and all that... you can work out the details of the inherent sexual frustrations from there).

Here is another gem from this brilliant (as a physicist) moron: “Philosophy is a field that, unfortunately, reminds me of that old Woody Allen joke, ‘those that can’t do, teach, and those that can’t teach, teach gym.’ And the worst part of philosophy is the philosophy of science; the only people, as far as I can tell, that read work by philosophers of science are other philosophers of science. It has no impact on physics whatsoever. ... they have every right to feel threatened, because science progresses and philosophy doesn’t.”

Okay, to begin with, it is fair to point out that the only people who read works in theoretical physics are theoretical physicists, so by Krauss’ own reasoning both fields are largely irrelevant to everybody else (they aren’t, of course). Second, once again, the business of philosophy (of science, in particular) is not to solve scientific problems — we’ve got science for that (Julia and I explain what philosophers of science do here). To see how absurd Krauss’ complaint is just think of what it would sound like if he had said that historians of science haven’t solved a single puzzle in theoretical physics. That’s because historians do history, not science. When was the last time a theoretical physicist solved a problem in history, pray?

And then of course there is the old time favorite theme of philosophy not making progress. I have debunked that one too, but the crucial point is that progress in philosophy is not and should not be measured by the standards of science, just like the word “progress” has to be interpreted in any field according to that field’s issues and methods, not according to science’s issues and methods. (And incidentally, how’s progress on that string theory thingy going, Lawrence? It has been 25 years and counting, and still no empirical evidence...)

Andersen, at this point in the interview, must have been a bit fed up with Krauss’ ego, so he pointed out that actually philosophers have contributed to a number of science or science-related fields, and mentions computer science and its intimate connection with logic. He even names Bertrand Russell as a pivotal figure in this context. Ah, says Krauss, but really, logic is a branch of mathematics (it’s actually the other way around), so philosophy can’t get credit. And at any rate, Russell was a mathematician (actually, he was largely a logician with an interest in the philosophy of math). Krauss also claims that Wittgenstein was “very mathematical,” as if it is somehow surprising to find a philosopher who is conversant in logic and math. Nonetheless, Witty's major contributions are in the philosophy of language.

Andersen isn’t moved and insists: “certainly philosophers like John Rawls have been immensely influential in fields like political science and public policy. Do you view those as legitimate achievements?” And here Krauss is forced to reveal his anti-intellectualism, and even — if you allow me gentle reader — his intellectual dishonesty: “Well, yeah, I mean, look I was being provocative, as I tend to do every now and then in order to get people's attention.” Oh really? This from someone who later on in the same interview claims that “if you’re writing for the public, the one thing you can’t do is overstate your claim, because people are going to believe you.” Indeed people are going to believe you, Prof. Krauss, and that’s a shame, at least when you talk about philosophy.

Krauss also has a naively optimistic view of the business of science, as it turns out. For instance, he claims that “the difference [between scientists and philosophers] is that scientists are really happy when they get it wrong, because it means that there’s more to learn.” Seriously? I’ve practiced science for more than two decades, and I’ve never seen anyone happy to be shown wrong, or who didn’t react as defensively (or even offensively) as possible to any claim that he might be wrong. Indeed, as physicist Max Planck famously put it, “Science progresses funeral by funeral,” because often the old generation has to retire and die before new ideas really take hold. Lawrence, scientists are just human beings, and like all human beings they are interested in mundane things like sex, fame and money (and yes, the pursuit of knowledge). Science is a wonderful and wonderfully successful activity (despite the more than occasional blunder), but there is no reason to try to make its practitioners look like some sort of intellectual saints that they certainly are not (witness also the alarming increase in science fraud, for instance).

Finally, on the issue of whether Albert the “moronic” philosopher has a point in criticizing Krauss’ book, Andersen points out: “it sounds like you’re arguing that ‘nothing’ is really a quantum vacuum, and that a quantum vacuum is unstable in such a way as to make the production of matter and space inevitable. But a quantum vacuum has properties. For one, it is subject to the equations of quantum field theory. Why should we think of it as nothing?” Maybe it was just me, but at this point in my mind’s eye I saw Krauss engaging in a more and more frantic exercise of handwaving, retracting and qualifying: “I don’t think I argued that physics has definitively shown how something could come from nothing [so why the book’s title?]; physics has shown how plausible physical mechanisms might cause this to happen. ... I don’t really give a damn about what ‘nothing’ means to philosophers; I care about the ‘nothing’ of reality. And if the ‘nothing’ of reality is full of stuff [a nothing full of stuff? Fascinating], then I’ll go with that.”

But, insists Andersen, “when I read the title of your book, I read it as ‘questions about origins are over.’” To which Krauss responds: “Well, if that hook gets you into the book that’s great. But in all seriousness, I never make that claim. ... If I’d just titled the book ‘A Marvelous Universe,’ not as many people would have been attracted to it.”

In all seriousness, Prof. Krauss, you ought (moral) to take your own advice and be honest with your readers. Claim what you wish to claim, not what you think is going to sell more copies of your book, essentially playing a bait and switch with your readers, and then bitterly complain when “moronic” philosophers dare to point that out.

Lee Smolin, in his “The Trouble with Physics,” laments the loss of a generation for theoretical physics, the first one since the late 19th century to pass without a major theoretical breakthrough that has been empirically verified. Smolin blames this sorry state of affairs on a variety of factors, including the sociology of a discipline where funding and hiring priorities are set by a small number of intellectually inbred practitioners. Ironically, one of Smolin’s culprits is the dearth of interest in and appreciation of philosophy among contemporary physicists. This quote is from Smolin’s book:

“I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today — and even professional scientists — seem to me like someone who has seen thousands of trees but has never seen a forest. A knowledge of the historical and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is — in my opinion — the mark of distinction between a mere artisan or specialist and a real seeker after truth.” (Albert Einstein)


Postscript: As people have pointed out, Krauss has issued an apology of sorts, apparently forced by Dan Dennett. He still seems not to have learned much though. He confuses theology with philosophy (in part), keeps hammering at a single reviewer who apparently really annoyed him (in the New York Times), and more importantly just doesn't get the idea that philosophy of science is NOT in the business of answering scientific questions (we've got, ahem, science for that!). It aims, instead, at understanding how science works. Really, is that so difficult to understand, Prof. Krauss?

Monday, April 23, 2012

Understanding Nuclear Power, Part I: Whirlwind Nuclear Physics

by Ian Pollock


This post is the first in a series on nuclear fission power, intended to provide the background knowledge to understand what is at stake in all major aspects of the nuclear power debate — science and engineering, safety and health, economy and environment.

Energy policy may turn out to be by far the most important issue of our time. Given this, it is crucial that policy-makers and an informed public understand the relative costs and benefits of all the power generation methods that are on the table. Unfortunately, discourse about nuclear power in particular is plagued by wild misinformation: it is heavily politicized, thanks in part to the Cold War, and riddled with fallacies that arise from ignorance of the relevant science and are amplified by fear. This series of posts is meant to try, in some small way, to correct that.

I make no bones about the fact that I am pro-nuclear. One of the aims of this series of posts is to argue that, after taking into consideration the risks and drawbacks of nuclear fission energy, we would still be crazy not to expand its use substantially. That argument will have to wait for the final post in the series.

However, there is a more important purpose to these posts, which even critics of nuclear power generation should be willing to embrace. Namely, to move toward a saner discussion of nuclear power. To be clear, no policy decision is one-sided; there are reasonable objections to expanding nuclear generation. However, they are discussed less than they should be, partly because discourse is often derailed by a lot of very silly ideas (or worse, unstated assumptions!) from pop-culture floating around in the public imagination — to take one example, the widespread idea that a standard nuclear plant could blow up in a nuclear explosion, complete with a mushroom cloud. I want to help clear these myths out of the way once and for all.

If, dear reader, we still disagree at the end of this series, then I want our disagreement to at least be substantive!

This first post will give an extremely brief outline of basic nuclear physics concepts and jargon. Future posts will expand on specific aspects of this outline when they become relevant to the discussion.

Before starting, I wish to disclaim that although I work in the engineering profession, I am not a nuclear engineer, so my opinions on this subject should be taken as those of an informed layperson.

The nucleus

Atoms are usefully pictured as consisting of a small, dense, central nucleus, surrounded by a comparatively large cloud of very tiny, fast-moving electrons. Broadly speaking, the perceived size of an atom is determined by how much space the electron cloud occupies. In comparison to the atom’s overall size, the nucleus is extremely tiny, about 1/100,000th of the atom’s overall dimension.

The nucleus itself is composed of both protons, which carry positive charge, and neutrons, which carry no charge. The electrons orbiting the nucleus are negatively charged. Protons and neutrons have nearly the same mass, about 1,800 times the mass of an electron.

Electrical charge is clearly not the whole story in explaining the structure of atoms. For one thing, since electrons are electrically attracted to the protons in the nucleus, one would expect that they should quickly spiral down into the nucleus and stick to the protons. The fact that they do not do so finds its ultimate explanation in quantum mechanics.

Likewise, one would expect the protons in the nucleus to repel each other so violently that the nucleus would fly apart. Since it does not do so, there must be another force acting on the nucleons (protons and neutrons). This force is called the strong nuclear force. It is both extremely strong and extremely short-range, attracting both protons and neutrons to themselves and to each other; it is the balance of the electrical forces and strong nuclear forces in a nucleus that determines whether it is stable or not.

Because neutron numbers are more or less irrelevant in chemistry, the chemical elements are named based on the number of protons they contain, regardless of the number of neutrons. For example, Uranium (symbol U) has 92 protons (and therefore, 92 electrons). For the purposes of chemistry alone, it does not matter how many neutrons Uranium has, for its chemical properties will be virtually identical. However, for nuclear physics, the number of neutrons becomes very important.

Accordingly, nuclear physics identifies a particular atom not only by its chemical name but also by its atomic mass number, which is simply the count of all nucleons in that atom. For example, the most common type of Uranium has 238 nucleons (92 protons + 146 neutrons). But there are other types of Uranium that have the same number of protons (and therefore the same chemical properties) but different numbers of neutrons. These varieties are referred to as isotopes of Uranium. Standard nuclear jargon is to identify an isotope with the chemical name followed by the mass number; hence the most common isotope of Uranium is called “Uranium-238” or “U-238.”

Nuclear physicists find it convenient to chart all the possible combinations of protons and neutrons in the Chart of the Nuclides, which is a very simple plot of the number of protons versus the number of neutrons showing which species are stable or unstable, along with their other properties (“nuclide” refers to any unique combination of protons and neutrons).

The neutron-to-proton ratio is a key piece of information. For lighter nuclei, n/p≃1 provides stability, but as one approaches heavier nuclides, stability can only be achieved with a ratio of n/p≃1.5. Observe the subtly downward-curving “line of stability” on the chart of the nuclides. A moment’s perusal of this chart will also show you that if I were to pick, at random, a certain number of protons and a certain number of neutrons, the nuclide resulting from their combination would almost certainly be unstable. This will become important.
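These ratios are easy to compute for oneself. Here is a minimal sketch in Python; the proton and neutron counts below are standard, well-known values for these nuclides, not figures read off any particular chart:

```python
# A quick check of the neutron-to-proton ratios mentioned above,
# for a light nuclide (He-4) and a heavy one (U-238).

def np_ratio(protons, neutrons):
    """Return the neutron-to-proton ratio n/p of a nuclide."""
    return neutrons / protons

# He-4: 2 protons, 2 neutrons -> n/p = 1.0 (light nuclei are stable near 1)
# U-238: 92 protons, 146 neutrons -> n/p ~ 1.59 (heavy nuclei need ~1.5)
for name, p, n in [("He-4", 2, 2), ("U-238", 92, 146)]:
    print(f"{name}: n/p = {np_ratio(p, n):.2f}")
```

Note how U-238 sits right around the n/p ≃ 1.5 figure quoted above, while the light He-4 nucleus sits at 1.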

Binding energy, fusion, fission, decay

A stable nucleus has tightly-bound nucleons that are difficult to separate from each other. Nuclei may be usefully characterized by their binding energy, which represents the amount of energy it would require to dissociate the nucleus into its constituent protons and neutrons. High binding energy means “tightly bound.”

If you were to dissociate a nucleus into its protons and neutrons, you would discover an interesting fact. Namely, that if you were to weigh all the protons and neutrons in isolation and add up their weights, they would be slightly heavier than the fully assembled nucleus — the whole weighs less than the sum of its parts. This difference in mass is called the mass defect. There is a familiar relation between the mass defect (difference in weight disassembled vs. assembled) and the binding energy (energy required to disassemble). If we symbolize the mass defect as Δm, and binding energy as ΔE, and use the symbol c for the speed of light (~300,000 km/second), then we find that ΔE = Δmc². Mass is also a form of energy, as that famous equation shows, and the mass defect is just another way of writing the binding energy.
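To make the relation concrete, here is a worked example in Python for Helium-4. The masses are standard textbook values (in atomic mass units) that I am supplying for illustration; they do not come from this post:

```python
# Worked example: mass defect and binding energy of Helium-4.
# Masses in atomic mass units (u); standard reference values, assumed here.
M_PROTON = 1.007276       # u
M_NEUTRON = 1.008665      # u
M_HE4_NUCLEUS = 4.001506  # u (nuclear mass, i.e. electrons excluded)
U_TO_MEV = 931.494        # energy equivalent of 1 u in MeV, via E = mc^2

parts = 2 * M_PROTON + 2 * M_NEUTRON  # weigh the nucleons separately
mass_defect = parts - M_HE4_NUCLEUS   # the whole weighs less than the parts
binding_energy = mass_defect * U_TO_MEV

print(f"mass defect    = {mass_defect:.6f} u")
print(f"binding energy = {binding_energy:.1f} MeV "
      f"({binding_energy / 4:.2f} MeV per nucleon)")
```

The result, about 28 MeV, is the energy you would need to pull a He-4 nucleus apart into its four nucleons — equivalently, the energy released when those nucleons fuse.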

As a general rule, both light elements (like Helium) and heavy elements (like Uranium) have low binding energy and relatively unstable nuclei, while elements of medium weight (like Iron) have high binding energy and therefore very stable nuclei. This is suggestive of two ways of getting energy out of nuclear interactions: creating medium-weight elements by fusing light elements together (nuclear fusion — a worthy subject for another occasion), and creating medium-weight elements by smashing heavy elements apart (nuclear fission). It turns out that the ideal way to smash a heavy nucleus is to bombard it with neutrons. However, not all heavy nuclei break apart easily this way, and crucially, not all release neutrons when they do. Neutron release is important because it permits the chain reaction, allowing the process to sustain itself indefinitely, or even accelerate. Those fissionable nuclides which can sustain a chain reaction are called fissile. Currently, the two most important fissile nuclides are Uranium-235 and Plutonium-239. Controlled fission (steady chain reaction) is the key to nuclear power, while uncontrolled fission (exponentially accelerating chain reaction) is the key to the nuclear bomb.
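The difference between a steady and a runaway chain reaction comes down to the average number of new fissions that each fission triggers, conventionally called the multiplication factor k. A toy sketch (the starting population and k values are purely illustrative, not physical data):

```python
# Toy illustration of the chain reaction: each generation of fissions
# releases neutrons that trigger, on average, k new fissions.
# k < 1: dies out; k = 1: steady (a reactor); k > 1: runaway (a bomb).

def neutron_population(k, generations, start=1000):
    """Neutron count after each generation, for multiplication factor k."""
    counts = [start]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

print("subcritical   (k=0.9):", [round(n) for n in neutron_population(0.9, 5)])
print("critical      (k=1.0):", [round(n) for n in neutron_population(1.0, 5)])
print("supercritical (k=1.1):", [round(n) for n in neutron_population(1.1, 5)])
```

With k held at exactly 1 the population is flat, which is what “controlled fission” means; even a k slightly above 1, compounded over many very short neutron generations, gives the exponential growth of an uncontrolled reaction.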

Even when left to their own devices, however, radioactive nuclides do not merely sit placidly. Because they are unstable, they will tend to undergo radioactive decay — which essentially means ejecting particles from the nucleus — and the more unstable they are, the more readily they will do this. If I have a sample of Plutonium-239 that weighs 1 kg now, I can predict that half of it will have radioactively decayed in about 24,000 years, and half of that in another 24,000 years, and so on. Hence, we say that Pu-239 has a half-life of 24,000 years. Half-lives vary wildly depending on the stability of the nuclide. For example, Uranium-238 has a half-life of about 4.5 billion years — the age of the earth — while Helium-5 has a half-life of only 10⁻²¹ seconds, roughly the time it takes to transition from never having heard of an Ikea product, to desperately needing it.
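The halving rule translates directly into the decay formula N(t) = N₀ · (1/2)^(t/T), where T is the half-life. A quick sketch using the Pu-239 figures from the paragraph above:

```python
# Radioactive decay: N(t) = N0 * (1/2) ** (t / half_life).
# Pu-239 half-life of ~24,000 years, as quoted in the text.

def remaining(initial_kg, half_life_years, elapsed_years):
    """Mass of the original nuclide left after elapsed_years of decay."""
    return initial_kg * 0.5 ** (elapsed_years / half_life_years)

print(remaining(1.0, 24_000, 24_000))  # 0.5  (one half-life)
print(remaining(1.0, 24_000, 48_000))  # 0.25 (two half-lives)
print(round(remaining(1.0, 24_000, 100_000), 4))
```

The same function works for any nuclide: swap in U-238’s 4.5-billion-year half-life and essentially nothing has decayed on human timescales, which is precisely why there is still so much of it around.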

The relative absence of radioactive materials in the world around us is due, not to non-radioactive material being “more natural” than radioactive material, but rather to survivorship. Radioactive materials have decayed into non-radioactive ones — our (mostly) non-radioactive world is what’s left after everything else has decayed.

The next post will look at decays more closely, especially with regard to their health effects.

Recommended sources:

- Hyperphysics section on nuclear physics.

- Richard Muller’s fantastic lectures “Physics for Future Presidents,” available on YouTube: Radioactivity 1, Radioactivity 2, Nukes 1, Nukes 2 & Review.

- David Bodansky, “Nuclear Energy: Principles, Practices and Prospects.” 2004, Springer.

Saturday, April 21, 2012

Curate’s Egg: Alex Rosenberg and the meaning of life

[Rationally Speaking is pleased to publish a guest commentary by Prof. Michael Ruse. Ruse is Lucyle T. Werkmeister Professor at Florida State University and Director of its History and Philosophy of Science Program. His most recent book is The Philosophy of Human Evolution, Cambridge University Press.]

by Michael Ruse

I understand that a contributor to the New Republic has deemed Alex Rosenberg’s The Atheist’s Guide to Reality: Enjoying Life without Illusions the worst book of 2011. This reaction is understandable. There is an irritating jauntiness about the work, coming across as something altogether too satisfied for its own good. Rather as though the work had been penned by an overly bright but somewhat ignorant fifteen-year-old. Sweeping statements are made that too readily invite instant critical response, for instance about the fact that natural selection cares only about reproductive success and not the truth, in which case why should we care about a word that Rosenberg has written? In the same mode, matters of fact are claimed that are simply not true. For instance, it is said that, other than a late addition to the sixth edition of the Origin, Darwin never mentions God in that work. In fact, there are lots of references to the Creator in the Origin, and while one might query how many of these hint that Darwin himself endorsed His existence, one reference at least in all of the editions suggests just that:
Authors of the highest eminence seem to be fully satisfied with the view that each species has been independently created. To my mind it accords better with what we know of the laws impressed on matter by the Creator, that the production and extinction of the past and present inhabitants of the world should have been due to secondary causes, like those determining the birth and death of the individual.
I should say that, certainly in the early years, this fits in with what we know from other sources (the letters especially) on Darwin’s beliefs.

Having said all of this, I think the totally negative judgment on Rosenberg’s book is altogether too harsh. Clearly the New Republic contributor has not read Alvin Plantinga’s Where the Conflict Really Lies: Science, Religion, and Naturalism, also published in 2011. In fact, the works of Rosenberg and Plantinga share some features, namely a kind of absolutism about their own views and disdain for the views of others. But at least Rosenberg is on the side of the angels in trying to take science seriously — some would say, altogether too seriously — whereas Plantinga takes every opportunity to opt for superstition and ignorance and bad argument. Being an enthusiast for Intelligent Design Theory is the least of his sins.

The trouble is that Rosenberg has been seduced into thinking that he can write a popular book, a trade book. Now some academics are very good at this. One thinks at once of Richard Dawkins and Stephen Jay Gould. Others are less gifted, their attempts at trade books veering between the leaden and the louche. I regret to say, because I could use the money, that Michael Ruse’s attempts at this genre fall into the unsuccessful category. The same can be said of Rosenberg’s The Atheist’s Guide to Reality. A cocky self-satisfaction is simply not a recipe for good writing for the popular domain. The public needs to be spoken to, not spoken down to.

All of this seems a preamble to saying that I think the book is not completely without merit. I certainly don’t want to praise it to the heavens, but I would not go the other way either. It is, as the curate said to the bishop when asked about his breakfast egg: “Good in parts.” This seems like faint praise and perhaps it is, but I do want to say that I think parts are good and some parts are very good indeed. Although the material on morality is presented in a way (I think needlessly) intended to shock and disturb — there isn’t any morality and you cannot condemn Hitler and that sort of stuff — in fact Rosenberg brings Darwinian evolutionary biology to bear on morality in a fruitful and enlightening way. He shows that you don’t need the will of God and those sorts of factors to get the proper norms of conduct and that, on the other side, the struggle for existence doesn’t lead straight to the kind of crude Social Darwinism embraced today very regrettably by the Republican Party of the United States of America. Moreover, I am even more glad to say that Rosenberg shows we don’t need any guff about group selection and other faux teleological mechanisms to get decent behavior and thoughts about it. Good, old-fashioned natural selection, working for the benefit of the individual, can do the job.

Rosenberg has a good eye for bad arguments (by others!) and has an enviable ability to skewer the inadequate and inept. He spots that a major (I would say, the major) problem for the theist, especially the Christian, when faced with Darwinian evolutionary biology is the essential randomness, the non-guidedness, of the latter. For the Christian, humans have to exist. But me no buts; we are not a contingent option. We may not be the exclusive focus of God’s care, but we are an essential focus of such care. However, Darwinism seems not to deliver. Mutations are random, in the sense of not appearing to order or need, and selection favors success not necessarily big brains and bipedalism. These may be nice things to have, but they are not the predetermined goal of the evolutionary process. In the memorable words of the late Jack Sepkoski, one of his era’s leading paleontologists: “I see intelligence as just one of a variety of adaptations among tetrapods for survival. Running fast in a herd while being as dumb as shit, I think, is a very good adaptation for survival.”

I am pretty sure that most of the solutions offered out there in the literature don’t work.  Physicist-theologian Robert J. Russell thinks God puts in direction from down at the unobservable quantum level. This, it seems to me, is simply a tarted-up version of theistic evolution that Darwin himself found so unacceptable in the thinking of his American friend Asa Gray. Non-believer Richard Dawkins thinks that arms races, competition between evolving lines, will eventually lead to beings with massive on-board computers. But even if arms races work, and not everyone thinks that they do, I don’t see that humans will necessarily evolve. Believer Simon Conway Morris (incidentally following non-believer Stephen Jay Gould) thinks that ecological niches exist objectively, that there is such a niche for culture, and that even if we had not found our way into it, some organism at some point would have done so. But apart from anything else, there is good reason to think that organisms create niches as much as they find them. So I am not sure that that solution works either. I myself am inclined to think that multiverses might do the trick. Given enough attempts, like the monkeys and Shakespeare, humans would come into being eventually. But I am not here pushing my own view — for which, incidentally, among believers and non-believers I have found absolutely no takers. I am simply congratulating Rosenberg for taking a lot more seriously a problem that too many think they can easily gloss over.

So, why then am I not more positively charged up about Rosenberg’s book? It is perhaps not surprising, as one who thinks of himself as much a historian of science as a philosopher of science, that my complaint starts with history. In this book, Rosenberg expresses contempt for history to a degree that (outside the American automobile business, and look at the state of that) I don’t think I have ever encountered elsewhere. (This is not something new. I remember Rosenberg saying something similar about thirty years ago.) “History is helpless to teach us much about the present.” Continuing: “When it comes to understanding the future, history is bunk.” I won’t comment on the irony of this coming from an ardent evolutionist, but simply suggest that his attitude leads him badly astray. Even as he opens by suggesting that science leads to non-belief, using the atheistic members of the US National Academy of Sciences as evidence, we can see that there is something wrong. Without knowing their histories, how can we be sure that science led to non-belief, rather than non-believers turning to scientific inquiry early and fiercely, and succeeding at it? What one can say is that the autobiographies of nineteenth-century non-believers almost always stress that they came to non-belief on theological and philosophical grounds and then embraced things like evolution. In Darwin’s own case, his non-belief came primarily from his detestation of the idea that non-believers like his father and brother were, purely on the grounds of their non-belief, destined to eternal damnation.

But let me dig a bit more. Rosenberg thinks that science basically wipes out the claims of religion, either showing them false or explicable purely in scientific terms. He proudly proclaims himself committed to “scientism”: “the conviction that the methods of science are the only reliable ways to secure knowledge of anything; that science’s description of the world is correct in its fundamentals; and that when “complete,” what science tells us will not be surprisingly different from what it tells us today.” In large part I agree with Rosenberg. Obviously you cannot hold to Noah’s Flood and at the same time to modern paleontology, let alone to plate tectonics.  As obviously, a literal Adam and Eve are entirely negated by modern paleoanthropology. They did not and could not have existed. I am not that keen on burning bushes or partings of water either. I hope also that my agreement with Rosenberg about morality shows that I think that we don’t need a lot of God talk to get ethical thinking and behavior. And that Darwinian evolutionary biology shows that the call for foundations is mistaken and unnecessary.

What about some of the basic issues for theism, for instance the very existence of anything at all (what Heidegger calls the fundamental problem of metaphysics) or at the other end of the scale the meaning of existence? Like other theists, the Christian has answers to these and related questions. Existence itself (that is, the universe and everything within, including us humans) is the product of a good creative God. Meaning is also related to God, and for humans in particular life here on earth is in some sense a time of trial or testing, preparing the way for the possibility of eternal bliss with the Creator, whatever that might mean.

As it happens, I am no more accepting of these answers than is Rosenberg. I would describe myself as an agnostic or skeptic rather than an atheist, but essentially I am pretty atheistic about the Christian answers. If there is more to life than meets the eye, as it were, I very much doubt it is something within my present comprehension. The question however is whether science as such negates these answers that the theist would give, and perhaps even more fundamentally whether science makes the very asking of these questions in some sense otiose or inappropriate.  This I think is Rosenberg’s position and here I part company with him. In line with Charles Darwin, I reject theism on theism’s grounds rather than because of science.

As I see it, Rosenberg simply says that modern science has no place for these sorts of questions, or if it does it answers them adequately — along the lines that the Big Bang speaks to origins — and that is that. In the old days, before the Scientific Revolution four hundred years ago, the Aristotelian science of the day may well have allowed such questions, but now we have moved on from an incorrect science to a more correct science, end of story. And it is here that I would say that the refusal to look at history leads to misunderstandings. If we look at the Scientific Revolution and ask exactly what it meant, we find it was not so much a simple matter of moving from falsity to truth — although I do accept that the new science has many virtues that the old science did not have — but rather a change of metaphors. The old science saw the world in an organic mode — things were living in a sense — and that is why, for instance, it was appropriate to ask about final causes and meanings. The new science sees the world in a machine mode — the mechanistic philosophy — and that, among other things, is why it is inappropriate to ask about final causes and meanings and so forth.

Notice however what using metaphors entails. As Thomas Kuhn taught us — and remember how he identified his paradigms with metaphors in some wise — metaphors are powerful tools for focusing on nature and giving us ways of understanding it. But they come at a cost, namely that they are limited and do not (and do not pretend to) answer all questions. To use a metaphor to talk about metaphors, metaphors are like the blinkers you put on race horses to make them focus on the track and not be distracted by the spectators. So, for instance, if I describe my love as a rose, I am presumably talking about her freshness and beauty — perhaps I am joking about her being a bit prickly — but I am not talking about her religious affiliation or her mathematical abilities. I am not saying she is not religious or cannot do mathematics, I am just not talking about those sorts of things.

Look now at the basic metaphor of modern science, the machine metaphor. It is very powerful, but there are some things it simply doesn’t speak to. Origins is one such issue and meaning is another. You take your materials as given and build your machine; you set it in motion and that is that. You might complain that machines do have meaning: an automobile is for travel. But as historians of the Scientific Revolution have stressed, very quickly the metaphor of a machine was truncated to simply the sense of something working according to law, nothing further. The world goes through the motions, as it were. Of course the early workers in the new mode did think there were meanings — meanings given by God. But very quickly they dropped these from their science as of no value qua science. In the words of one of the great historians of the Revolution (Eduard Jan Dijksterhuis), God became “a retired engineer.”

So here I do part company with Rosenberg. I think his insensitivity to history blinds him to the fact that science does not ask certain questions, and so it is no surprise that it does not give answers — at least, not answers of a form that the theist finds adequate. As I have said, I am not at all sure that the theist’s own answers are correct, but they are not shown incorrect or inappropriate by modern science. Science is limited in scope; and since, even if the metaphors of today’s science are someday discarded, other metaphors will have to replace them, I would argue that science by its very nature is destined forever to be limited. History shows that!

I have tried to make these comments constructive. Obviously in a major way I find Rosenberg’s book intensely irritating. But I want to go beyond that because in some respects — and this applies to other parts of the book I have not really touched — I think his ideas and arguments are insightful and often correct. And where I differ from him, I find his positions stimulate me to provide alternatives that I think are better. So perhaps in the end, like the unfortunate curate, I find myself with an egg that is not entirely wholesome, but probably the good parts outweigh the bad parts.

Friday, April 20, 2012

Michael’s Picks

by Michael De Dora

* The Economist discusses Rachel Maddow’s new book on the relationship between the growth of executive power and war.

* Jonathan Turley, a law professor at George Washington University, writes on the alarming increase in public schools firing teachers over “perfectly lawful behavior during off-hours.”

* Why do Americans reject euthanasia? That’s the latest question posed by the New York Times in the newspaper’s Room for Debate section.

* Christian groups are opposing anti-bullying legislation in several states because they believe the laws restrict religious freedom and/or promote homosexuality, marriage equality, and transgenderism.

* In an effort to better explain a range of complex philosophical ideas to the general public, Genís Carreras has created a series of posters featuring a combination of basic colors, simple shapes, and concise definitions of different philosophies. You can see the posters here, and purchase copies here.

* A new study in the journal Psychological Science suggests that the human tendency to cheat is a natural impulse, and that given some time for reflection, humans are less likely to cheat.

* Referencing John Stuart Mill’s harm principle, Tauriq Moosa argues in a new article on Big Think that a society is hypocritical if it grants some individual rights, but not others. Take a look.

* Scientists have published research in the journal Nature that links a rare genetic mutation to a heightened risk of autism.

* Can science determine which foods taste best together?