Paul Rezkalla is a Graduate Fellow of the Society of Christian Philosophers and works with the Ian Ramsey Centre for Science and Religion at the University of Oxford. He is currently a Ph.D. candidate in philosophy at Florida State University and recently completed an M.Sc. in cognitive and evolutionary anthropology as part of his doctoral research on the evolution of morality. He also holds an MA in philosophy from the University of Birmingham (UK) and an MA in theology from Saint John’s University. Paul loves the English Premier League and plays the oud (look it up!) for the Tallahassee Middle Eastern Ensemble.
Website: https://sites.google.com/view/paulrezkalla
Can you describe something that has recently amazed you? How did it make you feel?
I’m often amazed at elegant scientific hypotheses. Some scientific findings are just amazing. For example, when a fetus is gestating, some of the father’s genes in the fetus are designed to extract as many resources as possible from the mother so that the fetus will grow properly until delivery. But this has negative repercussions for the mother, often manifesting as high blood pressure and diabetes! However, when the father is physically present near the mother during her pregnancy, hormones regulate and mitigate these negative consequences! This is one of those times where biology is beautiful! Examples like this are rife in the sciences. I’ll give you one more. Humans are the only mammals to enjoy some adult lactase persistence–this is what enables us to process milk past infancy. What’s interesting, though, is that we did not always have this ability! A technique known as ‘molecular clocking’ allows us to date the origins of certain genes. Molecular clocking shows that adult lactase persistence only came on the scene about 10,000 years ago–roughly around the time of the advent of dairy farming! When humans started keeping cows and experimenting with dairy as a source of nutrition, our genome changed accordingly, allowing us to process lactose into adulthood. This came about through a genetic mutation that arose twice, once in Europe and once in Africa (both had pastoralist societies). Humans inadvertently, but effectively, redirected the course of our own evolution via our cultural innovations!
Where do you think our sense of wonder comes from and what can we do to cultivate it?
There may be an evolutionary basis for human curiosity in that a penchant for cautious curiosity could have contributed to reproductive fitness. Individuals who investigated new places and technologies and interrogated their environments to understand how things worked would have enjoyed some success over individuals who did no such thing. However, curiosity differs from wonder in an important way. Curiosity is like an itch–it goes away after it’s been ‘scratched.’ If I’m merely curious about how a microwave works, I can watch a YouTube video about how microwaves work and then my itch is gone. However, wonder is the kind of phenomenon you can engage in despite having had your curiosity ‘scratched.’ In other words, even after I discover how microwaves work, I can still wonder at them. Wonder is more like the fascination a child expresses upon encountering the seemingly mundane. Young children are often blown away by the greenness of an avocado or the prickliness of a beard. They can enjoy the simplest of games for hours and reread the same stories over and over and over again. This is what it is to wonder. It’s to be captured by anything, everything, even the seemingly mundane. Hanging out with children is a good way to cultivate this sense of wonder. A child’s sense of wonder is often infectious! I can’t tell you how many times I’ve walked away from conversations with children thinking, “Yeah, carpets are actually really cool–look at all those little threads!” or “Bird beaks really are fascinating, with their various shapes and sizes, when all we have are noses and mouths!”
What’s cognitive and evolutionary anthropology?
Cognitive and evolutionary anthropology (CEA) simply studies humans with the aid of cognitive science and insights from evolutionary theory. Previously, anthropology could only document human behavior and practices, but with the application of evolutionary thinking to the field, we can now begin bringing together the social sciences and the biological sciences. In sum, CEA is an interdisciplinary approach to the study of humans, weaving together insights and methods from biology, cognitive science, primatology, paleontology, psychology, and anthropology.
From the findings of this field, how would a good life be described?
I’m hesitant to draw normative conclusions from descriptive projects. CEA is a scientific field, which means it simply describes human behavior and its evolutionary trajectory. Science on its own cannot make any substantive prescriptions for how we ought to live. Theology and philosophy are more appropriate tools for describing what the good life looks like.
Do humans have an innate sense of good and bad, or is it something we’ve learnt?
‘Innate’ is a tricky term and I’m inclined to shy away from it. That being said, it’s pretty well established that human infants from very early on (about 3 months) prefer helpers over ‘bad guys’ in puppet shows. They can also grasp sharing norms from about 6 months onward. Whether this is ‘innate’ is unclear. What’s more likely is that we are born with a certain core cognition that is designed to grasp norms easily. The input is very important, though, and this is where good moral education and development come in. Nature and nurture work together in giving people this ‘sense’ of good and bad.
Do other species have a moral code?
We should first make a distinction between two kinds of ‘morality.’ There is a general sense of morality that can be said to apply to nonhuman animals. Lots of social animals behave cooperatively and even altruistically. Some primates and elephants can even mourn the death of their relatives. However, it’s pretty clear that this is insufficient for being a moral agent, that is, for being morally responsible (blameworthy or praiseworthy) for one’s actions. To be praiseworthy or blameworthy for one’s actions, one must be able to act on the right kinds of reasons. Here’s an example: imagine a member of a university admissions committee who accepts high-quality ethnic-minority students into the university simply because she wants her colleagues to think well of her, while secretly, deep down, she is actually a racist. Is she commendable for that? Is she a good person? Certainly not. The reason is that even though she’s doing the right thing, she’s acting for the wrong reasons. It is this kind of morality that is 1) more important and 2) probably not within the capabilities of nonhuman animals. Humans are uniquely capable of this kind of morality because we have the ability to grasp and act on the right kinds of reasons for our actions–the reasons themselves provide the basis for how we evaluate our actions. So even though nonhuman animals can act cooperatively and altruistically, their inability to act on the right kinds of reasons (like the ones we saw in the admissions committee case) precludes their behavior from moral evaluation.
Can ethics arise out of science?
Science can shed light on why we have certain tendencies to prefer our family members over strangers or why we feel repulsed at the thought of lighting a cat on fire (philosophers love bizarre, macabre examples). Science can even helpfully point out that humans are prone to judge others more harshly for committing the same bad actions that we ourselves are guilty of, and that “clean smells” can incline us to be more generous when asked to give to charitable causes. However, it’s not clear that science can tell us whether we should judge others, give to charity, or light cats on fire. Ethics has a unique “ought-ness” that puts it outside the realm of scientific inquiry. This feature (among others) throws a massive wrench into any attempt to derive ethics from science. What we’re doing when we do science is describing the way the world is, but that doesn’t tell us how the world ought to be. The “ought-ness” we mentioned earlier is not something we discover by doing science; it’s a different project altogether. For example, there’s really good evidence showing that we have implicit racial biases that incline us to form judgments (both good and bad) about people simply because of how they look or what their names sound like. There are evolutionary and cultural explanations for why and how our brains form these mental shortcuts, but notice that this is just a description of a human behavioral tendency. However, it’s clear that we should try to become aware of our implicit biases and shed them as best we can. This latter fact is not a scientific fact; rather, it’s a fact about what we ought to do–it’s prescriptive. What is and what ought to be are conceptually distinct. Science is really good at the former, but the latter is a philosophical project.
Are there limits to human curiosity?
Sure, there are both conceptual and ethical limits. First, there are certain conceptual barriers to human curiosity in that the nature of reality prevents some ‘things’ from even being conceivable. We cannot conceive of square circles or married bachelors. Neither can we imagine balls that are both red all over and green all over, at the same time and in the same sense. And rather than this being stifling to creativity, it should be seen as a good thing. It means that there really is truth to be had. It means that no matter how hard I try to make my own truth, reality ‘pushes back’ on my thinking and constrains me towards true conclusions. If I see 2 billiard balls on the left side of the table and 1 billiard ball on the right side, there is nothing I can do to make those billiard balls amount to 4 or 6 or 7,589. The way the balls actually are, in reality, constrains what I am able to say or even think about them. This is an amazing feature of reality that makes truth possible and attainable. Second, there are ethical limits to human curiosity. No matter how curious we are about the state of a child’s metabolism after 12 days without food, we are never permitted to create that scenario to quell our curiosity–no matter how much it would tell us about digestion, endocrinology, or whatever. This suggests that curiosity is not an intrinsic good–not all curiosity is good. Curiosity is only good when it is aimed at the good.