Reinventing Morality

December 22, 2013

Morality may be something different for everyone; it may be the set of rules handed down by God to Moses on stone tablets, or the system in which karma is passed through the Dharma. But morality is also a decision-making process, one that plays out in the brain in the same way a mechanical decision-making process plays out on a computer. Clerics, theologians, and, in the last century, anthropologists have put forward various answers to the riddle of how our species stumbled upon the concept of goodness. Now, neuroscientists and evolutionary biologists are adding to that understanding. Discoveries in these fields have the potential to achieve something remarkable in this century: an entirely new, science-based understanding of virtue and evil.

Marc Hauser, author of Moral Minds (Ecco, 2006) and director of the Cognitive Evolution Laboratory at Harvard University, is at the forefront of the emerging scientific discussion of morality. David Poeppel of the University of Maryland is on the cutting edge of today’s neuroscience research. I spoke with both of them about what science can contribute to the human understanding of good and bad.

The first thing I discovered is that applying a scientific approach to a murky, loaded issue like morality requires understanding the problem in material terms. You have the event, in this case the moral decision. Then you have the space where the event plays out, the brain. Some aspects of the decision-making process are fluid and unique to the individual. To borrow a crude and unoriginal analogy, these would be like the software code that the brain processes to reach decisions about what is morally permissible and what is not. Other aspects are fixed, like hardware.

Marc Hauser is an expert on the former.

Moral Grammar

Culture is a prime author of this moral software. Cultural influences on moral decision making can include everything from the laws that govern a particular society to the ideas about pride, honor, and justice that play out in a city neighborhood to the power dynamics of a given household. Religion, upbringing, gender, third-grade experiences dealing with bullies, and so on all contribute lines of code to an individual’s moral software. For this reason, no two moral processes will be identical. Academics have given this phenomenon a fancy name: moral relativism. The theory holds that because morality is transferred from groups to individuals in the form of traditions, institutions, codes, etc., everyone will have a different idea of good and bad.

But what if there are limitations to the spectrum of variation? What if, beneath the trappings of culture and upbringing, there really is such a thing as universal morality? If such a thing existed, how would you go about proving it? Enter Marc Hauser, whose research is adding credence to the notion of a universal goodness impulse.

According to Hauser, the human brain learns right from wrong the same way it learns language. The vast majority of the world’s languages share at least one thing in common: a system of guidelines for usage. This is called grammar. Just as languages have rules about where to put a subject, an adverb, and a predicate in a sentence, so too every culture has a set of guidelines to teach people how to make moral decisions in different situations. So just as learning a language means learning not only words, but also a system for putting the words together, the same is true for morality; there are very specific “commandments” that are unique to every culture, but there are also softer usage guidelines. People who have mastered the moral guidelines of their particular culture have what some might call principles or scruples. Hauser calls this a moral grammar.

“A mature individual’s moral grammar enables him to unconsciously generate and comprehend a limitless range of permissible and obligatory actions within the native culture, to recognize violations when they arise, and to generate intuitions about punishable violations,” he writes in his book. “Once an individual acquires his specific moral grammar, other moral grammars may be as incomprehensible to him as Chinese is to a native English speaker.”

Hauser has spent his career studying how people from different backgrounds and cultures rely on different grammars to make moral decisions. About three years ago, he put up a survey Web site called the Moral Sense Test, which is still operating today. Since its establishment, some 300,000 people from around the world have logged on. Participants are asked to answer a series of so-called trolley problems to reveal their unique moral decision-making processes.

The quintessential trolley problem goes something like this: A group of five people is on a train track unaware that a runaway trolley is heading toward them. One person is on a separate track, equally oblivious to what’s going on. If you’re in a control room overlooking the train yard, is it morally permissible to pull a lever and divert the train away from the five people onto the track with the one person, thereby saving the five and killing the one? Or is it morally preferable to take no action and allow the trolley to continue along its predestined path?

“Each question targets some kind of psychological distinction,” says Hauser. “For example, we’re very interested in the distinction between action and omission when both lead to the same consequence…. It’s an interesting distinction because it plays out in many areas of biomedical technology and experiences.”
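
For readers who like to see the logic laid bare, here is a minimal sketch, in Python, of how a trolley-style dilemma and the action/omission distinction might be modeled. The Outcome structure, the numbers, and the penalty weight are invented illustrations for this article, not part of Hauser’s Moral Sense Test or any published model.

```python
# A toy model of a trolley-style dilemma and the action/omission
# distinction. All names and numbers are invented illustrations.

from dataclasses import dataclass

@dataclass
class Outcome:
    label: str
    deaths: int            # how many die under this choice
    requires_action: bool  # True if the agent must intervene (pull the lever)

def utilitarian_choice(a: Outcome, b: Outcome) -> Outcome:
    """Pick whichever outcome kills fewer people, ignoring how they die."""
    return a if a.deaths <= b.deaths else b

def omission_biased_choice(a: Outcome, b: Outcome,
                           action_penalty: float = 2.0) -> Outcome:
    """Weight deaths caused by deliberate action more heavily than deaths
    allowed through inaction -- one crude way to encode the intuition
    that acting to kill feels worse than failing to save."""
    def cost(o: Outcome) -> float:
        return o.deaths * (action_penalty if o.requires_action else 1.0)
    return a if cost(a) <= cost(b) else b

pull_lever = Outcome("divert the trolley", deaths=1, requires_action=True)
do_nothing = Outcome("let it run", deaths=5, requires_action=False)

print(utilitarian_choice(pull_lever, do_nothing).label)      # divert the trolley
print(omission_biased_choice(pull_lever, do_nothing).label)  # divert (1 * 2.0 < 5)
```

With these made-up numbers, even doubling the cost of action-caused deaths still favors pulling the lever; the bias only flips the answer once acting is weighted more than five times as heavily. That a single weight can separate two people’s verdicts on the identical scenario is, loosely, the kind of psychological distinction the survey questions probe.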

Surveys such as these aren’t new. But Hauser’s Web-based survey model allows him to ask these questions of people who come from all sorts of cultural, economic, and educational backgrounds, as opposed to polling the opinions of a handful of Ivy League undergrads.

“There’s going to be that kind of variation culturally,” he says. “But what the science is trying to say is, Look, could the variation we observe today be illusory? Could there be real regularity, universals that underpin that variation, fundamental to how the brain works?”

Though the Moral Sense Test is ongoing, it is adding significantly to an understanding of moral reasoning across different cultures. Among Hauser’s most interesting findings: People who don’t adhere to a specific religion and people who do are remarkably similar in the way they make moral decisions.

“This is independent of the benefits that people obtain from being associated with religion; I have nothing to say about that,” he insists. “This is more a question of … does having a religious background really change the nature of these intuitive judgments? The evidence we’ve accumulated suggests no.” His research shows that people who are religious and people who claim to be atheists show the same moral patterns and answer the same way when they’re presented with a whole host of moral dilemmas. Where they diverge, says Hauser, is when the question touches on political or topical issues about which people are likely to have preformed and not necessarily educated opinions.

This is one area where he hopes moral science can make a real difference.

“If you ask most people, Do you think stem-cell research is morally good or morally bad, many people will say bad,” says Hauser. “But then you ask, what is a stem cell? Most people won’t have a clue. What they’ve often done, they’ve mapped stem-cell research onto something else, [such as] killing a baby. If killing a baby is bad then stem-cell research is bad. So that’s a matter of taking a moral problem one is familiar with and using it to judge a new case that one is not familiar with. We do that all the time…. What science should be doing is trying to educate us, and say Look, the blastocyst is a cluster of cells that stem-cell research is focusing on … nothing like a baby. It’s the potential, with lots of change and development, to become a baby. But it’s not a baby. There’s an onus on researchers to educate. In the absence of education, what people do is examine moral cases in terms of what they’re familiar with.”

He’s received a mixed reaction to his findings. Some people, he says, see the work as artificial, arguing that what morality is really about is how we behave. In other words, according to some, morality can’t be judged by how a person answers a survey, only by what that person does in real life.

In the future, new technologies like virtual reality will test this hypothesis. But first, researchers need to learn more about how the process plays out in the brain.

The Moral Hardware

In keeping with our computer-brain analogy, some aspects of the moral decision-making process are fixed; namely, the platform on which this process occurs. You might call this the hardware, the physical brain itself. We all process moral decisions based on different assumptions or beliefs, but the process happens in the same place for each of us, an area in the front of the brain called the ventromedial prefrontal cortex. This is where our emotional experiences – religious, traumatic, joyous – connect with our higher-level social decision making to give us a sense of good or bad.

So now that science has found the region involved in moral decisions, how long before some Silicon Valley start-up gives us a machine to read good or ill intentions, a veritable evil detector?

Not anytime in the foreseeable future. The human brain is an object of unfathomable complexity. To imagine that it might suddenly be rendered as transparent and simple as the items in an Excel spreadsheet is to commit hubris. This is why David Poeppel of the University of Maryland likes to keep expectations realistic. He studies language in the brain. Just as Hauser is focused on the language of morality, Poeppel is focused on how vibrations in the ear become abstractions. It’s next to impossible, he says, to see how a brain formulates big abstractions, like Locke’s Second Treatise of Government. He hopes one day to understand the neural processing of words like dog or cat.

Poeppel’s current work involves magnetoencephalography (MEG), an imaging technique that measures the brain’s electrical signals in real time. He was kind enough to invite THE FUTURIST to watch some experimentation. We found him in a lab with some of his brightest doctoral students, several gallons of liquid nitrogen, a $4 million MEG machine, and a girl named Elizabeth – who was having her brain activity, her innermost thoughts, displayed on a big bank of monitors.

It looked like squiggles.

“What we’re looking at are the electrical signals her brain is giving off as she responds to certain stimuli,” Poeppel told me. In the case of Elizabeth, the stimuli were blips on a monitor and ping noises. The spikes and squiggles on the graph indicated that she was “seeing” the blips, without her having to make any other signal.

Poeppel doesn’t believe we’ll ever be able to hook people up to a machine and get a complete transcript of their thinking. “We aren’t capable of that kind of granularity,” he says. But what his – and his students’ – experiments with MEG do show is the brain reacting to stimuli in real time, which can later reveal which parts of the brain react to which stimuli and how much electricity those regions throw off.
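
To give a flavor of what tracking those reactions looks like on the analysis side, here is a minimal sketch, assuming NumPy and entirely made-up signal sizes, of the standard trick of averaging many stimulus-locked trials so that random background activity cancels while the evoked response survives. It illustrates the general technique, not Poeppel’s actual pipeline.

```python
# A minimal sketch of trial averaging, the standard way an evoked
# response is pulled out of noisy recordings. Every shape and
# amplitude here is a made-up illustration.

import numpy as np

rng = np.random.default_rng(0)

n_trials, n_samples = 200, 600     # e.g., 0.6 s of data per trial at 1 kHz
t = np.arange(n_samples) / 1000.0  # time after stimulus onset, in seconds

# Hypothetical evoked response: a small bump ~100 ms after the blip,
# buried under much larger random noise on any single trial.
evoked = 1e-13 * np.exp(-((t - 0.1) ** 2) / (2 * 0.01 ** 2))
trials = evoked + 5e-13 * rng.standard_normal((n_trials, n_samples))

# Averaging time-locked trials shrinks the noise by ~1/sqrt(n_trials)
# while the stimulus-driven signal stays put.
average = trials.mean(axis=0)

peak_ms = 1000 * t[np.argmax(average)]
print(f"evoked peak recovered at ~{peak_ms:.0f} ms after stimulus onset")
```

Single trials look like squiggles for a reason: the evoked bump is tiny next to the background activity, and only repetition and averaging make it visible.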

The way the brain reads little blips may not seem to be correlated with morality, but it is. Returning to the computer-brain analogy, Poeppel says that the moral rules we follow, the impulses that tell us when to push the button and divert the trolley and when not to, are set in a sort of default position when we’re born, just like the default settings on your PC. “Those are constant, immutable. They form the basis of morality. And then the switches are set to particular values as a function of experience. There’s a close interaction between the universality (meaning the brain hardware) and cultural specificity (the software).”
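
Poeppel’s switch-setting analogy has a natural software rendering. The sketch below is only that, an analogy made literal, with principle and parameter names I invented; it claims nothing about actual neural variables.

```python
# A toy rendering of the "default switches" analogy. The frozen set
# stands for the fixed universal hardware; the dict holds switches
# that experience can reset. All names here are invented.

UNIVERSAL_PRINCIPLES = frozenset({
    "harm_requires_justification",
    "intentions_matter",
    "fairness_is_tracked",
})

# Hypothetical factory defaults, present at birth in this analogy.
DEFAULT_SWITCHES = {
    "action_vs_omission_weight": 1.0,  # neutral: acting and omitting weigh the same
    "ingroup_priority": 0.5,
}

def acquire_culture(defaults: dict, experience: dict) -> dict:
    """Experience resets the adjustable switches; the principles never change."""
    settings = dict(defaults)
    settings.update(experience)
    return settings

adult = acquire_culture(DEFAULT_SWITCHES, {"action_vs_omission_weight": 1.8})
print(sorted(UNIVERSAL_PRINCIPLES))  # fixed, whatever the culture
print(adult)                         # switches tuned by upbringing
```

The shape of the claim is the whole point: the frozen set never changes, while experience overwrites the adjustable values.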

One day, MEG research, trolley surveys, and other aspects of moral science will reveal the key aspects of that correlation.

Amazingly, even though neuroscience is still in its infancy, it’s already yielding insights into moral issues, such as race bias. According to Poeppel, studies have shown that “people make decisions that reflect race biases even when they’re aware of what they’re doing.” Race bias is a reaction that arises from lived experience. What MEG, fMRI, and other neuroimaging techniques give us is a picture of how those experiences change the physical brain and how the physical brain recreates, reimagines, and recomputes them all the time.

“Does this reflect very deeply embedded mechanisms of decision making? If you’re aware of it, can you neutralize it, can you override it and reeducate the system? Of course you can. The brain is plastic. It changes all the time. That’s what learning is. But we still don’t have a real explanatory theory for how that works.” He adds, “It’s an area where we will see progress in the years ahead.”

Where the mysteries of morality are concerned, that progress will likely take the form of more questions than answers.

Moral Science and Your Future

A common reaction to the radical breakthroughs that seem to occur daily in neuroscience is impatience for ever greater and more important breakthroughs. If we know what lying looks like under fMRI (goes this line of thinking), when will we be able to inoculate against deceit? If we can diagnose the roots of racism, when will we be able to predict which student will go on a violent shooting spree? If we know that bias has something to do with the amygdala, when will we be able to see it on our computer screens?

The emerging science of morality will not relieve us of the hard work of examining our own motivations and impulses. But it will present us with a lot more data. As this line of inquiry progresses, as new neuroimaging techniques and new technologies like virtual reality come to bear on this problem, we will likely lose certainty about what is right and wrong rather than gain it.

People answering trolley problems will surely give different answers when they’re allowed to “live” the survey in a virtual-reality setting, when they can see the trolley, hear it approach, meet a computer-graphics-generated version of the person to be saved or squashed. When we can view that decision-making process using fMRI, MEG, or some other brain-imaging technique not yet in existence, we may be able to see how slightly different firing patterns play out in different decisions. We’ll examine people’s actions in light of their brain activity and reach new understandings, and probably all sorts of hasty conclusions as well.

More importantly, and controversially, the science of morality may bring into doubt some of our most deeply ingrained cultural perceptions about right and wrong. We’ll have new, richer opportunities to examine our actions in the presence of consequences. We probably won’t like what we see.

Those awkward realizations may be the greatest value of moral science.

Consider that we’re called upon to make moral decisions daily. Every so often, we’re given an important one, a decision that will radically affect someone else’s life. Sometimes the decision comes masked as a professional matter, as it did for Cook County, Illinois, sheriff Tom Dart, who, when tasked with evicting individuals whose only crime had been renting from a landlord who had defaulted on his mortgage, decided against action and briefly suspended such evictions. Sometimes the choice comes in a more dramatic form, as in the case of Wesley Autrey, a New York man who jumped onto a set of train tracks to save a stranger from a speeding subway.

The moral actions of Dart and Autrey strike us as exceptional in their selflessness. But such feats of heroism are the products of the same moral decision-making process that occurs in each of us. When we are called upon to commit to such an act, we first make the decisions that are easiest for us. Our faith (or lack thereof), upbringing, official job titles, obligations to our bosses or clients, and our various experiences justify action in the interests of self-preservation and in accordance with convention.

But suppose we were each given a better, more sophisticated understanding of the root of morality, its universal core. We suddenly have the opportunity to examine, perhaps even experience, the other option and explore our emotional aversion to it. We suddenly have a new tool to call upon, our private knowledge of the neurological decision-making process. We play the choice out differently, possibly picturing the person on the other end of the problem, and we reach a different conclusion and commit to a different action.

Something has happened. Insight into the moral deliberative method has yielded a result that is more in line with a broader, more rational, and surely more accurate understanding of what is good. The process has been improved.

And the future has changed, perhaps for the better.

Sidebar

Cyborgs Among Us

Science is daily gaining new insights into how the brain works. That growing field of knowledge is already coming into play in the world around us.

* Washington University medical students used fMRI to show that children use more of their brains and different portions of their brains than adults do when they perform word tasks.

* A group of Japanese researchers from ATR Computational Neuroscience Laboratories in Kyoto, working with researchers from the Honda Research Institute in Saitama, have used fMRI data to program a prosthetic hand to mimic the movements of a real hand.

* Perhaps most remarkably, researchers at Columbia University have demonstrated that fMRI can detect lies or truth more accurately than polygraphs.

– Patrick Tucker

Sidebar

David Poeppel, Master of Synthetic Telepathy

University of Maryland neuroscientist David Poeppel, along with researchers at University of California, Irvine, and other schools, is part of a $4 million U.S. Army grant to achieve what the Army is calling synthetic telepathy. This sounds like something out of Hollywood, but, says Poeppel, electronic telepathy is absolutely possible so long as “communication” is understood to be electrical signals rather than words.

“Suppose you tap out two rhythms,” says Poeppel. “I train you to get really good at tapping out those particular two rhythms, so you can do it mentally. You have motor memory connected to those two rhythms. That can give a big signal (readable via MEG). If I can extract that, I have a signal I can work with and send it.” All thoughts create electrical signals.

The experimenters hope to train subjects to make those signals fire in patterns that can convey information, like Morse code. The code could conceivably be picked up by a sensor trained to focus on a particular electromagnetic frequency and then sent to a computer and resent to another sensor, allowing for something like helmet-to-helmet telepathic communication.
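
The Morse analogy is easy to make literal. The sketch below assumes the genuinely hard part, classifying each mentally rehearsed rhythm from the MEG sensors, is already solved and reduced to “short”/“long” tokens; the token names and lookup-table decoder are my simplification, not the project’s design.

```python
# A toy decoder for the Morse-like scheme: two distinguishable rhythms
# stand in for dot and dash, with "gap" separating letters. In a real
# pipeline, the tokens would come from a classifier on the MEG signal.

MORSE = {".-": "A", "-...": "B", "-.-.": "C", ".": "E", "....": "H",
         "..": "I", ".-..": "L", "---": "O", "...": "S", "-": "T"}

def decode(rhythm_events: list[str]) -> str:
    """rhythm_events: a stream of 'short' / 'long' / 'gap' tokens."""
    letters, current = [], ""
    for event in rhythm_events + ["gap"]:  # trailing gap flushes the last letter
        if event == "gap":
            if current:
                letters.append(MORSE.get(current, "?"))
                current = ""
        else:
            current += "." if event == "short" else "-"
    return "".join(letters)

# "HI" is .... followed by ..
events = ["short"] * 4 + ["gap"] + ["short", "short"]
print(decode(events))  # -> HI
```

The decoding itself is trivial; everything in a real system would hinge on how reliably the two rhythms can be told apart in noisy sensor data.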

How else will neuroscience affect our lives in the decades ahead? Prescription medications for mental health will be far more effective than those currently available, says Poeppel. We’ll treat most sight or hearing loss with brain prostheses like the cochlear implant. We’ll discover the real roots and effects of mental illness, and mental disorders will become as mundane as a common sports injury and will be treated as such. Our cognitive functioning will become far clearer and better understood.

According to Poeppel, the number of people going into the field (the Society for Neuroscience boasted some 38,000 members in October 2007) guarantees a “full frontal assault” on the mysteries of the brain in the years ahead.

– Patrick Tucker

Sidebar

A girl named Elizabeth undergoes a brain scan at the University of Maryland, part of an experiment to discover the electromagnetic signal the brain sends out in response to auditory and visual stimuli.

A magnetoencephalography (MEG) experiment at David Poeppel’s University of Maryland lab. Because MEG measures electromagnetic brain activity rather than hemoglobin flow, as fMRI does, its scans show results that are much closer to real time. Poeppel is part of a multi-university U.S. Army grant to achieve “synthetic telepathy.”

Sidebar

A Brave New Psychocivilized Society

Wildly optimistic notions about the potential of neuroscience aren’t new. In the 1960s and 1970s, famed neuroscientist José Manuel Rodriguez Delgado predicted that innovations in cybernetics and brain anatomy would lead to a “psychocivilized society.”

Delgado’s experiments with cybernetic brain implants in monkeys, apes, and even cows were revolutionary for their time. In one famous instance, he was able to stop a charging bull by sending a radio signal into a tiny electrode receiver (a stimoceiver) implanted in the animal’s caudate nucleus, an area of the brain that helps control voluntary movement.

In another experiment, he put several small macaque monkeys in a cage with an aggressive male macaque that had been similarly wired with a stimoceiver to his caudate nucleus. Also in the cage was a lever that – when activated – sent a signal to the implant. Delgado describes the results of the experiment in his book Physical Control of the Mind, writing, “A female monkey named Elsa soon discovered that Ali’s [the male] aggressiveness could be inhibited by pressing the lever, and when Ali threatened her, it was repeatedly observed that Elsa responded by lever pressing. Her attitude of looking straight at the boss was highly significant because a submissive monkey would not dare to do so, for fear of immediate retaliation…. Although Elsa did not become the dominant animal, she was responsible for blocking many attacks against herself and for maintaining a peaceful coexistence within the whole colony.”

In the late 1960s, Delgado was outspoken in his assertions that neuroscience, and particularly the suppression of urges through electrical stimulation, could lead to a world without war, strife, crime, or even cruelty.

“We are only at the beginning of our experimental understanding of the inhibitory mechanisms of behavior in animals and man, but their existence has already been well substantiated. It is clear that manifestations as important as aggressive responses depend not only on environmental circumstances but also on their interpretation by the central nervous system where they can be enhanced or totally inhibited by manipulating the reactivity of specific intracerebral structures,” he wrote.

– Patrick Tucker

About the Author

Patrick Tucker is the senior editor of THE FUTURIST and director of communications for the World Future Society. To obtain a free version of this article online, to see more of Aaron M. Cohen’s exclusive photos of David Poeppel’s MEG lab, or to read Tucker’s interviews with Marc Hauser and David Poeppel, go to http://www.wfs.org.

Originally published in THE FUTURIST, January-February 2009.
