Over the next few weeks, I’m going to bring over some of the more relevant posts from my other site, on the basis that they will be interesting to you all here and reach a wider audience as curious as this one clearly is. This first post deals with bioethics. To me, it’s like the game theory of philosophy: many of the most interesting questions live there, as biomedicine and technology push the envelope over the next century. In fact, many are already arguing that the posthuman world is upon us. So how will morality work as technology blurs the lines between human, animal, and machine?
Two scenarios from which to begin this discussion:
- Someone straps a computer onto the brainstem of Merriweather the Chimp in an experiment to translate her brainwaves into speech, and develops sophisticated software to interpret them. The experiment turns Merriweather into a chimp-borg: she gains the ability to enter a discursive space not just with trainers who’ve learned ASL (an interaction the public has largely dismissed as illegitimate as exchange on equal footing), but with all of humanity, in her own voice. And she tells humanity of her thoughts, and fears, and dreams. She hopes, she laughs, she wonders, and she cries. She is, by every measure we administer, a moral person. Right? Or no?
- Or how about one that, while less immediately clear, will probably happen first: it looks like chimps are going to be granted “personhood” status in the next ten years or so. This will mean that, legally, they have to be treated as persons (sidebar: this doesn’t mean they will have to be treated as equal to humans in every capacity. Rather, it will be an instantiation of law, informed by science, which “fills” chimps as “legal vessels” with rights). At the same time, this will be the first definitive act by humanity acknowledging that humanity doesn’t have a monopoly on moral instantiation. So, chimps are granted personhood status and become the moral equals of humans. Then someone takes stem cells from the brain of a chimp and implants them into a dog fetus. The dog doesn’t develop any morally relevant capacities (cognition, etc.), but the cells came from a moral being. And we’ve said a chimp is a legal, ethical, and moral person, just like a human. And in the past, moral philosophy (which directs juridical philosophy) has said: if it comes in part from a moral being, it’s morally equal. So what is this dog, then? A moral being? Or not?
It’s clear to anyone thinking rationally, soberly, and with self-reflection that morality and moral frameworks are going to be increasingly contested spaces during the twenty-first century, especially as genetics continues its foray into splicing and transfection and we enter fully the era of the posthuman. The creation of nonhuman chimeras is a rich, exciting field of inquiry and therapeutics. It is, without qualification, one of the next frontiers of genetics, with all the questions that follow such transitions.
Until now, the standard operating procedure for deciding whether a nonhuman animal is morally relevant has relied on anthropocentric cell-origin arguments, i.e. if it came from a human, the chimera attains morally relevant status. (Morally relevant status just means we have to treat it as we would a human when it comes to questions of morality; the operative word is “relevant.”) So if human cells were used, the new animal is a chimera and is the moral equivalent of a human being. If no human cells were used, it is not.
But that position is becoming increasingly nebulous thanks to advances in genetics and experimental technique, and thus increasingly difficult to defend; see the two examples above. Moral philosophers are, as a result, running into a problem that is ever harder to parse: how do we treat chimeras whose cells originate from more than one species?
It has become clear, in other words, that we need a more nuanced framework for defining moral relevancy, or we run the very real risk of not only violating some philosophical boundary, but, as any good lawyer will tell you, legal ones as well. After all, jurisprudence has been in the past, and remains today, informed and even directed by political and moral philosophy. The exciting thing to historians of science is that, in a post-enlightenment world, moral and political philosophy has itself seen the replacement of previous vocabularies and epistemologies of religion with vocabularies and epistemologies of science.
One of the solutions offered to get around the cell-origin problem is to consider capacity in a more complex way instead. Monika Piotrowska of Florida International University recently suggested a two-fold solution. If you take brain stem cells from a human in one case and inject them into a mouse, and in a second case take brain stem cells from a chimp and inject them into a mouse, you’ve (arguably) got chimeras with indistinguishable morally relevant capacities (because both are capable of, for instance, rationality or sentience, and thus we need to treat them as moral equals).
But what if the cell transfer doesn’t result in the acquisition of any distinguishable morally relevant capacity (say you didn’t transfer brain stem cells, or the experiment wasn’t concerned with sentience or rationality), she asks? You still need to consider moral capacity. So how do you do it?
This is where Piotrowska suggests cell origin can still play a role: if the cells came from a phylogenetically morally relevant origin (like humans), then you can still grant moral relevance to the chimera.
Some philosophers have a problem with this approach because it retains an anthropocentrism and relies on vague definitions of “easy-to-determine” and “difficult-to-determine” capacities.
I agree with this criticism, not least because the approach completely falls apart when you consider non-organic intelligences, like AI. The larger reality when it comes to nonhuman animals is that, outside the subdisciplines that make up moral philosophy, there will be very little reason to construct any hierarchy or dichotomy at all once we are no longer measuring nonhuman animals for our dinner plates and work harnesses. In a world of synthetic protein and cheap, universal, open-source robotics, all nonhuman animals will enjoy “protected from” status, and history, as the general public understands it, will see us in this particular instance as having discussed the symptoms rather than the source of a larger problem.
Monika Piotrowska, “Transferring Morality to Human–Nonhuman Chimeras,” The American Journal of Bioethics 14(2): 4–12, 2014.
*image credit, the wonderfully talented Patricia Piccinini