Below is a conversation with bioethics commentator Kelly Hills (who BTW has a great blog), tackling some of the key issues surrounding the potential use of CRISPR-Cas9 technology to make heritable genetic modifications in humans.
Part of the potential power of some of the human genetic modification procedures being considered for future use is that they are heritable. This means, for example, that embryonic correction of a disease-associated mutation such as in BRCA1, or of a disease-causing mutation such as that behind cystic fibrosis, would prevent not only the future individual but all of their future descendants from having these mutations (and the associated risks or disease). So there is at least a hypothetical strong, transgenerational benefit. At the same time, this means that any risks associated with these genetic modifications (e.g. off-target effects, where other genes are mistakenly edited, leading to negative health outcomes or other unintended consequences) would also be inherited down this family tree, potentially forever. From a bioethical perspective, what considerations, if any, should be given to the heritability of these kinds of interventions?
KH: First, the general caveat: unlike some fields, bioethics isn’t homogenized and if you ask five folks involved in ethics the same question, you’ll get at least 12 different answers.
So, that said: modifying the germ line is definitely a big issue–I’d like to say big enough that I actually don’t think anyone will seriously try to bring a modified human embryo to term for a while. (Unfortunately, I think we all consume enough science fiction to go “yeah but what about [favourite sci-fi/horror story about genetic superhumans]?”) So far as considerations towards the future, I think that looking back towards IVF, ICSI and other reproductive technologies (since ultimately that’s what this is) is necessary. In particular, there was never any systematized following of babies born from various reproductive interventions–so we actually don’t know about potential harms. On the surface, it certainly seems like everything has been okay, but I think it would be an excellent idea to actually do longitudinal studies once we do get to the point of human use of CRISPR/Cas9. I might even go so far as to say that’s an ethical imperative.
With that, I’m actually less concerned about the viability of the science and more concerned about the dialog we have about what we do or do not fix–which is more in line with how I try to approach bioethics (as a conversation between stakeholders vs a bright line of not crossing).
For example, I’m not sure anyone would say “nope, we shouldn’t eradicate Huntington’s Disease,” if the science of CRISPR/Cas9 editing proves viable in animal models, if 3PN zygote experiments like those Zhou, Huang, et al. were doing show that we’ve fixed off-target effects, etc.
But should we eliminate hearing loss?
Someone who views hearing loss as a crushing disability, something being taken away, might say YES very quickly to that question–but someone who was born deaf and considers themselves part of the Deaf community could very well be offended by the idea that people want to eradicate not only something that defines them as an individual, but a rich culture with its own language.
So what, then, is normal? How do we decide what should be fixed and what shouldn’t be fixed? Again, this might seem like a simple question–if it’s not one that you’ve thought about. But “normal” is a lot of assumptions, and some of those assumptions are pretty offensive to folks who don’t fit into the definition–and thus find themselves being talked about as if they’re an error that shouldn’t exist.
And what is abnormal? What sort of modifications shouldn’t be made…and why? Should not, as much as should, is a normative statement.
It can be very tempting, when “doing science,” to merely think about the pieces in front of you: I’m swapping out broken DNA for something better! But within that very sentence, we escape from science and into philosophy, language, culture: how do we define broken? How do we define better? And so we need to very seriously discuss how we define these terms and the others that frame the debate around germline modification, because no matter how hard we try, some genetic variation, including some genetic disease, crops up spontaneously. At this moment in time, we haven’t really managed to create a society that is open and welcoming for everyone, regardless of ability, and I can’t see that we would have a better society waiting for someone who has a spontaneous mutation that cropped up in utero if we eliminated all imperfection and deviation from a norm wherever possible.
Now, all that said, I think it’s also important to move away from the idea of “one gene, one disorder”–something admittedly scary like Huntington’s, an autosomal dominant disease tied to a very specific locus, is pretty rare. Much more common are diseases spread across the genome that aren’t going to be easily fixed by swapping one part for another, and if we can move away from having that “swap RED LEGO A for RED LEGO B” mentality when discussing genome editing, we’ll probably have healthier (and less frustrating) conversations for everyone.
What about the issue of consent both of the genetically modified (GM) child and other possible future descendants, none of whom would have given consent to be part of an experimental procedure changing their DNA since they don’t exist at the time of starting the experiment?
Well, the snappy and fast answer here is: if you don’t exist, you can’t consent. That might seem sort of silly, but the question as stated is kind of loaded, because consent is something that we grant to beings we consider autonomous agents–children, for example, may be able to assent to treatment, but they cannot consent, because they’re not considered fully autonomous agents. A zygote cannot assent, let alone consent (and the debate about the agency and autonomy of a zygote through fetal development is at the heart of much of the abortion debate in America).
Now, what I think you’re actually asking is: is it okay to let parents make momentous and life-changing decisions for their children? As you note in your next question, we already allow for parents to make all sorts of medical decisions about children, whether or not those decisions are medically advised; we give parents the ability to consent for their kids because we have a general belief that parents will do what is best for the child. Which goes back to your last question: surely eradicating mitochondrial disorders forever is a worthwhile goal. But is eliminating autism? A lot of this is about values, not science.
Following up, some have argued against the consent issue being important or uniquely relevant here. For example, they point out that parents make these kinds of decisions all the time with respect to lifestyle or medical choices that also affect their children, grandchildren, etc. For instance, some have argued that parental choices about prenatal nutrition, smoking, drinking, and such can impact future children in substantial ways and without the children’s consent. In some cases, such as with smoking, it could even hypothetically lead to random but heritable genomic changes via mutagens in the smoke. Yet as a society we do not prohibit parents from behaviors perceived to be risky for future descendants, or mandate other behaviors perceived to be positive. In other words, the argument goes that the lack of consent of future children/descendants is not a serious consideration for heritable human genetic modification, given how we as a culture have handled other parental behavior issues. What are your thoughts on this line of reasoning?
KH: “We” as which society? American society definitely values autonomy over just about everything else, and parents are certainly allowed to do things that many of us disagree with, like smoking around children. But it would be a mistake to assume that’s the same for all societies! And, even within the USA, it’s not absolute: consider the parents who are convicted of child abuse for allowing children to die instead of taking them to the doctor, or laws that allow prosecutors to charge women with assault if there are complications in pregnancy or delivery after using illegal drugs.
That line of pedantry aside, I think it’s silly to say “we don’t address epigenetic changes so we shouldn’t bother discussing intentional germline manipulation!” It’s sort of a weird variation on appealing to a bigger or more pressing problem (technically, the informal fallacy of relative privation); how can we discuss germline edits when there are children starving in Africa?
Perhaps a better question would be: if we’re considering preventing a constellation of inherited diseases via germline editing, should we also consider trying to prevent negative epigenetic changes and/or other negative parenting choices? If, after all, certain classes of mutation should be prevented, wouldn’t that hold true regardless of how the mutation is caused?
Another argument made contrary to potential concerns over heritable human genetic modification has been that human choices regarding mate selection and societal influences on breeding choices are already commonplace and accepted forms of what can arguably be called “heritable human genetic modification” amongst humans. In this line of thinking, the editing of a genetic mutation in an embryo such as by CRISPR-Cas9 would not be so different than these other everyday, acceptable human sexual/breeding practices that also result in new genomes in children. What do you think of this kind of equivalency argument?
KH: In general, I think we have to be careful with X=Y arguments. Just because something is “of a kind” doesn’t mean it’s necessarily the same, and it can encourage sloppy thinking to lump “of kind” things together. Humans have been doing “genetic modifications” to food crops for thousands of years, but we’ve still been very careful with GMO crops. They’re “of kind” in that they’re both modifying the food we eat, but careful breeding to reduce large seeds in a watermelon is still different than inserting fish genes in a tomato to boost cold-hardiness, and we recognize that in the scientific evidence we require for safety claims.
As you note, we do already have quite a few everyday acceptable attitudes towards human genetic modification: for example, some Jewish populations utilize PGD and IVF in order to avoid heritable diseases. So again, going back to the last question, the answer here might not be “does it matter, since we already do?” but rather “we already do; should we think about this some more?”
The first published paper to report germline editing of human embryos came out a few months ago and it used CRISPR-Cas9 technology. It reported a number of problems with the technology including mosaicism, off-target effects in the genome, and more. In the publication the authors indicated that they had approval from an institutional ethical oversight committee. Is that sufficient to demonstrate that this work was ethical? From a distance and not knowing specifics of a particular oversight committee’s mission, institutional guidance, membership, etc., from a bioethics perspective how can one evaluate whether such work was given a rigorous review?
KH: You can’t–this is a problem in academic science publishing regardless of the subject. Last year, two researchers assured PNAS they had IRB approval for their Facebook emotional manipulation research. Oops, they didn’t, and the two universities and PNAS played a game of “not me!” hot potato, each denying responsibility for confirming ethical oversight.
Earlier this year, a paper on how to change people’s minds regarding gay rights was retracted by Science when it came out that the researcher, LaCour, had falsified data…but hidden in all of the information that came out about how he falsified data was that he also lied about having IRB approval. (And this I confirmed with the Science EIC; he told them he had IRB approval, and never contacted them to “clarify” the mistake, as he was ordered to do by his IRB when they found the error.)
The majority of academic journals request that authors give a confirmation (via a check box, signature, etc) that they followed the human subjects research rules of the Declaration of Helsinki; you’ll see this somewhere in the first few paragraphs of any paper: “The authors confirm that this paper meets the requirements of the Declaration of Helsinki and has been approved by University IRB.”
But it would be naïve to pretend that this is an issue only with ethics in scientific research. Most scientific papers don’t provide the raw data from their experiments, or lab notebooks, or anything else that can be verified externally. No one actually ran Obokata’s stem cell stresser experiment; we take researchers at their word because we extend trust–and you might say that the existence of sites like Retraction Watch should encourage us to rethink how we approach all of the data submitted with research papers, from IRB approval and ethical research declarations to raw data and so on.
Until there is a radical adjustment in the process of publication, the ethics of each paper has to be judged just like the rest of the data and information in the paper: with skepticism, until the authors support their work.
Following up, in the media it has been mentioned numerous times that a version of the same manuscript was rejected by “elite” journals such as Nature and Science due to ethical concerns. Without factual confirmation of such alleged “ethical concerns”, should the community disregard such reports as simply rumor?
KH: Well, if a paper was rejected several times by “elite” journals for scientific concerns before finally being published, should the science community disregard further scientific concerns without factual confirmation–or should they go look for those scientific concerns and see what they come up with?
As I recall, that’s precisely the situation of the Obokata/RIKEN stem cell issue, which led to–well, a lot of chaos, and ultimately retractions and ruined careers, as your readers are well aware. If folks have ethical concerns, they should by all means chase them–but they should be sure their concerns are actually ethical concerns, rather than unconscious racism, race bias, or other issues that have certainly come up to cloud the discussion since the Huang et al paper was published.
I think some people may have ethical worries but, not being trained in bio/research ethics, be unsure how to vocalize those concerns. In this case, I’d say to that person: contact your friendly local bioethics specialist, ideally someone who specializes in research ethics, and talk to them. Go get a beer or a coffee or whatever the socially acceptable academic drink is where you are, be honest about your concerns, and let your friend/colleague/expert help you figure out where your unease sits. You might come across something new and exciting and get a paper out of it! Or you might learn more about ethics and research and have your fears assuaged at the same time–a win any which way, right?
(And for the record, something akin to this with ethics did happen last year: a study was published about Facebook doing some emotional manipulation studies, social scientists and ethicists had concerns, started talking to one another, and in addition to getting everyone involved to admit that oops, no ethical oversight, quite a few people got papers out of it.)