Human Subjects Research Regulation: Perspectives on the Future

I. Glenn Cohen and Holly Fernandez Lynch

Print publication date: 2014

Print ISBN-13: 9780262027465

Published to MIT Press Scholarship Online: January 2015

DOI: 10.7551/mitpress/9780262027465.001.0001

Getting Past Protectionism: Is It Time to Take off the Training Wheels?

Chapter: 22 Getting Past Protectionism: Is It Time to Take off the Training Wheels?
Source: Human Subjects Research Regulation
Author(s): Greg Koski
Publisher: The MIT Press
DOI: 10.7551/mitpress/9780262027465.003.0028

Abstract and Keywords

The current protectionist paradigm for ethical review of human subjects research is based on an implicit assumption that scientists, left on their own, are either unable or unwilling to fulfill their personal responsibilities to ensure the safety and well-being of individuals participating in their studies. A sound ethical framework exists, but a well-intended effort to implement its principles through rules, regulations, and guidance has failed to achieve its intended goals. The resulting process itself is excessively focused on achieving regulatory compliance at the expense of meaningful ethical consideration. Calls for reform focus on relieving regulatory burdens and expediting review and approval without substantively changing the underlying protectionist assumptions. Consistent with modern regulatory science, a pragmatic alternative paradigm based on existing and proven models of professionalism is proposed to address these challenges with greater effectiveness and efficiency. This new approach would require that investigators and research teams be appropriately qualified, assume responsibility for their actions, and aspire to the values delineated by Henry Beecher more than half a century ago—a reasonable and achievable, although so far elusive, goal.

Keywords: Ethical review, Protectionism, Regulation, Regulatory science, Responsibility, Compliance

That human subjects research is a regulated activity in the United States and other countries is an interesting phenomenon. After all, how many areas of scientific inquiry are subject to governmental regulation? Peer review of research is common in virtually all fields of science, generally as part of the funding process and, of course, for publication, but prospective review and approval of research involving human or animal subjects is different—it is required by law.

In the second half of the twentieth century, public and governmental concern generated by media accounts of irresponsible conduct and unethical research conducted by legitimate scientists in major academic institutions and government agencies led to a national discussion of the ethical principles underpinning research on human subjects. As a direct result and with all good intentions, the government enacted legislation and eventually promulgated rules requiring that all such research be reviewed and approved prior to its initiation to ensure that it was justifiable, both scientifically and ethically.

This institutional review board (IRB) process, also known as the “human subjects protection process,” has changed little since it was implemented in the mid-1960s, first by policy, and then by regulation in the 1980s. This apparent resiliency should not, however, be taken as a testament to its value or effectiveness. Many, indeed a rapidly growing number of scientists, institutional officials, and even the regulators themselves, are very much in the process of “rethinking” this long-standing regulatory framework, as many of the contributions in this volume aim to do.

This rethinking, driven initially by dissatisfaction among scientists concerned with the excessively compliance-focused approach taken by many, if not most, IRBs, has now spread to many institutional officials, members of their IRBs and the staff who manage them, as well as the federal government itself. In reaction to these widespread concerns, and as discussed in greater depth by Davis and Hurley in chapter 1 of this volume, DHHS (with the cooperation of the Office of Management and Budget) offered seven basic proposals to amend the Federal Regulations for the Protection of Human Subjects in Research (DHHS 2011). The seven recommendations were largely obscured by the seventy questions posed to stimulate public comment on the proposals, raising concerns in some minds about the authors' confidence in their own recommendations.

Since the ANPRM was issued, many individuals and organizations have weighed in on its perceived merits and shortcomings, and a number of them have contributed chapters to this volume presenting several innovative proposals about how the Common Rule might be modified, as the ANPRM says, "to both strengthen protections and reduce burdens, delays and ambiguity for investigators." While these are no doubt laudable goals, tinkering around the margins of our existing regulatory framework, as the ANPRM does, is not our only way, and perhaps not our best way, forward.

Rather than simply consider the future of human subjects research regulations, we might do well to broaden our perspective and challenge ourselves to look beyond a mere remodeling of what we have. Instead, we might begin by asking a more fundamental question: If we had an opportunity to start from scratch, would we build the same protective fortress today?

To answer this question, we might first consider, as many have before, the roots of our present approach and the goals to which it allegedly aspires. From the outset, as Carol Levine has said better than anyone, we have built a system that is “born in scandal and reared in protectionism” (Levine 1988, 167). Few can disagree with this apt characterization, and yet even as we nod our heads in agreement, we do so without evidence or good reason to believe that we are in fact achieving our goals or that there are no more effective alternatives available for consideration.

The regulatory approach that we have adopted over the last four decades for protection of human subjects in research is based on the implicit assumption, if not an explicit accusation, that left to their own judgment and without constant oversight, scientists will do harm to their human subjects—essentially we are taking the position that scientists are somehow irresponsible, bad people.

Surely, the oft-recited litany of events—tragedies, abuses, ethical lapses—however we might choose to characterize what happened at Tuskegee, Willowbrook, the Jewish Chronic Disease Hospital, the Fernald School, the Human Radiation Studies, and so many others, is not something in which science or society can take pride (Brandt 1978; Beecher 1966; Jones 1993; Goldby 1971; Katz 1972; ACHRE 1995). Nor will we count among the more glorious accomplishments of science the losses of Jesse Gelsinger, Ellen Roche, and others, tragedies that we claim will never be forgotten, even as too many already have (Raper et al. 2003; Steinbrook 2002). But ought we to use these events to justify continuation of an approach that even the Office for Human Research Protections, the very office responsible for oversight of IRBs and enforcement of the regulations we have created, now seeks to change because it does not offer sufficient protections and imposes excessive burdens and ambiguities on investigators and the human research endeavor?

Today, nearly half a century or more after most of the tragic events cited above occurred, do we still truly believe that scientists are so untrustworthy, so irresponsible, so poorly trained, or so selfish—choose whichever characterization you wish—do we truly believe that the risks to human subjects in research are so great that scientists cannot be allowed to design and engage in their scientific studies without prior review and approval by an oversight committee? If so, then surely we have failed miserably in our efforts to educate and train responsible investigators.

For the sake of discussion, consider an analogy—the approach that we have taken to protect our air transportation system and its passengers from terrorists. Because of a small number of admittedly horrific events, we have made an assumption that all passengers about to get on a plane are potential terrorists intent on blowing our planes from the sky. As a result we spend nearly $10 billion annually and cause endless hours of delays and inconvenience for passengers without even knowing whether or not the system we have built has actually done anything to make air travel safer. There have been several thwarted attacks in recent years, but none of them was stopped by the TSA security screening process. Those attacks were prevented by effective intelligence activities and surveillance, while passengers with bombs in their shoes or their underwear have managed to get onto planes despite our best attempts and technology to prevent them from doing so.

Similarly the deaths of Ellen Roche and Jesse Gelsinger, and the terrible events in the TeGenero study in the United Kingdom, in which six young men narrowly escaped death by cytokine storm caused by a new biologic agent in a poorly designed phase 1 study, all occurred in research studies that had been reviewed and approved by IRBs or ethics committees (Suntharalingam et al. 2006; Wood and Darbyshire 2006).

If either the TSA security system or the "human subject protection system" were subjected to a thorough, rigorous analysis of their effectiveness as preventive programs, they would almost surely be abandoned on the grounds that they are simply not cost effective (Mann 2011). And yet we persist in believing that they are effective, more out of hope or desperation than out of any evidence-based reason. It is as if doing something, anything for that matter, is better than doing nothing at all. While that is probably true, could we not be doing something better?

Imagine for a moment that a group of physicians were discovered doing abusive, unethical, and harmful things to their patients in the course of their medical practice. The events are reported in the media, and Congress, in its outrage, convenes a National Commission on Patient Safety and Protection of Patients from Medical Risks. The Commission meets for two years at the Petrie–Flom Center at Harvard Law School and eventually issues "The Cambridge Report" (Cambridge not being very far from Belmont). The report identifies several ethical principles for responsible conduct and oversight of medical practice to prevent patients from being harmed by irresponsible practitioners. Soon the Department of Health and Human Services issues new regulations requiring that every physician must submit a treatment plan to an Institutional Medical Practice Committee for prospective review and approval prior to initiation of treatment, except in cases requiring immediate, critical life-saving therapy.

While at first glance such an approach would seem completely disproportionate to the risk and inappropriate for the medical profession, is it in fact so very different from the approach we took to deal with reports of abuses in human subjects research? Such an approach, like our approach to research abuses, begins with an assumption that all physicians are irresponsible and cannot be trusted to take care of their patients. And we would therefore create a system for prospective review and approval of treatment plans that would pose huge impediments to the practice of medicine and timely delivery of care to patients in need, all at great cost to society with no commensurate benefits and no demonstrated effectiveness. Indeed the committees are not there when the care is delivered—ultimately the committees and the patients must rely on the professionalism of the physician. Extrapolating to the realm of human research, so too are the IRBs absent when investigators actually conduct their research, an observation duly noted by Henry Beecher (1966). We thus find ourselves confronted by a paradox in which the person best positioned to actually prevent harm is also the person most likely to do harm, and apparently the person we trust least to protect the rights and well-being of their research subjects—the investigator (Koski 1999).

Would society tolerate such a costly, inefficient, and ineffective approach to regulation and oversight of medical care? Of course not, but neither do we simply allow doctors to run wild, doing anything they want and possibly harming their patients without oversight or discipline. In medicine we have started with a different set of assumptions and taken a different approach. We have begun with assumptions that physicians are caring, well-intended, well-trained, competent professionals who wish to take care of their patients, to "first do no harm"—to ensure their patients' safety, health, comfort, dignity, and privacy—even as they strive to treat and prevent disease.

How can this assumption be justified? The simple truth is that we have built a system to accomplish these goals using the tools of professionalism. Physicians are required to undertake a rigorous educational experience—college, medical school, residency, and specialty training at accredited institutions—and at each step along the way they are subject to objective examinations. They are required to obtain professional certification, and before they are allowed to practice, they must be licensed and granted privileges to care for patients through a rigorous credentialing process. To maintain their privileges, they must receive continuing education. There are also well-developed, fully integrated processes for peer review, adverse event reporting, oversight, and discipline to ensure that physicians who fail to conduct themselves according to the standards of the profession are stripped of their privilege to practice medicine.

Ironically, in the profession of medicine, the only activity that is somehow exempt from these processes is clinical research—research involving human subjects. What if we were to apply the professional paradigm that we currently use for every other field of medicine to research?

It is interesting to note that the Food and Drug Administration and the Department of Health and Human Services, which oversee most biomedical research involving human subjects, rely on rules and regulations to try to ensure responsible conduct, whereas in the rest of medicine, it is professionalism that regulates physician behavior—indeed it is the physicians themselves who embody the spirit of professionalism that guides their conduct, not the need to comply with regulations. As a backstop to this professional paradigm there is, of course, a legal system that affords individuals who believe that they have been harmed or neglected by their physicians a right of individual recourse. Physicians who engage in substandard and/or irresponsible conduct or negligence are subject to medical malpractice lawsuits and claims of liability, but even here, there is a strong element of "peer review" to ascertain the prevailing standard of care and the appropriateness of physician conduct. While the medical malpractice system may be a deterrent to irresponsible physician conduct, few would argue that fear of legal action is the primary motivator for a physician's professional behavior. Nonetheless, its potential impact is undeniable and capable of inducing unfortunate and costly consequences, namely the practice of "defensive medicine." Indeed the licensing and tort system for dealing with medical malpractice is itself problematic. Accordingly overreliance on the medical malpractice model in the research realm could well have an impact as detrimental to the responsible conduct of research as a rigid system of regulatory requirements, so a note of caution is warranted.

Development and implementation of a professional paradigm for research involving human subjects is likely to be at least as effective for ensuring responsible, ethical, safe conduct of clinical research as is the existing, compliance-focused approach, and it would be far less burdensome than the system for which so many are now demanding extensive reform. Reform is no longer a reasonable or appropriate goal. As has been said before, we cannot continue to do things essentially the same way, based on the same flawed assumptions, and expect a different outcome. A complete redesign of the approach, a disruptive transformation, is necessary and long overdue.

Over the past thirty years, since the adoption of the Federal Regulations, our efforts to protect human subjects from research harms have focused on regulatory compliance achieved through education and oversight, not ethics. Each time the compliance paradigm has failed, we have intensified efforts to train investigators and hope for a better outcome. In essence we have done little more than put training wheels on the bike, with IRBs and HRPPs (human research protection programs) running alongside the investigators and research teams to keep them from falling down without fully appreciating what Henry Beecher said half a century ago, when ethical review and oversight were first adopted as a means of preventing harm to research subjects—that the only true protection of the safety, well-being, and rights of research subjects is the well-trained, well-intended, conscientious investigator (1966).

Jonathan Moreno (2001, 16) has written eloquently that what was once a system of "soft protectionism" has morphed into a rigid protectionist paradigm, and he believes that we have passed the point of no return—"Goodbye to all that," he says. But if we are to accept Moreno's position, what does the future hold—more of the same? The very thought once again evokes Einstein's oft-quoted definition of insanity: "doing the same thing over and over again and expecting different results." With great respect for Moreno's opinion, I argue not only that we can change the failing protectionist paradigm, but that indeed, in the interests of science and society, we must. Rather than focus on how we should revise the current regulatory framework, we should focus on how to achieve the true goals of ensuring that research involving human subjects is done well and only by trained, certified professionals who take their responsibilities to ensure the well-being of their subjects as their highest priority. If we choose to take such a course, we will not abandon completely the current system for ethical and scientific review, which does indeed serve us well in many ways, but we could use it differently. All human research ought to be subject to peer review at any time, and certainly there are some types of highly risky or controversial research that might appropriately be subject to prospective review and approval. But moving forward, we should change our mindset and assumptions. Perhaps we can come to a realization that we have bred a new generation of investigators, better trained, more responsible, more willing to do the right thing not because they are required to do so by regulations but because it is the right thing to do. Perhaps we can reject our failing paradigm of protectionism and turn instead to the proven paradigm of professionalism.

Doing so, as we do in every other aspect of medical practice, would not be so very hard, as the tools already exist and their effectiveness is proven. Our greatest challenge is to find the will, as we already have the means. Yes, perhaps it is time to take off the training wheels!

References

Beecher, Henry. 1966. Ethics and clinical research. New England Journal of Medicine 274 (24): 1354–60.

Brandt, Allan M. 1978. Racism and research: The case of the Tuskegee Syphilis Study. Hastings Center Report 8 (6): 21–29.

Department of Energy, Office of Health, Safety and Security. 1995. Advisory Committee on Human Radiation Experiments: Final report (ACHRE). http://www.hss.doe.gov/healthsafety/ohre/roadmap/achre/report.html.

Department of Health and Human Services (DHHS). 2011. Advance Notice of Proposed Rulemaking. Human subjects research protections: Enhancing protections for research subjects and reducing burden, delay, and ambiguity for investigators. Federal Register 76 (143): 44512.

Goldby, Stephen. 1971. Experiments at the Willowbrook State School. Lancet 1: 749.

Jones, James H. 1993. Bad Blood. New York: Free Press. (Orig. pub. 1981.)

Katz, Jay. 1972. Experimentation with Human Beings. New York: Russell Sage Foundation.

Koski, Greg. 1999. Resolving Beecher's paradox: Getting beyond IRB reform. Accountability in Research 7 (2–4): 213–25.

Levine, Carol. 1988. Has AIDS changed the ethics of human subjects research? Journal of Law, Medicine and Ethics 16 (3–4): 167–73.

Mann, Charles C. 2011. Smoke screening. Vanity Fair, December 20. http://www.vanityfair.com/culture/features/2011/12/tsa-insanity-201112.

Moreno, Jonathan D. 2001. Goodbye to all that: The end of moderate protectionism in human subjects research. Hastings Center Report 31: 9–17.

Raper, Steven E., Narendra Chirmule, Frank S. Lee, Nelson A. Wivel, Adam Bragg, Guang-Ping Gao, James M. Wilson, and Mark L. Batshaw. 2003. Fatal systemic inflammatory response syndrome in an ornithine transcarbamylase deficient patient following adenoviral gene transfer. Molecular Genetics and Metabolism 80: 148–58.

Steinbrook, Robert. 2002. Protecting research subjects—The crisis at Johns Hopkins. New England Journal of Medicine 346 (9): 716–20.

Suntharalingam, Ganesh, Meghan R. Perry, Stephen Ward, Stephen J. Brett, Andrew Castello-Cortes, Michael D. Brunner, and Nicki Panoskaltsis. 2006. Cytokine storm in a phase 1 trial of the anti-CD28 monoclonal antibody TGN1412. New England Journal of Medicine 355 (10): 1018–28.

Wood, Alastair J. J., and Janet Darbyshire. 2006. Injury to research volunteers—The clinical-research nightmare. New England Journal of Medicine 354 (18): 1869–71.