A little more than a year ago, I attended a virtual symposium on “Nudges in Health Care.” The conference focused on how hospital systems, insurers, and private employers might promote better health by applying an approach known as “nudging,” developed in the field of behavioral economics. Nudging refers to the practice of steering an individual’s decisions by structuring her so-called “choice architecture,” often without her awareness. Doing so, according to nudge advocates, can help individuals make decisions that benefit them and society.
Often-cited applications of nudging include making retirement contributions opt-out, rather than opt-in, and requiring restaurant patrons to ask for plastic utensils rather than simply giving them out by default. In both cases, individuals’ “choice sets” aren’t restricted; they are simply rearranged in a way that predisposes people to make decisions deemed beneficial. This is what sets the nudge approach apart from more coercive forms of social engineering: It promises to reprogram behavior while preserving freedom. Richard Thaler and Cass Sunstein, whose 2008 book, Nudge, furnished the movement with a manifesto, call this “libertarian paternalism.”
Behavioral economics has been making inroads into American medicine for quite some time, and the nudge technique was given a major boost by the Obama administration, in which Sunstein served from 2009 to 2012. The Affordable Care Act, Obama’s signature health-care legislation, included nudges designed to coax individuals into purchasing private health insurance on the open marketplace. Since then, the approach has steadily gained steam: A quick search for the term “behavioral economics” in the National Library of Medicine database shows an exponential rise in papers published on the subject in recent years.
Participants in “Nudges in Health Care” suggested interventions like sending patients text reminders containing “social norms,” meant to elicit certain emotive responses in the recipient (for example, reminding patients that “9 out of 10 people attend” their appointments). Another presentation explored “gamification” as a way to increase physical activity among patients with diabetes. Patients were given a wearable step counter and then sorted into different groups. The groups’ aims ranged from collaboration (patients worked together to score “points” corresponding to things like weight loss and improvements in blood sugar) to competition (patients were notified of others’ progress in an effort to boost their own motivation to exercise).
As a practicing physician, I felt a vague unease about the exercise of soft power being celebrated at the conference. The underlying assumption seemed to be that patients can’t be trusted to recognize their own failings; it is therefore the responsibility of the enlightened medical-research-industrial complex to forestall their undesirable decisions and optimize their health outcomes. Listening to the presentations of distinguished psychologists, economists, and physicians, I got the distinct sense that, in their vision of the world, there are two groups of people, divided by their degree of mastery over their own cognitive machinery. Those in full possession of such mastery, it would seem, should be empowered to manage the lives of those who lack it.
My wariness about this worldview didn’t seem to be shared by other symposium attendees, however. The mood of the event was ebullient, the underlying ethos one of “it’s all for their own good”—the “they” in question, of course, being the teeming hordes who, left to their own devices, would squander their savings and let their prescriptions go unfilled. Such an attitude was to be expected: Most physicians have great confidence in our vast and expanding array of medical interventions. To hear members of the profession tell it, the treatments we offer are a consequence of a totality of scientific evidence, neutral in its collection and analysis; their effects are clear, and their benefits to humankind unassailable. Presumably, nudge-inspired social interventions will have similarly miraculous effects.
In reality, much of this confidence is unwarranted. True “blockbuster” treatments, such as insulin and antibiotics, are rare. The benefits of preventive drugs like statins, in absolute terms, are small and accrue over the course of decades. Treatments like cardiac stents are invasive interventions with a whole institutional and industrial apparatus devoted to their delivery—and yet, outside of a narrow band of patients, their benefits are scant. Most health gains in recent centuries haven’t been due to pills and devices dispensed by medical professionals, but to things like improved living conditions and worker protections.
These are the sorts of things, it turns out, about which behavioral strategies like nudging have little to say. Nor, for that matter, does medical science writ large. The profession typically prefers to attribute the failures of its preferred interventions to individual patient “noncompliance” and related deficiencies, bracketing the social, economic, and political factors that may limit the efficacy of treatments in certain populations to begin with. This preference goes a long way toward explaining the current appeal of nudging in health care: It claims to offer a novel fix for certain limitations of the system—without calling the profession’s basic assumptions into question.
A patient I saw recently—U., a 59-year-old woman with congestive heart failure, in and out of the hospital eight times in the past year, fired from her job due to her inability to stand for prolonged periods and her requests for frequent breaks—had been denied state disability assistance twice. I spent the better part of a morning helping her pro bono attorney construct yet another appeal. The work was plodding, the requisite forms Kafkaesque (“Has the patient been unable to engage in any substantial gainful activity because of any medically determinable physical or mental impairment which can be expected to result in death or has lasted or can be expected to last for a continuous period of not less than 12 months?”). But this was likely U.’s best shot at achieving the material security that might lead to better health.
If we are to listen to the nudgers, a patient like U. would benefit from being subjected to a more behaviorally informed approach: an electronic pill bottle that helpfully buzzes when it’s time for her medications; or, perhaps, an automated text-messaging platform that cheerily reminds her that she is doing a great job. Such efforts seem futile at best—and at worst, bordering on cruel. Interactions with systems of concentrated power and authority, hard and soft, already play an outsized role in the everyday lives of people like U. Nudge-style interventions, often trivial and innocuous-seeming, nevertheless represent the slow accretion of an invisible lattice, the further contraction of Max Weber’s “iron cage”: more monitoring, more data, more alerts, more reminders, more entry points into the panopticon of American health care.
Health and disease are modulated by social conditions such as housing and employment. Chronic conditions like diabetes and congestive heart failure are among the most salient examples, yet too often, medical professionals and policymakers treat them as mere functions of individual choice—eat fewer donuts, take your statin. But having a low-wage job with no health insurance is hardly a function of one’s volition. Take D., another patient of mine, also with congestive heart failure. At our last visit, she told me about her job at an Amazon fulfillment center: She worked fast, she said, because she wanted to earn a bonus by hitting certain targets, which the wearable device on her wrist tracked for her. She didn’t take breaks. Her labor, it would seem, had been “gamified”—which meant that her heart failure was getting worse.
Nudging attempts to relocate large-scale problems to the level of the individual. The well-being of a population, it assumes, rests on its members’ individual mental attributes, just as a firm’s profitability depends on its workers’ internal motivations. These are conventional explanations, the stuff of any introductory business-school course. Behavioral economics, despite its heterodox posturing, hews to these same explanations, while also resting on additional presuppositions.
For the nudgers and their ilk, biases are built-in, the result of humans’ evolutionary programming. The claim, according to psychologist Gerd Gigerenzer, is that cognitive biases are “firmly imbedded [sic] in our brains,” serving as an impediment to our inner rational homunculus. Thaler and Sunstein compare biases to optical illusions, citing the tendency of our visual system to make errors when presented with certain stimuli as an example of a cognitive system riddled with flaws. The assumption, as Gigerenzer notes, is that if “our cognitive system makes such big blunders like our visual system,” then of course it also causes us to deviate from expected utility theory (the formal name for economists’ favored version of rational behavior, involving “maximization, consistency, and statistical numeracy”) in our everyday decisions.
This assumption—that biases, like the components of our visual apparatus, have a neuronal basis, that they exist somewhere inside our skulls—unifies behavioral economics, evolutionary psychology, and the burgeoning field of “neuroeconomics.” Underwriting any social policy based on research emanating from these disciplines is the idea that psychological phenomena like biases correspond to structures in the brain—that the brain, like any other organ, has genes as its blueprint, and that these genes, in turn, result from the selection of advantageous mutations.
In other words, behavioral phenomena, such as the tendency of many people to misjudge simple probabilities (a favorite foible of behavioral economists and the target of many nudges), have, at their core, natural substrata, sculpted across evolutionary time, operating below the level of conscious awareness, and over which individuals have no control. Hence, the need for the more enlightened among us to exploit these tendencies—magnanimously, of course.
Nudging thus recasts societal problems like poverty and inequality not merely as problems of individual cognition, but as problems of biology. Inscribing such problems into nature renders them fixed and immutable—something to be tinkered with, rather than overcome through collective action or public policy. The ideological function served by this line of thinking should be obvious: The notion that a sizable portion of the population is hard-wired to behave in a certain way operates as a kind of taxonomical device. In this way, it serves a purpose not unlike that of racial categorization.
Race, as Adolph Reed observes, sorts people “into hierarchies of capacity, civic worth, and desert based on ‘natural’ or essential characteristics attributed to them,” legitimizing a social order’s “hierarchies of wealth, power, and privilege … as the natural order of things.” Biases supposedly chiseled by evolution function the same way. The solution to a patient like A.—my patient with terribly high blood pressure, barely controlled on a five-drug regimen—eating too much salt lies not in the sort of redistributive effort that might transform her neighborhood into something other than a food desert; rather, it lies in recognizing that A. suffers from “diversification bias” and in altering the local fast-food joint’s menu layout accordingly.
Narratives like these function as “just-so stories,” propagated over the years precisely due to their tendency to provide explanations that also happen to preserve existing relations of power. Over time, these stories come to seem like a priori facts, coalescing comfortably with the interests of society’s upper strata.
Behavioral economists and their champions in medicine are quick to point to empirical data to buttress their claims. One need only look at the copious endnotes to Thaler and Sunstein’s best-selling Nudge, or, for that matter, the extensive citations peppering any of the PowerPoint presentations at the conference I attended. But it’s one thing to invoke the results of controlled experiments and conclude that biases exist as empirical regularities; it’s quite another to locate their existence in our gray and white matter. The distinction may seem trivial, but it is an important one.
For starters, this sort of logical leap papers over the replication crisis plaguing not only behavioral economics, but the behavioral sciences more generally. If the initially robust findings supporting core tenets of behavioral economics don’t hold up to scrutiny, why the ongoing effort to legitimate them using the techniques of cognitive neuroscience? And why the investment on the part of our health institutions in approaches like nudging, which suffer from the same problems of reproducibility and have been shown to be minimally effective at best?
One answer might be that the minimal effectiveness of nudges is precisely the point. As long as those with more or less direct access to the levers of power can focus their time and effort (and public funds) on nominal behavioral interventions like nudging, and as long as these efforts are supported by a cadre of experts, it’s safe to assume that those levers will remain largely untouched by those disempowered and dispossessed in market society.
Race is, again, deployed in a similar way. In recent years, we have witnessed a widespread effort to ascribe all racial injustices to “implicit bias.” The effect is to divorce the social category of race from its political-economic roots and to transpose racism to our amygdalas. The most common method of detecting implicit bias, the implicit-association test, falls woefully short in its ability to tell us anything meaningful about our own internal mental states. But this hasn’t stopped an array of powerful and influential organizations from doubling down; just last year, the hallowed New England Journal of Medicine recommended its use.
Notions of individual bias—racial or otherwise—transform inequality into something attitudinal, in the form of subconscious predilections. In doing so, they divert our collective attention away from structural causes of human flourishing or misery and consolidate the supervisory role of a thin stratum of behaviorally attuned technocrats, a sort of “cognitive aristocracy.”
Listening to the presenters at “Nudges in Health Care,” I couldn’t help but think about what drew me to medical school in the first place, so many years ago. My reasons were hardly selfless; if anything, I sought a more explicit version of the sort of power being extolled during the symposium. In my 20s, this was the view of medicine contained in television shows like Grey’s Anatomy, with its cast of surgeons performing miracles on grateful patients. Among the medical students I teach these days, the theme persists, albeit in a different form—many of them still want to be neurosurgeons, but just as many profess a sincere desire to be, like the conference organizers, “change agents” and “thought leaders.”
In both the older vision and the newer one, health care is the purview of an enlightened priesthood manipulating individual bodies and collections of bodies. Technological developments have reinforced the ultimately bureaucratic quality of this enterprise. As electronic health records have supplanted written notes, the very purpose of doctors’ documentation has changed. A patient’s chart no longer represents the written repository of her life and eventual death. Rather, it can be seen as a glorified Excel spreadsheet: a vast aggregation of data from which diagnostic codes can be derived, health-care costs tallied, and bills sent.
To write about a patient like U. in the perfunctory manner amenable to medical billing and coding is just another way of collapsing personhood and biology under the rubric of instrumental rationality, as the nudgers would have us do. U., like so many of my patients, doesn’t simply have “congestive heart failure” and “chronic kidney disease.” These disease states only make sense as treatable entities within the larger process of lifemaking: her first job, the one from which she had been fired, packing garment boxes; the second job she had picked up immediately after, as a home health aide; the unpaid rent, the mounting collection notices. Viewed in this light, U. is the bearer of an illness that defies categorization.
The promise of behavioral economics is that, in recognizing U.’s biases and nudging her accordingly, this illness would become easier to bear. I have my doubts. But more than that, I worry about ensnaring a patient like U.—and countless others like her—in a cordoned-off socio-political sphere, one whose boundaries are demarcated as scientific and couched in the language of evolutionary theory. What would that mean for medicine itself?
None of this should be taken as an endorsement of unqualified blank-slateism. Human beings very likely do possess innate cognitive structures, such as those that endow us with remarkable linguistic ability. What we should be wary of are those accounts of innateness that, like now-discredited Victorian race science, are adduced to explain—and thereby legitimize—existing patterns of inequality. The incursion of intellectual programs like behavioral economics and evolutionary psychology into the practice of healing is merely the newest iteration. If we are to heed Rudolf Virchow’s dictum that “medicine is a social science, and politics nothing but medicine at a larger scale”—and in doing so, to realize medicine’s emancipatory potential—we need to rid it of this sort of biodeterministic casuistry once and for all.