Space has always been an important spawning ground for dual-use technologies and the associated issues raised by attempts at targeted innovation: that of unintended consequences, or the spectre that many innovators do not get what they want—and, embarrassingly, sometimes get what they don’t want. Because space exploration is on its face directed at otherworldly terrain, emerging technologies in this domain are often justified by non-utilitarian or, more broadly, indirect reasons. In short, the joys of discovery and the delights of exploration for its own sake often dominate debates over the pursuit of both space travel and research into foundational technologies. The expansiveness and idealism of the rhetoric of space exploration means that technologies developed in pursuit of those lofty goals are open to a broad range of interpretations and applications, both military and civilian […].
The challenges of managing the dual-use aspects of civilian technologies are not unique to space. Civilian nuclear energy remains problematic because of the role of reactors in providing essential fuel for nuclear weapons. The intense monitoring of civilian nuclear energy programs around the globe, notably those sponsored by the governments of Iran and North Korea, has received wide attention. And the difficulty of maintaining such scrutiny over long periods is well known, not least because of errors made by the U.S. government in characterizing Iraq’s nuclear programs under Saddam Hussein. That weapons of mass destruction can arise from civilian science and engineering remains the chief reason for interest in the riddles of dual-use today. Not only do nuclear weapons of mass destruction continue to shadow the future of humanity, but a new set of bioengineering tools, which enable researchers to design novel organisms or reengineer existing organisms in menacing ways, raises additional concerns about how that technology could be used to harm rather than help, potentially threatening all life on the planet.
As Robert Rosner, an astrophysicist at the University of Chicago, observes in his preface to a 2016 study of dual-use technologies, commissioned by the American Academy of Arts and Sciences, “the dual nature of technological advances—capable of elevating humanity and unleashing destruction on it—long predates the total war and scientific breakthroughs of the twentieth century.” But the capacity of today’s dual-use technologies, Rosner adds, “drastically” exceeds the scale of mayhem introduced by bygone innovations such as the machine gun or even the chemical weapons that scarred thousands of combatants during the First World War.
The “trickle-down” character of technological innovation can make constructing durable remedies more challenging. “What is high precision today,” notes Rosner, “is run-of-the-mill tomorrow.” He adds: “capabilities once considered rare and extraordinary, and thus conducive to control, evolve to become the ordinary, slipping outside any possibility of enforceable regulation.”
The most salient contemporary example of “trickle-down” centers on a set of digital technologies—computers, the internet, strong cryptography, decentralized networks, the dark web, and even cyberwar. With deep roots in civilian technology—from supercomputers that simulated nuclear explosions to simple computer and video games to email to online commerce—cyberwar owed its early successes to freelance code writers, often flatteringly termed “hackers” by fans and critics alike. The path from hacking to attacking is surprisingly direct and another example, perhaps the most dramatic, of both dual use and unintended consequences.
Space-led technological innovation in the 20th century
A bit of history can shed light on the special nature of space-led technological advance. Government-led exploration of the skies arose in the context of competition between the U.S. and the Soviet Union over technological supremacy. Allies during the Second World War, the two countries deserved much credit for defeating the totalitarian regimes of Germany and Japan. But after the war’s end in August 1945, the U.S. and Soviet Union fell out over how to manage the post-war peace. By 1948, these two superpowers were arch geopolitical rivals, and nuclear weapons were the focus of their intense techno-scientific competition. By the mid-1950s, jet planes and guided missiles were the object of various races. When the Soviets launched their simple satellite Sputnik in 1957, a new “space race” erupted that threatened to overwhelm all else.
Perhaps because restraining the spread of nuclear weapons on terra firma proved impossible for crucial years, achieving a practical ban on space-based weapons proved far easier. Indeed, from the earliest years of U.S.-Soviet technological competition, space was the place where high-minded humanists could trumpet the grand potential that techno-science harbored for bettering humanity. Rather than militarize space, then–President Dwight D. Eisenhower sought to give a civilian face to space exploration. The bias towards peaceful uses of space meant that concerns over dual use were mainly about the application of civilian technologies to military problems. The Star Wars program championed by President Ronald Reagan in the 1980s foundered not so much on the impracticality of space-based laser weapons, but rather on the deep and abiding commitment, however rhetorical, to keep space off-limits for state-controlled weapons of mass destruction.
In the race to put a man on the Moon, many found the dual-use distinction appealing and persuasive as a justification for pursuing broad security and social aims at the same time. Wide support, domestically and internationally, existed for refusing to openly pursue military objectives in space. Instead, President John F. Kennedy and his successor Lyndon B. Johnson chose to elevate the generative aspects of human ingenuity over darker impulses. This approach was seductive. The political appeal of demilitarizing space undercut to some degree the costs of proliferating military technologies on land. Yet the distinction was always something of a fiction, because of the vagaries of unintended consequences.
Who actually could be sure that working on civilian applications would not help militarists in the future?
How could choosing to work only on civilian science and engineering provide moral cover if the fruits of this labor ended up benefiting military technologies anyway?
And what military project might not ultimately help civilians, so that even earnest weapons designers might argue that, someday, their inventions and insights might also save or enrich lives?
Wernher von Braun and his work at the Army Ballistic Missile Agency exemplified the uncomfortable overlap between military and civilian agendas at the dawn of the Space Age […]. In a world of uncertainty and serendipity, technologies can leapfrog across any boundaries, especially those seen in retrospect as arbitrary […].
Even if the brittleness of the dual-use distinction invites policy-makers to ultimately question its value, the core concern about the societal impact of promoting space exploration and its foundational techno-scientific knowledge and tools remains. Indeed, questions over these societal impacts should trump worries about dual use. I do not mean to say that the dual-use distinction is spurious, only that, whether we discard or retain the distinction, a gnarly set of problems persists regarding how public funds for innovation in space can support public goods.
These problems orbit around a concept called “targeting.” For policy-makers, targeting seems an obvious solution to the challenge of getting what one wants from spending on innovation. Just say you want X and then achieve it. The Apollo project was the classic case of targeting and remains a lodestar for managers of the techno-science enterprise. The pervasiveness of the term “moonshot” is no accident. Setting forth a specific goal, such as explicitly reaching the Moon, is the very definition (and origin) of the moonshot. With a goal that was highly specific, drawing on well-understood technologies and a limited range of unknowns, the Apollo targeters struck a comfortable balance between too difficult and too easy. In a calibration reminiscent of Goldilocks and the Three Bears, NASA identified the sweet spot of space targets—and all Americans were rewarded by the seminal achievement of putting men on the Moon.
Yet targets are notoriously difficult to craft, and the process of targeting difficult to manage. Ultimately, the concept is deceptively complicated. Consider the “war on cancer,” which arose with a vengeance in the aftermath of the Moon landings, and came to be criticized for being overly broad and practically impossible to operationalize. Today, concerns over climate change cause many to imagine targets that might either cool the planet or help humans adapt to warming. Yet either approach runs into immediate complexities over what targets to specify and which intermediate targets—we might think of these as stepping stones—should be pursued and in what order.
To be sure, talk of targets highlights for the public alternatives to technological determinism, the view that the laws of physics and the dictates of pragmatic engineers shape the outcomes of techno-scientific enterprises. Increasingly politicians, civil society, corporate leaders, and the media talk about technologies they want rather than settling for what Technology (with a capital T) can give them. In this sense, the democratization of technological possibilities has co-evolved with the decline in relevance of the dual-use distinction. The U.S. government once held hegemonic sway in nearly every techno-scientific domain except those areas, such as nuclear weapons or engineering bio-weapons or pandemics, where entrepreneurial freedom obviously isn’t permissible or perhaps even sustainable. Now the federal government’s hegemony is gone, and a new approach to targeting for the public good must be constructed.
Setting targets, under any calculus, is worth the effort if only because targets are clever means of holding scientists and engineers to account. Targets help policy-makers and citizens alike chart progress towards appealing outcomes. In the coming era of wider democratic pathways to space travel and space technologies, publicly-generated targets—and accountability trajectories—promise to garner wide support and even shape the new politics of public innovation.
And yet many areas of potential improvement in the human condition, whether on planet Earth or in infinite space, do not seem to lend themselves to targeting. Targeting seems least effective, and most costly, when goals are broad and poorly defined. Fuzzy targets, in short, should inspire anxiety. Such laudable far-reaching targets as democratizing a country ruled by a tyrant, or improving American primary education, or preventing terrorists from using social media, are inherently flawed expressions of homo targetus. These goals and others like them require pushing down multiple paths, incorporating many specific targets, which may exponentially increase costs, complexity, and chances of failure.
The power of fiction
In his classic 1977 essay “The Moon and the Ghetto”, the economist Richard R. Nelson examined the paradox of the modern American state, which could put men on the Moon but not desegregate schools, improve education, find a cure for cancer, or better equip parents to raise successful children. Long a leading analyst of the political economy of publicly funded research, Nelson brings valuable humility to the search for methods that can raise the odds of achieving desirable civilian goals through the concerted efforts of scientists and engineers. Neither the internal tensions within the dual-use paradigm nor the seeming ethical benefits of freedom from military imperatives and constraints are decisive here. In his 2011 essay “The Moon and the Ghetto Revisited”, Nelson identifies a different culprit:
Clearly the difficulties that societies are having in dealing effectively with some of these problems are due to the constraints associated with significant differences among citizen groups in their interests and the values they hold. However, in many cases the constraints are not so much political as a consequence of the fact that, given existing knowledge, there are no clear paths to a solution. The heart of the problem is that society lacked the know-how to deal with it effectively.
I would argue that this remains the case today.
Nelson’s concern about the centrality of knowledge to the innovation enterprise brings us full circle. Perhaps the very reason why imaginative scenarios about the future of space travel—in form, circumstance, and value—appear so compelling and useful is because fiction works best in filling critical gaps in human knowledge. It is in the gaps of our knowledge that the imagination flowers and alternative narratives thrive. While engaging with future scenarios, innovators must remain alive to the challenge of closing knowledge gaps—and in ways that ensure that the inevitable spillovers from civilian to military, or military to civilian, are part of the anticipatory governance afforded the innovation project. Unintended consequences need not be wholly unanticipated.
Space is a place of extraordinary promise, where the best in human impulses and aspirations can commingle with emerging technologies as yet dimly understood. Better understood are the risks of ignoring the lessons of the experience of dual-use technologies, whether in the skies or on the Earth.
About the Author
G. Pascal Zachary is the author of “Endless Frontier” (1997), a biography of Vannevar Bush, and “The Diversity Advantage” (2003), on culture and innovation. He has been a professor of practice at Arizona State University since 2010. He writes a column on innovation for the New York Times and IEEE’s Spectrum magazine and is general editor of ASU’s “Rightful Place of Science” book series.