ETHICS OF TERRAFORMING: A PRACTICAL SYSTEM

George A. Smith
23 Lexington Avenue, No. 1739, New York, New York 10010. E-mail: [email protected]
Recently, there have been calls for “an ethical analysis concerning our obligations as space explorers.”1 This paper is an attempt to supply such an ethical analysis. Actually it is a bit more ambitious than that: it attempts to provide not merely an “ethical analysis” but a practical ethical system,2 and to show how the system works by applying it to a terraforming scenario. This paper is a condensation of a longer paper titled “Otherland Ethics”, in reference to the “land ethic” of naturalist and pioneer environmentalist Aldo Leopold (1887-1948), a man who has inspired many philosophically-inclined environmentalists. This highlights an additional aim of this paper: to bear in mind the increasingly intertwined agendas of environmentalism and space exploration, and to consider how these agendas might be served by the proposed ethical system.
PART I: INITIAL CONSIDERATIONS

Different Ethics for Different Places?

Commonsense intuitions support the idea that our ethics are at least somewhat “portable”. Even on a spaceship halfway to Alpha Centauri, there is no reason to think Earth-grown ethical systems would somehow lose their relevance by virtue of physical distance from Earth. The particular facts and circumstances of those on the spaceship would certainly be novel, but there is no obvious reason to think that the ethical system—the framework for moral reasoning about those facts and circumstances—would be novel. To the contrary, our intuitions tell us that killing on Earth is like killing on the Moon is like killing on the spaceship halfway to Alpha Centauri. Why, then, should space explorers need to reexamine ethics merely because they have left the Earthly environment?

“Environment” is the operative word. Those who claim that space explorers do need to reexamine ethical principles are typically concerned not with relations among people (or between people and intelligent extraterrestrials), but rather with relations between humans and the natural environment. Space explorers who visit other places in our solar system (or beyond) are without question moving into radically different natural environments. Of course, the skeptic will still ask: Why does a mere change of place, of mere physical location, ultimately matter? Space is merely nature on the grand scale, merely the natural environment extended physically. Why should this physical extension matter philosophically? For now I will merely make the following claim: An ethical system that is truly satisfactory from the point of view of environmental ethics should be able to deal with questions of environmental ethics irrespective of whether they arise on Earth, on Mars, or on some about-to-be-discovered extrasolar planet.
Moving Beyond “Centrisms”

One writer describes conventional ethical theories as follows:

They are all geo-centric, Earth-centered theories which automatically exclude Mars, the solar system and the universe as a whole from the moral universe. Space projects may be easily shown to be morally permissible from our Earth-based perspectives. Homo-centrism, zoo-centrism and bio-centrism all exclude inanimate objects, like Mars, from the moral universe. But if we adopt a cosmo-centric perspective, moral permissibility for humans in space would require further justification.3
Do these “centrisms” tell us anything usefully precise about the ethical systems they purport to describe? I suggest that these terms tend to oversimplify more than clarify, because practical ethical systems have multiple dimensions, and thus when fully characterized are in fact polycentric.

The Neglected “Polycentrism” of Practical Ethical Systems

A practical ethical system helps people decide what they ought or ought not do in particular circumstances. Let us replace “people” with “moral agents”, thereby allowing the theoretical possibility that non-humans may be involved. And let us further clarify that such a system helps moral agents decide what they ought or ought not do with regard to a second group of entities. I will refer to this second group—in a sense the “target” group—as the “scope of application” of the moral system. If the moral agents are in some sense the system’s acting subjects, the scope of application describes the objects of their concern. Example: An ethical system may include the principle, “People should not kill any other people or animals.” Here, the moral agents consist of all humans, and the scope of application consists of all animals (including humans). Finally, one could also consider the sphere within which the system operates. I will refer to this third parameter as the “sphere of operation” (or simply, the “sphere”) of the system. Thus the system has three identifiable parameters: moral agents, scope of application, and sphere. With these three parameters in mind, it is easy to describe a variety of conceivable (and plausible) ethical systems, then further characterize them using “centric” terminology:
• System 1. This system contains ethical principles of the following pattern: “People should not kill any other people anywhere on Earth.” The only moral agents are humans, the scope of application is limited to humans, and the sphere is Earth. This system is anthropocentric with respect to both moral agents and scope of application, and geocentric with respect to sphere.

• System 2. This system contains ethical principles of the following pattern: “People should not kill any people or other sentient animals, anywhere.” The only moral agents are humans, the scope of application includes humans plus all sentient animals, and the sphere is the cosmos. This system is anthropocentric with respect to moral agents, moderately zoocentric with respect to scope, and universal (or perhaps cosmic) with respect to sphere.

• System 3. This system contains ethical principles of the following pattern: “People (as well as any intelligent ETs we might discover) should not kill any living things, anywhere.” The moral agents are humans, but in theory could include other intelligent beings; the scope is humans plus all other living things; the sphere is the cosmos. This system might be described as “provisionally anthropocentric” with respect to moral agents, biocentric with respect to scope of application, and universal (or perhaps cosmic) with respect to sphere.
One could devise many more combinations. However, the point is not to flesh out all the possibilities, but rather to stress that a practical ethical system of realistic complexity will exhibit these three parameters (at least). And a “centric” label, to be usefully precise, needs to specify which parameter of the system it is characterizing. This terminological point is by no means a merely academic matter. Non-specific use of “centric” terminology frequently gives rise to oversimplified distinctions, e.g. that between “anthropocentric” ethics and “biocentric” ethics. A parameter-sensitive analysis would find that even an “anthropocentric” system may be radically open, in the sense of allowing the theoretical possibility of non-human moral agents. Conversely, it would find that even a “biocentric” approach, if one examines the moral agent parameter, is as anthropocentric as the “anthropocentric” approach (and possibly even more so). That is, all ethical systems, if they are practical ones, function primarily as guidance systems for human moral agents. To this extent they all—however they may describe themselves—share a fundamentally anthropocentric orientation. Granted, the scope of application of an ethical system is vitally important, and must be carefully specified. But so are the other two parameters, and they too must be specified.

Practical Ethical Systems vs. General Theories of Value

Why does non-specific use of “centric” terminology persist even if—as I have claimed—it often oversimplifies the polycentric nature of practical ethical systems? One reason may be that those who use such terminology are often not really concerned—at least not overtly—with practical ethical systems per se.
In a recent paper, for example, the authors state that “we suspect that the prevailing anthropocentric, or even geocentric, ethical perspectives will be inadequate for the questions outlined in this essay, as well as others sure to be raised by continued space exploration. We believe that a cosmocentric ethic will be required.”4 The “centric” terminology is here employed to characterize not a practical ethical system, but rather an ethical “perspective”, or merely an “ethic”. Such rather abstract terms are signals that the
focus may be less on practical ethical decision-making, and more on general considerations of value. Such treatments do not avoid specifics entirely, and in fact they often move at least intermittently to the level of practical ethical problems. But when they reach the practical level, they typically are limited to tentative general remarks about how the value theory or “ethic” might affect the problem. Such treatments lack the connective tissue provided by a conceptual apparatus that helps one to thoroughly consider a set of specific circumstances, systematically analyze the relevant factors, and come up with a complex, plausible answer to the problem. This is not the place to debate the relative merits of practical versus evaluative ethical theories. This paper will be proposing a practical ethical system, and I tend to agree with Bernard Gert that “If an ethical theory is going to be of any use as a moral guide, it must be as a practical theory.”5 No doubt many would counter that evaluative theories have uses other than as moral guides (even as very indirect ones). I raise the distinction between practical and evaluative theories here because I think it may help explain why a rather vague and undifferentiated use of “centric” labels persists in much ethical discourse: most such discourse remains predominantly at the evaluative-theory level, and hence does not descend to the level of nitty-gritty practical analysis that forces one to fully flesh out the parameters of the ethical system.
The Importance of Being Systematic

One could take a practical, “applied ethics” approach without necessarily striving for a unified ethical system. Donald MacNiven, for example, structures much of his ethical discourse around discussion of discrete moral dilemmas. Nevertheless, MacNiven concedes that a multiplicity of perspectives can make public consensus logically impossible, and that “To be fruitful, practical ethics clearly requires the development of a unified theory of ethics.”6 J. Baird Callicott apparently concurs, noting “there is both a rational philosophical demand and a human psychological need for a self-consistent and all-embracing moral theory.”7 Bernard Gert, whose practical ethical system I will be describing and applying in this paper, notes in a similar vein the advantages of a unified system:

Some hold that there is, or can be, no universal moral system, only systems that apply to particular fields.... However, one of the most important features of good work in applied ethics is that it shows how what may look like an acceptable solution to a moral problem in one field is not adequate because applying that same solution to another field comes up with a clearly unacceptable solution. It was only when physicians saw that there was no special moral system just for them that any progress was made in medical ethics.8
Similar benefits might accrue to environmental ethics by relying on a universal moral system rather than one that carves out a subdivision called “environmental ethics.” A final advantage of relying on a general ethical system is that such a system will be the
creation of a generalist moral philosopher, someone with no identifiable pro-environment or pro-space interests. Such relative disinterestedness will help ensure the system is not custom-built to fulfill a certain agenda. This raises an issue that has left environmental ethics vulnerable to criticism in the past, namely its instrumentalist tendencies.

Instrumentalist Use of Ethics

By “instrumentalist use of ethics” I mean the construction or revision of an ethical framework or theory for the purpose of solving a particular (and presumably particularly pressing) real-world problem. As an example that catches the flavor, consider the statement, “we might have to assign intrinsic value to lifeless planets and so assign them rights of some sort to protect them from exploitation and wanton destruction.”9 Faced with such explicitly goal-oriented use of ethics, a purist would object that philosophical ethics are not to be used instrumentally for any purpose, however noble. Of course, today such a purist would have to confront an objection often heard in the academy, albeit more typically from literary academics than from scientific academics: “objectivity” is an illusory notion, at best. Without being drawn too far into a highly-charged area, for present purposes I will merely suggest the following: (i) environmental ethics has often been rather straightforwardly instrumentalist, (ii) those with instrumentalist tendencies tend to focus less on conceptual niceties and more on solving real-world problems, and (iii) inattention to conceptual coherence may ultimately run counter to the best interests of environmental ethics. Instrumentalist use of ethics, however well-intentioned, may tend to produce the conceptual equivalent of hastily-grafted branches: the result may lack the organic connection needed for growth.
It is relatively easy to make broad calls, at the evaluative theory level, for “ethical extension” from humans to animals to plants to microbes to abiotic objects to species to communities to ecosystems to biospheres to the cosmos. A more challenging task is to get down to the conceptual nitty-gritty and explain how the espoused principles of ethical extension play out within a coherent ethical system which provides real help in dealing with complicated practical problems.
Confronting a Philosophical Objection

One of the central root-level issues which remains unresolved in many discussions of environmental ethics is what Elliott Sober terms “the demarcation problem”.10 The crux of the problem lies in trying to mark the boundary between objects which matter for their own sakes and those which do not, and then enumerating a set of ethically relevant properties which justify the demarcation. The issue here concerns what I earlier termed the “scope of application” of a moral system (one of three basic parameters, the other two being “moral agents” and “sphere”). Many environmental ethicists advocate extending the scope of application beyond humans, to animals, plants, inanimate objects, and even ecosystems. The problem, according to Sober, is that if one tries to expand the circle of ethical concern by accepting a radically
expanded sphere of autonomous or intrinsic value, the system becomes practically unworkable. Of course, environmental ethicists have long been aware of this practical problem, which sometimes goes by the name “Schweitzer’s dilemma.”11 If one takes Schweitzer’s principle of “reverence for life” seriously, how can one eat, kill germs, or even live? Roderick Nash notes that Schweitzer conceded that in the process of living one did on occasion kill other life forms, but “this should happen only when absolutely necessary to enhance another life and then only with a compassionate sense of ‘responsibility for the life which is sacrificed.’”12 Many find the notion of “only when absolutely necessary” imprecise at best. J. Baird Callicott found Schweitzer’s decision procedure “very vague and indeterminate.”13 Callicott finds this difficulty serious enough to justify shifting to a new approach in his attempt to find theoretical grounds for the ethical intuition that nonhuman species have intrinsic value. But note that the “demarcation problem”, or “Schweitzer’s dilemma”, has not gone away. Rather, Callicott has moved on to an interesting discussion at the “evaluative theory” level. At the practical level, the problem persists as stubbornly as it did for Schweitzer: How can the notion of the intrinsic value of nonhuman species be integrated, coherently, into a practical ethical system which gives guidance to moral agents who must systematically consume other life forms merely to exist? As will be seen, the practical system I am proposing in this paper is indeed forced to confront the demarcation problem.

Recapitulation

The foregoing initial considerations can be recapitulated as follows: It is not obvious that a mere change of physical location—even one of great magnitude—requires a whole new way of thinking about ethics (as opposed to an application of existing ethics to a whole new set of circumstances).
In any case, many if not most approaches to environmental ethics typically use “centric” terminology quite imprecisely: they fail to specify which aspect of the ethical system is being characterized with the term in question. This imprecision may stem from the tendency of such approaches to (1) focus on general theories of value rather than unified practical ethical systems, and (2) take a results-driven instrumentalist approach. One consequence is that many of these approaches, particularly those which depend on the concept of “intrinsic value”, have failed to confront and satisfactorily address the basic problem of demarcation (a.k.a. “Schweitzer’s dilemma”). If we are to adequately make environmental ethical decisions, on Earth or elsewhere, we need a practical ethical system which is unified, which avoids instrumentalism, and which directly confronts the problem of demarcation.
PART II: THE PROPOSED SYSTEM

Seeking Professional Help

I will be proposing a unified, practical moral system, but not one of my own devising. Why rely on someone else? First, because it would be inefficient and perhaps even arrogant not to
build on the prior efforts of a professional moral philosopher. Second, a general ethical system created by a generalist moral philosopher is unlikely to have been created to fulfill a certain narrowly-defined purpose or agenda. The system I propose using as an analytical framework is the work of Professor Bernard Gert, and is set out in his Morality: A New Justification of the Moral Rules (Oxford University Press, 1988). To find out how Gert’s system really works, there is no substitute for a careful reading of this book. As a practical concession, I have appended to this paper my brief summary (see Appendix). Why is this particular system recommended? First, because it is a practical ethical theory, one which leads to a universal moral system designed for practical application in any field. Second, it is articulated with relentless intelligibility.14 Third, it is a conceptually tight ethical framework with clear accounts of basic presuppositions—a balanced conceptual ecosystem that Gert has been tending for nearly three decades.

Philosophic Characterization of the System

Gert specifically cites his debt to Aristotle, Kant, Mill, and especially Hobbes. For contemporary influences, he notes John Rawls (one of his graduate professors), Kurt Baier, and Marcus Singer. The system is centered on ten basic moral precepts or rules. It puts rules at its core, but puts consequences at the center of the method for determining justified exceptions to the rules. In effect, it is a modified form of “rule consequentialism” that incorporates elements of both Kantian and utilitarian views. Gert’s own characterization of his system stresses that it is not a simple variation of any classical philosophic view.

Characterization in Terms of the Three Parameters

The proposed system can be characterized in terms of the three basic parameters discussed earlier, namely moral agents, scope of application, and sphere of operation.
The moral agents will be limited to humans with certain voluntary abilities, with the implication that the category could be expanded to include rational impartial non-humans if such were ever discovered. As for sphere of operation, the system is a universal moral system, i.e. there is no justification for restricting its sphere of operation to any physical location. The remaining parameter, scope of application, will require some elaboration. The question of scope is a crucial one, because it is intimately connected with the issue of “ethical extension” of rights that has been a major concern of environmental ethics. The proposed system requires, at a minimum, that the scope of application include all presently existing moral agents, as well as all who were ever moral agents and remain persons, i.e. are still capable of any conscious awareness. Beyond this “minimal group”, the system allows leeway—within limits—in defining the scope of application. Gert:

Given that one includes all presently existing persons who are or were rational in the group, one may expand the group on any basis whatsoever, as long as one is prepared to have all moral agents act impartially with regard to this group whenever considering a violation of the moral rules. However, no rational person would
accept any enlargement of the group beyond all actual and potential present and future sentient (conscious) beings, and most would not accept a group this large.15
These two sentences represent the system’s direct confrontation with the “demarcation problem”, or “Schweitzer’s dilemma”. The crux of the demarcation problem, as noted above, is the need to satisfactorily mark the boundary between beings/objects which are to be morally privileged and those which are not, and then enumerate ethically relevant properties which justify the demarcation. The ultimate line of demarcation in the proposed system is actual and potential present and future sentient (conscious) beings. Gert observes that, “One needs no reason for refusing to enlarge the group beyond the minimal group, though, of course, there is a reason available, viz., every enlargement of the group restricts the freedom of those already in the group.”16 Note that this restriction of freedom should not be equated with the practical matter of the system’s unworkability (the threat of practical paralysis noted above). Rather, it brings in an essentially moral objection to enlarging the scope beyond a certain point. One of the system’s fundamental theoretical elements is an account of certain basic evils, namely death (permanent loss of consciousness), pain, disability, loss of freedom, and loss of pleasure. Indeed, the goal of the entire moral system is the minimization of these evils. Expanding the scope beyond a certain point will result in a pervasive loss of freedom, i.e. will have the effect of expanding rather than diminishing evil. The upshot of the proposed system’s confrontation with the demarcation problem is this: The scope of application must include all presently existing moral agents, as well as all who were ever moral agents and remain persons, i.e. are still capable of any conscious awareness. Furthermore, the scope may be expanded to encompass, as an upper limit, all actual and potential present and future sentient (conscious) beings. In this paper I will use the system in its maximally-expanded scope of application.
Accordingly, the scope will include, above and beyond the “minimal group” of moral agents, (i) all present sentient beings; (ii) all present potential sentient beings, i.e. ones presently in utero or in an infant stage; (iii) all future sentient beings, i.e. ones which do not now exist but will exist as sentient beings in the future; and (iv) all future potential sentient beings, i.e. ones which do not now exist but will exist in utero or in an infant stage in the future. It is crucial to remember that the non-sentient entities not within the scope of the moral system are certainly not thereby rendered ethically irrelevant or dispensable. To the contrary, they will figure integrally in the moral calculus, as will become clear in the following application of the system to a terraforming scenario.

PART III: APPLICATION OF THE PROPOSED SYSTEM TO TERRAFORMING

Assumption 1: The terraforming activity in the following Scenario will be carried out by individuals acting as representatives of either a national government or coalition of national
governments, or a commercial entity or consortium.

Assumption 2: The terraforming activity does not violate any applicable laws or treaties. (This paper aims to flesh out the ethical issues, not provide a legal analysis.)

Assumption 3: No more than 2% of the budgeted expenditures of the applicable governmental unit are being used in the terraforming effort, and the decision to expend such resources was arrived at in a reasonably fair and democratic manner.
Assumption 4: The risks of back and forward “contamination” prove safely manageable.

SCENARIO: Humans have thoroughly explored Mars and extensively studied the indigenous microbes—quite unlike any on Earth—that live in vents a mile below the surface. Eventually, what began as an outpost for explorers and scientists has evolved into a permanent colony. The colonists, desiring an environment more like the biosphere of their homeworld Earth, develop plans for a long-term terraforming project of at least several hundred years’ duration. The planned project would involve the entire planet as a global system, would require the introduction of various non-native organisms, would likely affect the native microbes, and would in essence be an ongoing experiment with uncertain results. Should the plans proceed?

The moral agents in this scenario are the terraformers, and the scope of application encompasses sentient beings on Mars and on Earth. The terraforming activity is an engineering project (albeit vast) which does not per se break any moral rule, and thus on the face of it is ethically permissible. The focus accordingly shifts to those who would restrain or prohibit the terraforming. Thus the issue is whether one can justify the violation of moral rules 4 (“Don’t deprive of freedom”) and 5 (“Don’t deprive of pleasure”) that would occur if the terraforming were prohibited.

What Evils Would be Avoided or Prevented by Such a Violation?

(a) Displeasure over perceived hubris, and violation of “intrinsic rights” of extraterrestrial life and Mars itself. Some people would be repulsed by the very idea of humans essentially recreating an entire world, and would view it as an arrogant assumption of godlike powers. For such people, even if relatively few in number, a terraforming project would be an ongoing reminder of human hubris, and as such an ongoing source of displeasure.
Another group, perhaps with some overlap and again relatively few in number, would be concerned with the inviolable nature (perhaps described in terms of “intrinsic value”, perhaps in terms of the sanctity of wilderness) of a place almost entirely unconnected to the Earthly ecosystem. Even if the terraforming changes were gradual enough to allow the indigenous microbes to adapt, the natural evolutionary trajectory of Mars and its microbes would have been altered irreversibly by humans. For people with such views, the terraforming project
would no doubt engender a sense of loss and ongoing spiritual and esthetic displeasure.

(b) Harms to the Mars colonists. Those living on Mars would essentially be living within the test tube of a vast experiment, and conceivably things could go fatally wrong. On the other hand, the colonists could probably be said to have consented to the risks. Moreover, adverse results would presumably take place gradually enough to allow evacuation to Earth, particularly if plans were made for such an eventuality.

(c) Loss of scientific information. Irrespective of whether the terraforming project succeeded or failed, it would alter the basic physicochemical conditions on and in the immediate vicinity of Mars. Indigenous microbes, even if deep beneath the surface, would be disturbed and perhaps destroyed. Conceivably the changes would be gradual enough to allow some microbial adaptation and survival. Nevertheless, the pervasive changes induced by terraforming (including the introduction of non-indigenous organisms) would mean that useful scientific information, both with respect to indigenous microbial life and the physicochemical aspects of the planet in general, would be lost or at the very least rendered more complicated to obtain. The magnitude of this loss would of course depend (inversely) on how thoroughly scientific investigations were carried out prior to beginning the terraforming, as well as on how successfully microbial life could be preserved in localized controlled environments.
What Evils Would be Caused by Such a Violation?

(a) Deprivation of more pleasurable living conditions for Mars colonists. Not allowing the terraforming to proceed would deprive the Mars colonists (and their descendants) of physical conditions which would significantly increase their freedom and pleasure.

(b) Displeasure that an opportunity for an ambitious experiment, or expression of “destiny”, has been denied. For both the Mars colonists and many scientists and vicarious participants on Earth, prohibiting terraforming would mean passing up the chance for a great scientific experiment, an extensive investigation of biospheric management. For such people, even if relatively few in number, prohibiting a terraforming project would be a source of frustration and displeasure. Another group of people, perhaps with some overlap and again relatively few in number, would view the terraforming of Mars as an initial step in a larger process of life-propagation through the solar system and beyond. They might be motivated by an intuitive sense of human destiny or Life-destiny, perhaps supported by notions such as Lovelock’s Gaia hypothesis, Vernadsky’s noosphere, or Christian co-creation. For people with such views, prohibiting the terraforming project would no doubt engender a displeasing sense of lost opportunity and perhaps even unfulfilled responsibility.

(c) Deprivation of benefits of terraforming knowledge. Terraforming would provide much useful information for Mars colonists about the physicochemical dynamics of their planet. Such
information would likely be at least as beneficial for those on Earth, as it would provide a richly detailed case study in deliberate biospheric modification. Therefore, to prohibit the terraforming project would be to forfeit a potential scientific and technologic harvest that could be immensely useful in increasing the freedom and pleasure (or, more negatively, in reducing the suffering) of succeeding generations of humans and other sentient beings on Earth as well as Mars.

(d) Deprivation of Spaceship Mars benefits. If successful, terraforming would allow a long-term human presence on a nearby planet. We would have created a Spaceship Mars, travelling alongside us through space. One benefit would be that some or many of those born on either planet could choose to live in a fundamentally different place—a basic increase in freedom. More significantly, having a Spaceship Mars would offer the following benefits to the people (and accompanying sentient beings) on both planets: First, the human efforts required to adapt and thrive within a new and initially hostile physical environment would likely stimulate a number of useful technological innovations and advancements. Second, humans living on a second planet would be a great cultural experiment, providing rich insights into the sociological, psychological, and anthropological aspects of human experience.
Third, there could be some big-picture environmental-survival advantages: (a) if an asteroid or comet were to significantly damage one of the planetary habitats, the other one could serve as a repository of technological and cultural knowledge for faster restoration of the damaged habitat; and (b) having a second functioning biosphere elsewhere in the solar system could offer a useful second data point (e.g., useful in differentiating between relatively local ecosystemic fluctuations, and changes tied in with larger solar systemic changes, such as those associated with cosmic events outside the solar system, or the movement of the solar system into new regions of interstellar space).
Not allowing terraforming to proceed would rule out all these types of potential “Spaceship Mars benefits”, all of which could be useful in increasing the freedom and pleasure (or more negatively, in reducing the suffering) of succeeding generations of humans and other sentient beings on Earth as well as Mars.
Balancing and Concluding Analysis
The Scenario requires a balancing that in general terms can be expressed as follows: Allowing the violation (i.e. prohibiting terraforming) would mean depriving a group of people (Mars colonists and their descendants) of significant increases in freedom and pleasure, and depriving a great number of people (on both Mars and Earth, future as well as present) of likely increases in freedom and pleasure accruing from (i) scientific and technological benefits, especially those related to learning about biospheric management, and (ii) longer-term Spaceship Mars benefits. This deprivation would be for the sake of avoiding (i) the displeasure of a group of people with intuitive resistance to terraforming, and (ii) the loss for everyone of scientific information that would be destroyed or complicated by terraforming. Assuming one can reasonably surmise that the scientific knowledge lost is not extremely consequential, it is unlikely that any impartial rational person would want to publicly allow such a tradeoff. Accordingly, the violation is not justified, i.e. the terraforming should not be
prohibited. This conclusion assumes that prior to the onset of any terraforming, enough scientific study of Martian microbes, surface chemistry, atmosphere, etc. has been completed, and/or efforts to preserve microbial life in localized controlled environments have been sufficiently successful, so that a reasonable person could judge that the most consequential scientific knowledge to be garnered from such study has been attained or is still attainable.
PART IV: EVALUATIONS
Space exploration and environmentalism are often seen as enterprises with radically different aims and activities. An assumption of this paper is that the two enterprises, increasingly, are becoming entwined in mutually reinforcing ways. That being said, one can flesh out some differences in outlook by asking how environmentalists and space exploration advocates, respectively, might evaluate the practical ethical system that has been described and applied in this paper.
Environmentalists
Environmentalists who subscribe to some notion of “intrinsic rights” (and not all do) might resist the system’s decision to resolve the demarcation problem by drawing the line at sentience. And some environmentalists, especially if they react with certain hands-off instincts born on Earth, would dislike the system’s general permissiveness with respect to human activities on Mars (at least, as demonstrated in this paper’s analysis of the application Scenario). On the other hand, environmentalists would no doubt appreciate two particular characteristics of the system: (1) it can allow great concern for sentient creatures—which of course includes many of the higher animals on Earth; and (2) it gives full recognition to certain long-term environmental benefits, in particular terraforming benefits (practice in biospheric management) and what are termed Spaceship Mars benefits—benefits connected with having two viable biospheres in our solar system.
Space Exploration Advocates
Space exploration advocates would like the system’s general permissiveness with respect to human activities on Mars, and in particular its shifting of the burden of justification to those who would prohibit such activities. And they would be pleased that (at least, as demonstrated in this paper’s analysis of the Scenario) those who would prohibit such activities would probably be hard pressed to justify the prohibitions. On the other hand, space advocates would not like the flip side of the coin of permissiveness—namely that the system provides no moral imperative, no positive duty to engage in space activities. It posits no panspermic imperative, no human responsibility or destiny to propagate ecosystems throughout the solar system. Even something as noble as trying to protect a biosphere from an incoming asteroid—although in the service of a moral ideal—is probably not required.
Points of Agreement
Both environmentalists and space exploration advocates, arguably, could agree that the system possesses the following characteristics, and furthermore that these characteristics are desirable ones:
1. Disinterestedness: The system and associated ethical theory have been developed by an unaligned professional, i.e. a professional moral philosopher with no discernible interest in the respective agendas of environmentalism and space exploration.
2. Rigor and Practicality: It is based on a philosophically justified and conceptually tight theory, one which among other things directly addresses the demarcation problem. At the same time, the theory gives rise to a very practical ethical system.
3. Generativity: When applied to a particular problem, the system readily generates a wide spectrum of ethical considerations, ones which seem to be the relevant ones, and moreover provides a procedure for weighing and balancing them.
4. Flexibility: To some extent, different people can balance relevant factors differently. There may be some disagreement about how to rank different evils, or about their magnitude.
5. Focus on Empirical Facts: The ethical analysis, to be sound, needs to be continually informed with the latest relevant scientific facts. In addition, if people disagree about how to rank evils or their magnitude, the system encourages them to talk about their disagreement in ways that lead toward the facts (and perhaps toward resolvable factual disagreements).17
APPENDIX: THE MORAL SYSTEM
Following is a brief summary of Bernard Gert’s Morality: A New Justification of the Moral Rules (Oxford University Press, 1988). Numbers in parentheses refer to page numbers in the text.
Morality as a Justified Public System
Morality is more than simply a code of conduct adopted by a group. Rather, it is a “public system”, a system of conduct understood by all to whom it applies. Unlike other public systems, morality applies to all rational persons. A justified or rational morality is “a public system that all impartial rational persons would advocate adopting to govern the behavior of all rational persons”(5). To this formal description, one needs to add features which give morality content. Hence the following definition: “Morality is a public system applying to all rational persons governing behavior which affects others and which has the minimization of evil as its end, and which includes what are commonly known as the moral rules as its core”(6). The first task of moral philosophy is to examine the commonly recognized moral rules, such as “Don’t kill” and “Don’t lie”, and determine if these rules—perhaps adapted or supplemented—can form the core of a public system that all impartial rational persons would advocate adopting. The second task is to set out the procedure for determining morally acceptable violations of these rules. The third task is to show that the system is justified, meaning that it would be supported by all impartial rational persons.
Rationality
A person’s actions are irrational if the person acts on certain irrational desires. There are certain desires that it is irrational to act upon simply because one feels like it, i.e. without some reason. These basic irrational desires include the desire to die (i.e. to permanently lose all consciousness), the desire to suffer pain and other unpleasant feelings, the desire to be disabled, the desire to suffer loss of freedom (including opportunities), and the desire to suffer loss of pleasure. To act on one of these irrational desires simply because one feels like it is to act irrationally. Reasons are conscious beliefs which can make rational what would otherwise be irrational action. Not surprisingly, these include beliefs that the action will decrease the chances of dying, suffering pain, etc., or conversely that the action will increase ability, freedom, or pleasure. Significantly, reasons are not limited to considerations of self-interest. That is, the belief that an action will increase the ability, freedom, or pleasure of someone else is not an irrational belief. We are rationally allowed (although not required) to act contrary to our self-interest in order to benefit others.
Good and Evil
Evil plays a more important role in morality than good does, because it is easier to specify what is evil. An evil is the object of an irrational desire. This definition provides a list of evils: death, pain, disability, loss of freedom, and loss of pleasure. No rational person (insofar as he is rational) desires these things for himself without a reason. It seems logical to try to define “good” as what is desired by all rational persons. The only thing we can be certain all rational persons desire is the absence of evils. However, if we define
“good” as that which no rational person will avoid without a reason, we come up with a list of obvious goods: ability, freedom, and pleasure. Others would be health, wealth, knowledge, love, and friendship. A great many things are neither good nor evil. Although a good is always better than an evil, rational people will not always agree which of two evils is worse, or which of two goods is better. Such differences in the ranking of evils and goods mean that rational people can—within limits—advocate different courses of action even when they agree on the facts.
Moral Rules
What are the characteristics which distinguish what are ordinarily regarded as basic moral rules (e.g., “Do not kill”) from what are not so regarded (e.g., “You may only move the bishop diagonally”)? First, the basic moral rules are universal, meaning they apply to all persons who can understand and follow them. Second, they must be known by all rational persons in all societies at all times. Third, they must be rules that rational persons in all societies at all times might have acted upon or might have violated. This last characteristic means they are necessarily very general, with no references to particular persons, places, or times. Their universality and generality, however, do not mean they are absolute. To the contrary, they all have exceptions, i.e. there are circumstances in which breaking the rule is not acting immorally.
The content of moral rules is limited: “Moral rules do not tell one to promote good, or even to prevent evil, but to avoid causing evil”(70). Therefore, although one ought to act in accordance with the moral rules at all times, this is ordinarily no great burden. Moral rules prohibit actions which cause, or increase the probability of causing, people to suffer evils. Therefore, all impartial rational persons would generally advocate acting in accordance with the moral rules, barring justified exceptions. Given their universality and generality, they can form the core of a public system that applies to all rational persons.
Impartiality
One is impartial with regard to a group in a given respect if one does not favor any member of the group over any other member in that respect. One’s decision with respect to the group will be impartial if all information is withheld as to which member will benefit from the decision. The moral system is concerned with impartiality with respect to the moral rules. This impartiality is with regard to a group that includes oneself and, at least, all other moral agents. Moral agents are all those to whom the moral rules apply, i.e. all rational persons with certain relevant voluntary abilities. For reasons of symmetry one might hold that a moral agent should impartially follow the moral rules with regard to a group consisting of all moral agents. However, many rational persons hold that the group should be expanded to include more than moral agents. At a minimum, one
would want the group to include not only all presently existing moral agents, but all who were ever moral agents and remain persons, i.e. are still capable of any conscious awareness. One possible enlargement of this minimal group is to add presently existing sentient beings who are potential moral agents. This would add human infants, and human fetuses after they had become sentient. A second enlargement is to add actual future moral agents, e.g. people living 200 years in the future. With such an enlargement one would have to deal with uncertainties, but nevertheless it would be seriously immoral, for example, to pollute the environment in ways we know would cause serious evils to future generations. Proponents of such an enlargement might be expected to want to include future potential moral agents also (i.e. future human infants and fetuses). A third enlargement is to add any presently existing sentient beings irrespective of rational status, which would include many kinds of animals. Again, proponents of such an enlargement might be expected to want to include future sentient beings, as well as future potential sentient beings. Gert argues that “one may expand the group on any basis whatsoever, as long as one is prepared to have all moral agents act impartially with regard to this group whenever considering a violation of the moral rules. However, no rational person would accept any enlargement of the group beyond all actual and potential present and future sentient (conscious) beings, and most would not accept a group this large”(90). One reason for not accepting enlargement is that every enlargement brings an evil—restriction of freedom—to those already in the group.
Impartially following the moral rules means that one does not violate a moral rule with regard to any member of the group unless one would allow everyone to violate that same moral rule with regard to any member of the group in the same circumstances. Note that a rational person need not be impartial in this sense. Impartiality is allowed but not required by rationality. But only if one is both rational and impartial with respect to the moral rules will one act in a morally acceptable way. However, this does not mean that all rational impartial persons will always agree. They may sometimes disagree, if they interpret the rule governing their decision differently. For the moral rules to be followed impartially means that acceptable violations of the rules must be determined by a procedure that is known and acceptable to all moral agents. The procedure cannot rely on facts not known to, or beliefs not held by, all rational persons. When violations are determined by such a procedure, they are “publicly allowed”. Moral impartiality requires that one never violate a moral rule unless one can advocate that such a violation be publicly allowed.
Justification of the First Five Moral Rules
A justified moral rule—one which can be part of a rational morality—must be one towards which all impartial rational persons adopt the moral attitude. The moral attitude is that the moral rule should be followed impartially by everyone with regard to all those protected by the rule. A rational person wants protection against certain consequences, namely death, pain, disability, loss of freedom, and loss of pleasure. Consider the following five rules:
1. Don’t kill.
2. Don’t cause pain.
3. Don’t disable.
4. Don’t deprive of freedom.
5. Don’t deprive of pleasure.
A rational person would take the following “egocentric attitude” toward these rules: “I want all other people to obey the rule with regard to all for whom I am concerned (including myself) except when I have (or would have if I knew the facts) a rational desire not to have the rule obeyed with regard to them”(105). How can this egocentric attitude (i.e. the limitation to the sphere of one’s concern) be transformed into an impartial attitude? Gert concedes that “In a very important sense, this problem cannot be solved”(106). However, if the rational person in question considers the rules as public rules that are to be followed by all rational persons, then the egocentric sphere of concern must expand to include at least all rational persons. Alternatively, one might simply assume what is universally acknowledged—that moral rules require impartial obedience. These rules are not to be followed with blind obedience. A violation of a rule is strongly justified if all impartial rational persons would advocate that the violation be publicly allowed (e.g. breaking a minor promise to save a life, or causing suffering as a necessary part of medical treatment). A violation is weakly justified if impartial rational persons differ as to whether or not they would publicly advocate the violation (e.g. breaking a bad law in order to get it changed). Most genuine moral perplexities arise in connection with the latter.
There is a clear connection between breaking one of these moral rules and what is commonly thought of as violating a basic “right”. If a moral agent (as opposed to, e.g., a disease, a tornado, or a fall off a ladder) kills you, that moral agent has violated your basic right not to be killed by breaking the moral rule against killing with regard to you. Gert notes that speaking in terms of moral rules and their violation and speaking in terms of fundamental “rights” may simply be two ways of talking about the same thing. But he believes that the rules/violations terminology allows a conceptually clearer and more precise way of discussing the moral aspects of killing and of causing other evils.
Given that the moral rules can be justifiably violated, an impartial rational person’s attitude toward each rule is the following: “Everyone is always to obey the rule except when an impartial rational person can advocate that violating it be publicly allowed. Anyone who violates the rule when an impartial rational person could not advocate that such a violation be publicly allowed may be punished.” Only those rules towards which an impartial rational person would adopt this “moral attitude” are justifiable or genuine moral rules. Impartial rationality will thus require taking the moral attitude toward the moral rules. (It will not, alas, require acting in the way specified by that attitude.)
Justification of the Second Five Moral Rules
Consider these rules:
6. Don’t deceive.
7. Keep your promise.
8. Don’t cheat.
9. Obey the law.
10. Do your duty.
Rational persons want protection against deception, promise-breaking, cheating, law-breaking, and violation of duty because these things generally increase one’s chances of suffering an evil, and lessen the chance of obtaining the things one wants. Thus rational persons will want to obey these rules unless an impartial rational person could publicly allow the violation. In short, these five rules are justifiable in that an impartial rational person would adopt the above-mentioned moral attitude toward these rules. As with the first five moral rules, the moral attitude allows the possibility that violating a rule can be justified. In analyzing whether a violation is justified, only certain features of the violation are relevant, i.e. only certain features would affect whether or not some impartial rational person would advocate that the violation be publicly allowed. The most important of these features can be fleshed out with the following questions:
1. What moral rule is being violated?
2. What evils are being caused, avoided, or prevented by the violation?
3. What are the relevant desires of the person toward whom the rule is being violated?
4. What are the relevant beliefs of the person toward whom the rule is being violated?
5. Is there a relationship between the person violating the moral rule and the person toward whom it is being violated such that the former has a duty to violate some moral rules with regard to the latter?
6. What goods are being promoted by the violation? (Gert believes this is morally relevant only when the previous feature applies.)
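The structure of this procedure can be caricatured in a short, purely illustrative sketch. The sketch is not part of Gert's text: all names (`Violation`, `classify`, `advocates_public_allowance`) are this summary's assumptions, and it deliberately compresses questions 3 through 6 into each impartial rational person's yes-or-no verdict on public allowance.

```python
# Illustrative sketch (not Gert's own formalization) of the two-step
# procedure: enumerate the morally relevant features of a proposed
# violation, then ask whether impartial rational persons would advocate
# that this kind of violation be publicly allowed.

from dataclasses import dataclass, field

# The basic evils: the objects of the basic irrational desires.
EVILS = ["death", "pain", "disability", "loss of freedom", "loss of pleasure"]

MORAL_RULES = [
    "Don't kill", "Don't cause pain", "Don't disable",
    "Don't deprive of freedom", "Don't deprive of pleasure",
    "Don't deceive", "Keep your promise", "Don't cheat",
    "Obey the law", "Do your duty",
]

@dataclass
class Violation:
    rule: str                 # Q1: which moral rule is being violated
    evils_caused: list        # Q2: evils caused by the violation
    evils_prevented: list     # Q2: evils avoided or prevented
    # Q3-Q6 compressed: after weighing relevant desires, beliefs,
    # relationships, and goods, each impartial rational person answers the
    # morally decisive question ("What effects would this kind of
    # violation have, if publicly allowed?") with a verdict.
    advocates_public_allowance: list = field(default_factory=list)

def classify(v: Violation) -> str:
    """Strongly justified if all impartial rational persons would advocate
    public allowance; unjustified if none would; weakly justified if they
    disagree (the locus of most genuine moral perplexity)."""
    assert v.rule in MORAL_RULES, "not a recognized moral rule"
    if all(v.advocates_public_allowance):
        return "strongly justified"
    if not any(v.advocates_public_allowance):
        return "unjustified"
    return "weakly justified"

# Example from the text: breaking a minor promise to save a life,
# with unanimity among the (toy) impartial rational persons assumed.
promise_break = Violation(
    rule="Keep your promise",
    evils_caused=["loss of pleasure"],
    evils_prevented=["death"],
    advocates_public_allowance=[True, True, True],
)
print(classify(promise_break))  # strongly justified
```

The point of the sketch is only to make the system's shape visible: the hard moral work lives entirely in producing the verdicts, which the code takes as given inputs rather than computing.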
Based on the analysis that these questions stimulate, one asks the “morally decisive question”, namely, “What effects would this kind of violation have, if publicly allowed?” Agreement is surely not certain, however, since people may have factual disagreements over what such effects would be, and may also disagree over how to rank the effects.
Moral Ideals
Moral ideals can be summarized as “prevent evil” or “do those things that lessen or are likely to lessen the amount of evil suffered by anyone”. Those who volunteer for relief work to help victims of warfare or natural disasters, for example, may be following the moral ideal to “prevent killing”, “prevent the causing of pain”, or “prevent the deprivation of freedom”. Morality requires that one always obey the moral rules (unless a violation is justified); it only encourages one to follow the moral ideals. One is not required to follow moral ideals at all, hence one is certainly not required to follow them impartially with respect to all. Morality does not have any positive goal at all. To the contrary, it sets limits on what can be done in following positive ideals. The centrality of rules as opposed to ideals is not only philosophically sound, but also has the practical benefit of doing away with the excuse that the demands of morality are too difficult for ordinary humans.
Why Should One Be Moral?
Why would a rational but not necessarily impartial person advocate that one act according to the moral system? One can imagine cases in which an unjustifiable violation of one of the second five moral rules will not lead to evil consequences. In such cases, why act morally? One possible answer is that it builds a more virtuous character. A more satisfying answer would be
that rationality requires acting morally, but unfortunately this goes too far. While rationality always allows acting morally, it does not always require acting morally.
REFERENCE NOTES
1. Richard O. Randolph, Margaret S. Race, and Christopher P. McKay, “Reconsidering the Theological and Ethical Implications of Extraterrestrial Life,” CTNS Bulletin, Vol. 17 No. 3, Summer 1997, at 6-7.
2. I hasten to add that I will merely be presenting and applying an ethical system developed by a professional moral philosopher.
3. Donald MacNiven, Creative Morality, Routledge, New York (1993), p. 203-4.
4. Richard O. Randolph, Margaret S. Race, and Christopher P. McKay, “Reconsidering the Theological and Ethical Implications of Extraterrestrial Life,” CTNS Bulletin, Vol. 17 No. 3, Summer 1997, at 7.
5. Bernard Gert, Morality: A New Justification of the Moral Rules, Oxford University Press (1988), p. 281.
6. Donald MacNiven, Creative Morality, Routledge, New York (1993), p. 6.
7. J. Baird Callicott, “Moral Considerability and Extraterrestrial Life,” in Beyond Spaceship Earth: Environmental Ethics and the Solar System, Eugene C. Hargrove (Ed.), Sierra Club Books (1986), p. 251.
8. Bernard Gert, Morality: A New Justification of the Moral Rules, Oxford University Press (1988), p. 300.
9. Donald MacNiven (Ed.), Moral Expertise, Routledge, New York (1990), Introduction p. xix.
10. Elliott Sober, “Philosophical Problems for Environmentalism,” in The Preservation of Species, Bryan G. Norton (Ed.), Princeton University Press (1986), p. 187.
11. E.g. Holmes Rolston, Environmental Ethics, Temple University Press (Philadelphia, 1988), p. 119: “...if all organisms are to be equally counted morally (=to be counted as equals?), how to escape a kind of paralysis of moral judgment—sometimes called Schweitzer’s dilemma—in which we are unable to weigh competing claims. There are no criteria for judgment.”
12. Roderick F. Nash, The Rights of Nature, University of Wisconsin Press (1989), p. 61, quoting Albert Schweitzer, The Animal World of Albert Schweitzer, Charles R. Joy, trans. (Boston, 1950), p. 189-90.
13. J. Baird Callicott, “On the Intrinsic Value of Nonhuman Species,” Chap. 6 of The Preservation of Species, Bryan G. Norton (Ed.), Princeton University Press (1986), p. 155.
14. Gert notes in the preface, “I have tried particularly hard to be clear because I agree with Hobbes that truth arises more easily from error than from confusion. This overall clarity should make it easier for philosophers to find the remaining errors and omissions and to spot any remaining obscurity.” Bernard Gert, Morality: A New Justification of the Moral Rules, Oxford University Press (1988), Preface p. xiii.
15. Id. at 90.
16. Id. at 88.
17. Cf. Gert at 142: “...in real life situations most moral disagreement is the result of different beliefs about the facts, especially different beliefs about the probability of various evils occurring”.