Sharon Chau

Moving Beyond Deontology


  • Deontology is a school of thought that judges the morality of choices through an action's intentions. It is a rule-based system of ethics which disregards an action's outcomes.

  • Judging actions on their intentions can be problematic because it is intuitively irrational, there are conflicts between different duties, it allows strategic manipulation, and it cannot explain the paradox of relative stringency.

  • It can mandate terrible outcomes, e.g. you cannot kill one person to save a million because killing is categorically wrong.

  • Threshold deontology is a school of thought that aims to respond to the problem of creating disastrous outcomes but is difficult to justify.

  • Consequentialism is a school of thought that judges the morality of choices through an action's outcomes.

  • There are three categories of justifications for consequentialism: appealing to our moral intuitions, using universalisability and prescriptivity, and through contractarian justification.

  • Critiques of consequentialism include the practical difficulty of analysing numerous consequences of an action and its neglect of individual rights.

  • However, we can reasonably estimate actions' outcomes based on general rules, and allowing worse consequences usually means other people's rights are violated anyway, even if you are not the direct perpetrator.

  • Hence, the wrongness of an action ought to depend on the action's consequences.

Moral Wrongness

Morally wrong acts are acts that would be forbidden by principles accepted by people in a social contract (Rawls 1971; Gauthier 1986), or forbidden by principles these people could not “reasonably reject” (Scanlon 2003). In this essay, a comparative analysis will be made between deontology, which determines the wrongness of an action by the actor’s intention, and consequentialism, which judges the wrongness of an action by its effects. The logical coherence of these two schools of thought will be analysed, and a conclusion reached as to which ethical system should be followed.


Deontology judges the morality of choices by the intentions of the actor instead of the action’s effects. It holds that some choices can never be justified by their effects - no matter how good their consequences, some choices are morally forbidden. By this logic, agents cannot make certain wrongful choices, even if doing so would decrease the number of wrongful choices made in the future. What makes a choice right is its conformity with a moral norm. In particular, the agent-centered theory is a branch of deontology which concerns itself with the actor’s intentions and obligations (Scheffler 1988; Kamm 2007). The theory posits that we all have permissions and obligations that give us agent-relative, objective reasons for action. Our categorical obligations are to keep our own agency free of moral taint; they do not primarily concern how our actions cause harm. Importantly, the theory also differentiates the role of intention from that of other mental states, including belief, risk and cause, in constituting the morally important kind of agency. More specifically, there are five distinctions from causings: 1. omissions; 2. allowings (Kamm 1994); 3. enablings (Hart and Honore 1985); 4. redirectings (Thomson 1985); and 5. accelerations (Williams 1961). The famous doctrine of double effect aptly demonstrates this. It follows traditional deontology in stipulating that we are categorically forbidden to intend evils, even if doing such acts would minimise similar acts in the future. However, if we only risk, cause, or predict that our acts will have evil consequences, we might be able to justify the acts by their evil-minimising consequences.

Problems with Deontology

There are many criticisms of judging the morality of actions based on intentions. Firstly, there is an intuitive irrationality in our having duties or permissions to make the world morally worse. Compared to the intuitively justifiable claim that an action’s wrongness should be judged by whether it creates better consequences, deontology needs a more rigorous model of rationality to justify judging actions by intent. Secondly, there are conflicts between certain duties and rights. As Kant himself points out, “a conflict of duties is inconceivable” (Kant 1780, p. 25). Some deontologists have responded by reducing the categorical force to merely prima facie duties (Ross 1930, 1939). Conflict between merely prima facie duties is unproblematic so long as it does not infect what one is categorically obligated to do, which is what overall, concrete duties mandate. However, this response faces the danger of collapsing into consequentialism, depending on: 1. whether any good consequences are eligible to justify breach of prima facie duties; 2. whether only such consequences over some threshold can do so; and 3. whether only threatened breaches of other deontological duties can do so. Thirdly, there is the problem of strategic manipulation: agents can avoid prohibitions by manipulating their means, such as using foresight to achieve permissibly what deontology would otherwise categorically forbid (Katz 1996). Fourthly, the paradox of relative stringency arises when deontology holds that all deontological duties are categorical, yet that some duties are more stringent than others. As Frey notes, “there cannot be degrees of wrongness with intrinsically wrong acts” (Frey 1995). These four criticisms point to fundamental problems with judging the wrongness of a person’s actions by their intentions - it puts the self above the common good, it creates unresolvable conflicts between duties and a paradox of stringency, and it allows room for manipulation and avoidance.

Another crucial problem with deontology is that compliance with it can bring about disastrous consequences. For example, a person is categorically forbidden by deontology from torturing an innocent person to save the lives of a million other people. Many philosophers view this as a reductio ad absurdum: the suffocating shackles of deontology lead to the deaths of millions of people. In response, some deontologists bite the bullet. As Kant famously quips, “better the whole people should perish” than that injustice be done (Kant 1780). This absolutist conception, which holds that conformity to moral norms has absolute and non-overridable force, is unpalatable for many philosophers. Another response appeals to the problem of “aggregation”, arguing that harms should not be aggregated; this decreases the severity of the practical harms that consequentialists pile onto deontologists. As Taurek asserts, we should not assume harms to two people to be twice as bad as the same harm to one person - each of the two suffers only his own harm and not the harm of the other (Taurek 1977). Similarly, Nozick appeals to the separateness of persons, arguing that there exists no entity which suffers twice the harm when two separate people are harmed (Nozick 1974). Other responses by deontologists have been evasive and unpersuasive. Some assert that examples such as torturing one innocent to save a million can never arise, because a truly moral agent realises it is immoral even to think about violating moral norms in order to avert disaster (Anscombe 1958; Geach 1969; Nagel 1979). This avoidance of thorny philosophical scenarios is dubious. Other philosophers argue that the moral appraisal of acts should be separated from the moral blameworthiness or praiseworthiness of the person who commits the act. However, this appears logically inconsistent with the deontological view that only the actor’s intentions matter in determining the wrongness of an action (Alexander 2004). Hence the responses by deontologists to this critique are woefully inadequate.

Threshold Deontology

A response to the difficulties posed by Kantian absolutism is the emergence of threshold deontology. This ethical system argues that categorical imperatives dictate our behaviour up to a point despite adverse consequences; but when the consequences become so dire that they cross a stipulated threshold, consequentialism takes over (Moore 1997). In essence, the theory sets a fixed threshold of awfulness beyond which morality’s categorical norms lose their overriding force. This responds to the reductio ad absurdum levied against Kantianism for its allowance of absurd consequences. For example, torture is not permitted when it produces no good consequences, but is permitted when it saves a million lives. However, many problems arise for any deontological theory with a threshold. The most obvious challenge is to provide a principled, non-arbitrary justification for where the threshold sits (Alexander 2000; Ellis 1992). Even if a threshold is set up and logically justified, it is problematic to suggest that if we are one life-at-risk short of the threshold, we should be obliged to pull one more person into danger, who will then be saved, along with the others at risk, by killing an innocent person (Alexander 2000). The other problem is that a threshold-based deontology threatens to collapse into consequentialism. In fact, an agency-weighted form of consequentialism is almost equivalent to threshold-determined deontology (Sen 1982).
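The structure of threshold deontology, and the collapse worry it faces, can be sketched informally (this formalisation is my own gloss, not Moore's notation): let $C(A)$ mean that act $A$ violates a categorical norm, let $H(A)$ be the harm $A$ averts, and let $T$ be the stipulated threshold.

```latex
A \text{ is permissible} \iff \neg C(A) \;\lor\; H(A) > T
```

On this reading the two problems in the paragraph above are visible at a glance: nothing internal to the theory fixes the value of $T$, and as $T$ shrinks toward zero the criterion becomes pure consequentialism.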

Up until this point, it appears that deontology does not stand up adequately to its many criticisms. It must either bite the highly unpalatable bullet that enormous amounts of harm may be done for the sake of an individual keeping their moral conscience clean, or become inconsistent in principle by suggesting thresholds of harm, at which point thorny practical problems and the threat of collapse into consequentialism emerge.

Let us now examine the theory of consequentialism and how consequences of actions stand as methods to determine an action’s wrongness.


Consequentialism is a theory that stipulates actions should be morally assessed only by the outcomes they create. Consequentialists have to provide justifications for the outcomes they find intrinsically valuable, usually known as the Good. The theory then states that actions which increase the Good are those which are morally right to perform. Consequentialists have different interpretations of how the Good should be defined - monist utilitarians define the Good as pleasure, happiness, or “welfare”, while pluralists count the distribution of the Good as part and parcel of the Good itself. The “utilitarianism of rights” is another interpretation, which posits that rights should not be violated, and duties should be kept, for the Good to be maximised (Nozick 1974). Classic utilitarianism is the most well-known form of consequentialism, stipulating hedonistic act consequentialism. Act consequentialism argues that an act is morally right if and only if that act maximises the good. As Moore explains, classic utilitarians hold that an act is the morally correct one if the total amount of good for all, minus the total amount of bad for all, is greater than this net amount for any incompatible act available to the agent on that occasion (Moore 1912).
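Moore's statement of the classic utilitarian criterion can be put in a compact form (a standard formalisation, not Moore's own notation): writing $U(A)$ for the total good minus the total bad that act $A$ produces, summed over everyone affected, the criterion is

```latex
A \text{ is right} \iff U(A) \ge U(B) \text{ for every alternative act } B \text{ available to the agent}
```

so rightness is a comparative matter: an act is assessed not against a fixed standard but against the full set of incompatible acts open to the agent on that occasion.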

Justifications for Consequentialism

Multiple justifications of consequentialism have been presented, with varying success. The most common is the appeal to our intuitions. Most people begin with the presumption that we morally ought to make the world better when we can, so the consequentialist factor is something everyone accepts should be taken into account when assessing the morality of actions. The question then is only whether any moral constraints or moral options need to be added to the basic consequentialist factor in moral reasoning (Kagan 1989). If no objection reveals any need for anything beyond consequences, then consequences alone seem to determine what is morally right or wrong, just as consequentialists claim. As Sidgwick emphasises, the principle of utility follows from general self-evident principles (Sidgwick 1907). These are three in number: universalisability, which states that if an act ought to be done, then every other equivalent act ought to be done; rationality, which holds that one ought to aim at the general good instead of a particular good; and equality, which holds that the good of any single individual is worth the same as that of any other. Intuitively, many people do believe that the maximisation of good outcomes is the ethical theory that best aligns with our biological and psychological desires, so this proof is reasonably successful.

The second category of justifications rests on non-normative facts or non-moral norms. Mill is infamous for his “proof” of the principle of utility from empirical observations about what we desire (Mill 1861). He further deflects the question of proof by asserting that “questions of first principle are not amenable to direct proof”. In contrast, Hare tries to derive his version of utilitarianism from substantively neutral accounts of morality, of moral language, and of rationality (Hare 1963). He argued for a theory he termed universal prescriptivism, which states that moral terms such as “good”, “ought” and “right” possess the logical properties of universalisability and prescriptivity. Universalisability means that moral judgments must identify the situation they describe using a finite set of universal terms, while prescriptivity states that moral agents must perform obligated acts whenever they can. Hare argued that universalisability and prescriptivity combined lead to a certain form of consequentialism - preference utilitarianism. He used this logic to justify his position as a two-level utilitarian, holding that a person’s moral decisions should be based on a set of “intuitive” moral rules, except in rare situations where they have to engage in explicit moral reasoning.

The third category of proofs relies on contractarian justification. This Hobbesian type of proof analyses how purely self-interested people would act, and would wish other people to act, within a society. Harsanyi argues that all informed, rational people whose impartiality is ensured because they do not know their place in society would favour a kind of consequentialism (Harsanyi 1978). This is because people behind this Rawlsian veil of ignorance would prefer utility to be maximised in the world they inhabit. The obligation of beneficence, illustrated by Peter Singer’s famous drowning child example, shows that one would prefer to be constrained by moral rules that cause limited harm to oneself for the greater good of society, as one cannot be sure on which side one will land. This proof for consequentialism is strong, as it uses our rationality and choice-making ability to show that consequentialism should logically be the ethical system we adhere to.
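Harsanyi's argument has a simple formal core, which can be sketched in standard notation (the notation here is illustrative, not Harsanyi's own): an impartial chooser behind the veil, equally likely to occupy any of the $n$ positions in society, maximises expected utility, which is simply average utility:

```latex
W \;=\; \sum_{i=1}^{n} \frac{1}{n}\, U_i \;=\; \frac{1}{n} \sum_{i=1}^{n} U_i
```

so rational self-interested choice under enforced impartiality yields a form of average utilitarianism: whatever maximises $W$ maximises the chooser's expected payoff.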

Criticisms of Consequentialism

Criticisms have been levied against consequentialist justifications. One pragmatic criticism is that it is simply impractical to spend large amounts of time making consequentialist calculations. According to Bentham’s hedonic or felicific calculus, there are seven criteria one has to work through to determine which action creates the best outcome. In general, it seems tedious and difficult to carry out such a calculation before every decision. Consequentialists and utilitarians have responded strongly to this claim. Bentham himself wrote that his calculus should not be “strictly pursued previously to every moral judgment” (Bentham 1789). Mill agreed that it was a “misapprehension” of utilitarianism to believe that we should calculate decisions at every turn - he believed in a set of general guidelines serving as “landmarks and direction-posts” (Mill 1861). Some consequentialists further argue that most agents usually ought to follow their moral intuitions, because these intuitions evolved to lead us to perform acts that maximise utility in general (Hare 1981). Following these responses, it seems reasonable to expect actors to make decisions based on general rules, past experience or their intuitions, which renders the criticism ineffective.

A second problem faced by consequentialism is its neglect of justice and rights. In the quest to maximise utility or happiness, consequentialism can trample on individual rights. An apt illustration is the famous thought experiment of killing one person to save five patients who desperately need organs (Foot 1967). If we were to judge the wrongness of a person’s actions solely by their consequences, the murdering doctor would be performing the morally correct action in actively causing a person’s death. Even if this seems intuitively implausible, we should consider what happens otherwise. If the doctor had not performed the transplants, five other people would die, causing a great deal of harm to them and grief to their families. The disutility this causes would far outweigh the harm done to the one person who unfortunately has to be sacrificed for the greater good. Although this position appears extreme, we have to weigh the negative consequences of not allowing the saving of five lives at the expense of one. Hence this example demonstrates that utility should be the overriding force.


In conclusion, judging a person’s actions based on their outcomes is the most logically consistent and justifiable position compared to judging a person’s actions based on intentions alone. It is intuitively plausible, as it accords with the human psyche of pain and pleasure, the two sovereign masters under whose governance we have been placed. It is contractarian, as people behind a veil of ignorance would prefer to live in a world which maximises utility and imposes moral obligations on people to help others at a small cost to themselves. Consequentialism also defends itself well against the main criticisms levied at it. This is especially true when compared to deontology, which faces many problems, including logical inconsistency, allowing the moral purity of oneself to selfishly trump all consideration for others, and the allowance of egregious harms, such as the deaths of a million innocents, because one adamantly refuses to torture one person. Hence, we should judge the wrongness of a person’s actions based on the action’s outcomes, not the person’s intentions.

