As you might know, I've been reading more about philosophy and psychology recently and have been thinking of a lot of interesting questions and learning a ton of new things.

Topics Discussed: consequentialism, machine ethics, artificial moral agents, deontology, autonomous vehicles, free will

When 23 of the world's most eminent artificial intelligence experts were asked for their predictions regarding the emergence of artificial general intelligence (AGI), the average date offered was 2099. Extrapolating from contemporary narrow artificial intelligence (which is necessarily teleological at least, as are all machines that have a function), it is intuitive to see the other two characteristics as at least plausible potential characteristics of any future AGI. In this paper, I will argue that Consequentialism represents the kind of ethical theory that is most plausible to serve as a basis for a machine ethic. This is to say that, at minimum, for a machine to be ethical it will need to commit to some token normative theory that is Consequentialist in kind. If there is to be a plausible foundational machine ethic, it will strictly be one ethic (Consequentialist, Deontological, or Aretaic, to name a few) and not an interplay between some or all of them. Does this, however, leave open the possibility that certain features of other ethical theories (Deontological, Rossian Moral Pluralistic, Aretaic, to name a few) may find an important place in some "complete" machine ethic on top of a Consequentialist foundation? While the former question is of primary concern, the latter will also be considered, and an alternative Deontological approach will be evaluated alongside the problem of moral conflict.

In consequentialism, it is the results of an action that define right behavior: the ends justify the means, and a moral rule can bend if the consequence of adhering to that rule is negative. Consequentialism is generally divided into a number of theories, including utilitarianism and ethical egoism, and there is disagreement about how it can best be formulated as a precise theory, so there are various versions of consequentialism. Crucially, it is possible to give an account of the value of states of affairs, and thus a comparative ranking of alternative states of affairs, without appeal to the concept of right action. A Consequentialist foundation also ensures the AMA knows when it has behaved ethically, since what makes an action right is given solely in terms of the value of the consequences related to that action.

One criticism of consequentialism is that it seems to lack empathy, or any general concern for the well-being of individuals. Another important criticism is that the "greater good" presupposes that there is some common thing that is undeniably "best": consequentialism contradicts moral relativism and assumes that one way is the right way, and that everyone should act in the way that best achieves this greater good. Depending on your opinions on moral relativism, this might be a downside to the philosophy.

Deontology, by contrast, is the moral theory that an action is right or wrong depending on the nature of the act itself. In deontological (or "duty-based") ethics, an action is considered morally good because of some characteristic of the action itself, not because the product of the action is good. It just requires that people follow the rules and do their duty. A common example used against this system of ethics is the case where a Nazi officer asks whether you are hiding a Jew in your house, and you do not lie about doing so because lying is seen as "wrong".

To see how a Consequentialist machine ethic might operate, consider an autonomous vehicle scenario. Vehicle A is on a collision course with C, and the collision would harm A's five occupants. If vehicle B were to remain in place, A would be forced to choose either S0 or S1. A third option requires B's cooperation; label this decision S2. Under S2, A's collision with C would be avoided, and its five occupants spared harm. Seeing as this joint decision between A and B results in the best state of affairs within the vehicles' power, this course of action is morally obligatory and is therefore enacted by the two vehicles.
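To make this decision procedure concrete, here is a minimal sketch in Python, assuming each candidate decision comes with a numeric measure of expected harm. The option names follow the example above; the harm figures and everything else in the snippet are hypothetical placeholders, not any published implementation.

```python
# Minimal consequentialist action selection: the right action is the one
# whose resulting state of affairs ranks highest. All figures below are
# hypothetical placeholders for illustration.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_harm: float  # expected harm to occupants under this decision

def state_value(option: Option) -> float:
    """Comparative value of the resulting state of affairs (higher is better)."""
    return -option.expected_harm

def choose(options: list[Option]) -> Option:
    """Rightness is given solely by the value of the consequences."""
    return max(options, key=state_value)

options = [
    Option("S0", expected_harm=5.0),  # B stays put; A's first alternative
    Option("S1", expected_harm=3.0),  # B stays put; A's second alternative
    Option("S2", expected_harm=0.0),  # joint decision: B cooperates, collision avoided
]

print(choose(options).name)  # -> S2, the morally obligatory choice on this view
```

Note that the ranking function never mentions right action at all; rightness falls out of the comparison of states of affairs, exactly as the Consequentialist account of value described above requires.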
One alternative to a Consequentialist foundation is a Kantian, Deontological implementation. Summarising substantially, this Kantian approach, which relies on nonmonotonic logic, fails to account for supererogation, cannot account for semidecidability of set membership (an important feature of first-order logic), and is typified by excessive specificity (not conducive to generalisability). Bottom-up learning strategies are another alternative. While learning approaches are theoretically promising, and have solid applications in other domains, they have yet to result in anything resembling a foundational ethical theory. Common to these bottom-up strategies, though, are their difficulty and uncertainty.

Bear in mind that when we are designing an AMA, the goal first and foremost is to have an agent that is ethically greater than or equal to a human moral agent. One might object that no machine could weigh everything that bears on a moral decision, but I believe this is overselling the size of the set of what is morally relevant (that which would be required for a moral decision). Like humans, AMAs would be capable of adopting heuristics to come to decisions.

While most of the challenges faced by the Deontological approach do not disqualify it outright, I believe that its failure to account for moral conflict (a situation where two equally weighty moral precepts apply) does. The problem of moral conflict is, of course, a general issue in normative ethics for all normative theories which are not monistic, but its consequences for AMA behaviour are particularly concerning. In narrow implementations, does the AMA throw an exception and simply cease operation? Or does the AMA simply continue towards its goal as if no moral dilemma was encountered at all? The most I will suggest here is that the designers of AGI ought not set up its AMA to fail by giving it what might be described as an incomplete ethic: one which cannot account for moral conflict.
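To make the failure mode concrete, here is a hypothetical sketch of a rule-based AMA hitting a conflict between two equally weighty precepts. The precepts, weights, and situation encoding are all invented for illustration; the comments inside the conflict branch correspond to the two unattractive behaviours just described.

```python
# Hypothetical sketch of the moral-conflict failure mode in a rule-based
# (deontological) AMA. Precepts, weights, and the situation encoding are
# invented for illustration only.

WEIGHTS = {"do_not_harm": 1.0, "keep_promises": 1.0}  # equally weighty precepts

def verdicts_for(situation: dict) -> list[tuple[str, str]]:
    """Return (precept, verdict) pairs triggered by the situation."""
    out = []
    if situation.get("action_harms_someone"):
        out.append(("do_not_harm", "forbidden"))
    if situation.get("action_fulfils_promise"):
        out.append(("keep_promises", "obligatory"))
    return out

def decide(situation: dict) -> str:
    verdicts = verdicts_for(situation)
    if not verdicts:
        return "permissible"  # no precept applies
    top = max(WEIGHTS[p] for p, _ in verdicts)
    strongest = {v for p, v in verdicts if WEIGHTS[p] == top}
    if len(strongest) > 1:
        # Two equally weighty precepts disagree: moral conflict.
        # Unattractive option 1: throw an exception and cease operation.
        raise RuntimeError("moral conflict: no verdict derivable")
        # Unattractive option 2 would be to fall through and act as if
        # no dilemma had been encountered at all.
    return strongest.pop()

# Keeping a promise that would harm someone triggers both precepts at once:
try:
    print(decide({"action_harms_someone": True, "action_fulfils_promise": True}))
except RuntimeError as err:
    print(err)  # the AMA here simply ceases operation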
A related line of thought goes under the heading "Better Deontology Through Consequentialism": properly understood, the Categorical Imperative emphasizes both obedience to rules consistent with universal moral law regardless of the circumstances, and the necessary achievement of desirable ends, such as the development of rational faculties that enable individuals to agree to be bound by universal moral law.

I'll likely give my opinion on this in another post, but for now let's just keep it in mind. Can you think of a time when your moral code was tested?
