ISBN-13: 9781119755579 / English / Paperback / 2023 / 276 pp.
"Overall, This is Technology Ethics: An Introduction is well written, philosophically engaging, thought provoking, and timely--considering our society is in the midst of working out the ethical implications surrounding emerging technologies. This book will be of benefit to instructors teaching undergraduate and graduate courses in technology ethics or introductory ethics courses that want to incorporate technology ethics into the curriculum. Reading the entire book has academic merit and is a fascinating read, but you could also choose to only read certain chapters. This is a testament to the well-organized structure of the book. Finally, footnotes in philosophy books tend to be exclusively concerned with written material. Nyholm extends the content of his foot-notes by pointing readers to podcasts and videos that are sure to be useful for students, instructors, and those that want to take a deep dive into the fascinating world of technology ethics." (AI and Ethics, 2023).
Table of Contents
Preface
Acknowledgments
1 What is Technology (From an Ethical Point of View)?
1.1 A Hut in the Black Forest
1.2 The Question Concerning Technology: The Instrumental Theory of Technology from Martin Heidegger to Joanna Bryson
1.3 "Post-Phenomenology" and the Mediation Theory of Technology
1.4 Technologies Conceived of as Being More Than Mere Means or Instruments
1.5 Technologies Regarded as Moral Agents
1.6 Technologies Regarded as Moral Patients
1.7 Some of the Key Types of Technologies That Will Be Discussed at Greater Length in Later Chapters of the Book
Annotated Bibliography
2 What is Ethics? (and, in Particular, What is Technology Ethics)?
2.1 Two Campaigns
2.2 The Ethics of Virtue and Human Flourishing in Ancient Greece
2.3 Ancient Chinese Confucianism and Traditional Southern African Ubuntu Ethics
2.4 Kantian Ethics
2.5 Utilitarianism and Consequentialist Ethical Theories
2.6 If Ethics More Generally Can Be All the Things Discussed in the Previous Sections, then What Does this Mean for Technology Ethics in Particular?
2.7 How Technology Ethics Can Challenge and Create a Need for Extensions of More General Ethical Theory
Annotated Bibliography
3 Methods of Technology Ethics: The Ethics of Self-Driving Cars as a Case Study
3.1 Methodologies of Ethics?
3.2 The Ethics of Self-Driving Cars
3.3 Ethics by Committee
3.4 Ethics by Analogy: The Trolley Problem Comparison
3.5 Empirical Ethics
3.6 Applying Traditional Ethical Theories
3.7 Which Method(s) Should We Use in Technology Ethics? Only One or Many?
Annotated Bibliography
4 Artificial Intelligence, Value Alignment, and the Control Problem
4.1 Averting a Nuclear War
4.2 What Is Artificial Intelligence and What Is the Value Alignment Problem?
4.3 The Good and the Bad, and Instrumental and Non-Instrumental Values and Principles
4.4 Instrumentally Positive Value-Alignment of Technologies
4.5 Instrumentally Negative Misalignment of Technologies
4.6 Positive Non-Instrumental Value Alignment of Technologies
4.7 Negative Non-Instrumental Value Misalignment of Technologies
4.8 The Control Problem
4.9 Control as a Value: Instrumental or Non-Instrumental? And Are There Some Technologies It Might Be Wrong to Try to Control?
Annotated Bibliography
5 Behavior Change Technologies, Gamification, Personal Autonomy, and the Value of Control
5.1 A Better You?
5.2 Behavior Change Technologies and Gamification
5.3 Control: Three Basic Observations
5.4 Key Dimensions of Control Discussed in Different Areas of Philosophy
5.5 Behavior Change Technologies and the "Subjects" and "Objects" of Control
5.6 The Value and Ethical Importance of Control
5.7 Concluding This Chapter
Annotated Bibliography
6 Responsibility and Technology: Mind the Gap(s)?
6.1 Two Events
6.2 What Is Responsibility? Different Ways in Which People Can Be Held Responsible and Different Things for Which People Can Be Held Responsible
6.3 Responsibility Gaps: General Background
6.4 Responsibility Gaps Created by Technologies
6.5 Filling Responsibility Gaps by Having People Voluntarily Take Responsibility
6.6 Should We Perhaps Welcome Responsibility Gaps?
6.7 Responsible Machines?
6.8 Human-Machine Teams and Responsibility
6.9 Concluding This Chapter
Annotated Bibliography
7 Can a Machine be a Moral Agent? Should any Machines be Moral Agents?
7.1 Machine Ethics
7.2 Arguments in Favor of Machine Ethics and Types of Artificial Moral Agents
7.3 Objections to the Machine Ethics Project
7.3.1 First Objection: Morality Cannot Be Fully Codified
7.3.2 Second Objection: It Is Unethical to Create Machines that We Allow to Make Life-and-Death Decisions About Human Beings
7.3.3 Third Objection: Moral Agents Need to Have Moral Emotions and Machines Do Not/Cannot Have Emotions
7.3.4 Fourth Objection: Machines Are Not Able to Act for Reasons
7.3.5 Brief Reminder of the Objections to Machine Ethics Considered Above
7.4 Possible Ways of Responding to the Critiques of the Machine Ethics Project
7.4.1 First Response: Bottom-Up Learning Rather Than Top-Down Rule-Following
7.4.2 Second Response: Resisting the Idea That Machines/Technologies Should Ever Be Full Moral Agents
7.4.3 Third Response: Switching to Thinking in Terms of Human-Machine Teams Rather Than in Terms of Independent Artificial Moral Agents
7.5 Concluding This Chapter
Annotated Bibliography
8 Can Robots be Moral Patients, with Moral Status?
8.1 The Tesla Bot and Erica the Robot
8.2 What Is a Humanoid Robot? And Why Would Anybody Want to Create a Humanoid Robot?
8.3 Can People Act Rightly or Wrongly Toward Robots?
8.4 Can Robots Have Morally Relevant Properties or Abilities?
8.5 Can Robots Imitate or Simulate Morally Relevant Properties or Abilities?
8.6 Can Robots Represent or Symbolize Morally Relevant Properties or Abilities?
8.7 Should We Be Discussing--Or Perhaps Better Be Avoiding--the Question of Whether Robots Can Be Moral Patients, with Moral Status?
Annotated Bibliography
9 Technological Friends, Lovers, and Colleagues
9.1 Replikas, Chuck and Harmony, and Boomer
9.2 Ethical Issues That Arise in This Context Independently of Whether Technologies Can Be Our Friends, Lovers, or Colleagues
9.3 Technological Friends
9.4 Technological Lovers and Romantic Partners
9.5 Robotic Colleagues
9.6 Are These All-or-Nothing Matters? Respect for Different Points of View
9.7 The Technological Future of Relationships
Annotated Bibliography
10 Merging with the Machine: The Future of Human-Technology Relations
10.1 The Experience Machine
10.2 Different Ways of Merging with--Or Merging with the Help of--Technology
10.3 Transhumanism, Posthumanism, and Whether We Should Become--Or Perhaps Already Are--Cyborgs
10.4 Some Critical Reflections on the Proposals to Merge with Technologies and the Arguments and Outlooks Used in Favor of Such Proposals
10.5 Concluding Reflections: Revisiting the Hut in the Black Forest
Annotated Bibliography
Index
SVEN NYHOLM is Professor of the Ethics of Artificial Intelligence at the Ludwig Maximilian University of Munich, a member of the Ethics Advisory Board of the Human Brain Project, and an Associate Editor of Science and Engineering Ethics. He is the author of Humans and Robots: Ethics, Agency, and Anthropomorphism and Revisiting Kant's Universal Law and Humanity Formulas.