
Moral AI
Book Introduction
A unique and balanced introduction to AI ethics, written jointly by a world-renowned philosopher, neuroscientist, and computer scientist.
“Despite centuries of effort, humans have not found a way to prevent moral errors. Perhaps an idealized moral AI could avoid such mistakes.” Moral AI is a book on ‘AI ethics’ co-written by three world-renowned scholars: a philosopher, a neuroscientist, and a computer scientist.
This book strikes a balance between excessive optimism and fear about artificial intelligence, and is a cutting-edge guide for those concerned about the ethical development and use of AI.
Covering the latest research in rapidly evolving AI technologies, including deepfakes, self-driving cars, autonomous weapons, and medical robots, it presents new ethical issues surrounding AI, such as algorithmic bias, privacy violations, and liability for accidents, with compelling case studies.
In particular, by proposing the unique idea of developing artificial intelligence equipped with human 'morality', it expands the scope of the 'AI ethics' discourse, which has focused on protecting human life, bodily safety, and basic rights, to include the development and use of technological tools that prevent human moral mistakes.
The authors' position on the use of AI technology is clear.
In short, it is difficult to “throw the ‘AI baby’ out with the bathwater.”
The idea is that it might even be 'immoral' to halt the development of AI that has already proven beneficial.
The authors argue that now is not the time to debate whether or not to use AI technology, but rather to closely examine the potential benefits and risks AI will bring, and to pool our wisdom to address the moral issues intertwined with AI technology (safety, fairness, privacy, transparency, freedom, and deception).
Comprised of seven chapters, the first half of this book primarily examines the concept and operating principles of artificial intelligence, its application cases, and ethical issues. The second half introduces how to implement "moral AI" technology and presents specific practical measures for making AI products ethical. This book will serve as a valuable guide for readers interested in emerging trends and ethical debates in AI technology, IT professionals grappling with the application of "AI ethics" to product development, and citizens contemplating how to leverage AI as a tool to lead a more just and democratic society.
Table of Contents
Preface
Introduction | What's the Problem?
Chapter 1 What is Artificial Intelligence?
'Narrow AI', 'General AI', 'Strong AI' ─ Choose a challenge │ Words to know when preparing a quiz show: GOFAI │ Teaching machines to learn │ What about deep learning and neural networks? │ Which AI is intelligent? │ What AI is lacking today │ It is becoming increasingly difficult to figure out what AI cannot do │ Who is making AI today?
Chapter 2: Can Artificial Intelligence Be Safe?
Is life with AI really a horrifying fantasy, straight out of science fiction? │ Safety Issues Surrounding AI Today │ A Few Case Studies │ Predictable Unpredictability
Chapter 3: Can AI Respect Privacy?
What is privacy? │ Why should we care about it? │ How does AI violate privacy? │ The AI ecosystem encourages the hoarding and sale of personal data │ How can we protect our privacy from AI threats? │ Do we value our privacy?
Chapter 4: Can Artificial Intelligence Be Fair?
What is justice? │ Who goes to jail before trial? │ Human judges vs. AI: who is more accurate? │ Making fairness explicit │ Human judges vs. AI: who is more biased? │ What about procedural justice? │ Will interpretability solve the problem? │ Fair AI
Chapter 5: Can we hold artificial intelligence (or its creators and users) accountable?
What is accountability? │ Is the human driver responsible? │ Is the pedestrian responsible? │ Is the AI involved responsible? │ Is Uber responsible? │ Is the Arizona state government responsible? │ Is AI responsible? │ The accountability gap
Chapter 6: Can artificial intelligence be equipped with human morality?
Top-down morality? │ Bottom-up morality? │ Combining the best of both worlds │ Who will get a kidney? │ How to embed morality in kidney distribution AI │ Artificially enhanced democracy
Chapter 7 What Can We Do?
The Complexity of AI Creation │ Why is there a gap between principles and practice in ethical AI? │ What can be done to bridge the gap between principles and practice in ethical AI? │ First Call to Action │ Second Call to Action │ Third Call to Action │ Fourth Call to Action │ Fifth Call to Action │ The Big Picture
Conclusion | It's up to us
Notes
Index

From the Book
At what point does written code become AI, rather than a simple computer program? Is AI truly intelligent? Can AI be conscious? Can AI be original and creative? Or does it simply blindly follow its creator's orders?
--- p.22
Despite its monumental success, Deep Blue was retired to the Computer History Museum in the United States shortly after its victory over Kasparov.
Why would that be? Because its creators knew that Deep Blue couldn't do anything but play chess.
This is the definition of a narrow AI system.
--- p.33
Imagine someone tricking an AI system into incorrectly diagnosing a politician with a serious medical problem.
As a result, his future political career would take a blow and he would undergo unnecessary treatment.
--- p.64~65
A team of security researchers was able to trick a Tesla Model S into swerving into the oncoming lane simply by placing three stickers on the road.
The car's AI vision system misinterpreted the stickers as a lane marking veering to the left, and it failed to detect that the lane it was now following actually led into oncoming traffic.
--- p.96
One autonomous weapons expert said: “It's not hard to imagine a world where algorithms, reacting faster than humans and antagonizing each other, are monitoring the situation …
if war were to progress at a pace that we, as humans, could no longer control, that would be truly horrific.”
It would be unsettling if an uncontrolled 'AI-versus-AI battle' were to break out between existing autonomous drones or tanks.
--- p.104~105
To quote Stephen Hawking, “Our future will be a race between the ever-growing power of technology and the wisdom to use it.”
--- p.117
We have privacy to the extent that we control whether others can observe us, interfere with us, learn specific information about us, or determine who we are.
On the other hand, lack of privacy means that you have no control over information about those aspects of your life.
--- p.123
Deepfakes (bots, videos, recordings, and photos designed to imitate humans) could trick you into believing they are real bank or IRS employees and into sharing your bank account information or Social Security number.
Fraudulent acts like this are called 'phishing' attacks.
--- p.132
Microsoft and Amazon have also warned their employees not to share sensitive information with ChatGPT, because that information may someday be included in its output.
--- p.136
As the headline of the Economist article declared, “The world’s most valuable resource is no longer oil, but data.”
--- p.138
Most privacy policies take about 8 to 12 minutes to read and are written in language that even experts find difficult to understand.
So it's not surprising that most people skip over them without reading them.
--- p.145
Consent will never protect privacy if you don't realistically understand what your choice means or if you are effectively manipulated into agreeing to the terms and conditions.
--- p.147
With the help of AI, even anonymized geographic location data can be easily re-identified: AI experts have demonstrated that, in a large dataset of hourly GPS data, just four spatiotemporal points are enough to identify 95 percent of individuals.
--- p.151
There are fundamental reasons why machine learning often exhibits bias.
One is that it is very difficult (and often expensive) to collect datasets that equally represent all demographic groups and interests, and a trained AI makes more accurate predictions about groups that are well represented in the data than about groups that are not.
--- p.160
The trial judge in Loomis's case needed to be able to judge wisely when (and how much) to rely on COMPAS's predictions in order to justify his finding that Loomis posed an "extremely high risk of recidivism."
Without this knowledge, judges may blindly accept or reject the algorithm's predictions, even when those predictions do not deserve their trust.
--- p.184~185
Let's say a hospital uses AI to detect cancer, but the AI makes a diagnostic error, causing the patient to receive unnecessary treatment or treatment too late.
So who is responsible for the misdiagnosis? If someone is liable for damages, who should it be? And what if the military uses AI to guide missiles and drones, but the AI-driven weapons kill innocent families instead of the targeted terrorists?
--- p.221
Despite centuries of effort, humans have not found a way to prevent moral errors.
Perhaps an idealized moral AI could avoid such mistakes.
Furthermore, if we learn how to correct for biases and mistakes in the training data and models of moral AI, its predictions and actions will better align with our own fundamental moral values.
--- p.247~248
How the story unfolds going forward will depend as much on human moral decision-making and moral intelligence as on artificial decision-making and artificial intelligence.
We need to be clear about our role when thinking about AI.
In short, in the story of moral AI, both the director and the lead character are human. AI is simply a being that accompanies us.
At least for now.
--- p.318
Publisher's Review
Why 'moral AI'?
The title, ‘Moral AI’, arouses curiosity.
Science fiction fans might be hoping this book will predict the emergence of a new type of artificial intelligence with morality, autonomy, and free will.
However, the 'moral AI' proposed in this book is not an artificial intelligence of such a distant future.
Broadly speaking, it is “AI that learns and implements human values,” and narrowly speaking, it is “a system that assists human moral judgment.”
A representative research case that demonstrates this possibility is 'AI that determines the priority of patients receiving kidney transplants.'
In hospitals, transplant priority is generally determined by 1) the transplant surgeon's medical judgment and 2) the medical and practical judgments (suitability, age, health, organ quality, waiting period, and so on) of the hospital staff who set kidney transplant policy; explicit moral judgments are excluded.
'Moral AI' learns in two ways: first, by programming the medical decisions a transplant surgeon would make if he were in ideal physical and mental condition, and second, by modeling the decisions of the group that decides the hospital's transplant policy.
Especially in the latter case, by involving not only hospital staff but also various citizens, including lawyers, patients, and non-experts, a common moral judgment can be built into an automated system.
This technological tool can help prevent surgeon error and bias, and ensure kidney allocation priorities are aligned with the moral values of the population.
This book presents current research ideas on how to extract moral characteristics, measure their weights, model moral judgments, synthesize group judgments, and idealize moral judgments to implement this technology in systems (see pages 239–253).
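To make that pipeline concrete, here is a minimal, hypothetical sketch of the kind of model the book describes: moral features are extracted for each patient, each participant's judgments are represented as feature weights, and the group judgment is synthesized by aggregating individual scores. The feature names, weights, and averaging rule below are invented for illustration; they are not the authors' actual system.

```python
# Hypothetical sketch only: feature names, weights, and the averaging rule are
# invented for illustration; this is not the authors' actual system.
from dataclasses import dataclass

@dataclass
class Patient:
    # Step 1: moral features extracted for each patient (values normalized to 0..1).
    name: str
    life_years_gained: float   # expected benefit from the transplant
    waiting_time: float        # how long the patient has waited
    medical_urgency: float     # how urgently the transplant is needed
    dependents: float          # how many people depend on the patient

# Step 2: each participant's moral judgments are modeled as feature weights.
# In practice these would be learned from each participant's choices over many
# paired comparisons, not written down by hand.
group_weights = [
    {"life_years_gained": 0.4, "waiting_time": 0.3, "medical_urgency": 0.2, "dependents": 0.1},  # surgeon
    {"life_years_gained": 0.2, "waiting_time": 0.4, "medical_urgency": 0.3, "dependents": 0.1},  # patient advocate
    {"life_years_gained": 0.3, "waiting_time": 0.2, "medical_urgency": 0.4, "dependents": 0.1},  # lay citizen
]

def individual_score(patient: Patient, weights: dict) -> float:
    """Score one patient under one participant's modeled moral judgment."""
    return sum(w * getattr(patient, feature) for feature, w in weights.items())

def group_score(patient: Patient) -> float:
    """Step 3: synthesize the group judgment by averaging individual scores."""
    return sum(individual_score(patient, w) for w in group_weights) / len(group_weights)

patients = [
    Patient("A", life_years_gained=0.9, waiting_time=0.2, medical_urgency=0.5, dependents=0.3),
    Patient("B", life_years_gained=0.4, waiting_time=0.8, medical_urgency=0.7, dependents=0.6),
]

# Rank patients by the aggregated group judgment (highest priority first).
for p in sorted(patients, key=group_score, reverse=True):
    print(p.name, round(group_score(p), 3))
```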
AI with human morality
It may still seem underwhelming to readers who expected the "emergence of AI with morality," but the potential of this technology is enormous.
There are two key points.
First, it can help people make moral judgments.
Imperfect humans can make moral mistakes at any time.
This is because we cannot escape “biases that unconsciously influence decision-making, such as cognitive biases, favoritism, and racial or gender biases.”
An “idealized moral AI” that removes such biases could help us make “more informed, rational, and unbiased moral judgments” even when we are imperfect (e.g., half-asleep or angry).
Second, it can help improve social injustice and prevent unethical decisions. Extending this technology, which embeds morality in AI, to other areas could lead to the development of "artificially enhanced democracy (AID)" systems that automate the moral judgments of community members (see pages 254-256).
In particular, if AID is incorporated into decision support tools, it can be utilized in a wide range of fields, not just the medical field.
For example, it could help human agents make the right choices when recruiters interview candidates or make hiring decisions, when military operators consider when and where to launch missiles, and in other areas where fairness and morality are at stake.
Meanwhile, the authors propose various practical measures ('moral AI' strategies) that can help AI developers strengthen ethical intervention from the early stages of AI product development while simultaneously disseminating 'moral AI' technology to various fields.
The core areas are “technology dissemination, organizational practices, education, civic engagement, and public policy,” and developers can use them as concrete guidelines for their “ethical AI” strategy, such as improving organizational culture, strengthening the role of ethics officers, establishing ethical indicators, and reflecting community values in AI products.
AI Technology and New Ethical Issues
The authors also present a broad range of new ethical issues surrounding artificial intelligence technology that we must consider today.
Here are just a few key examples from the cases highlighted in this book.
First, the issue of responsibility (the "accountability gap"). Who should be held accountable for safety accidents caused by AI? In 2018, a self-driving Volvo XC90 operated by Uber struck a pedestrian, Elaine Herzberg, during a test drive.
The vehicle's sensors and AI recognition system failed to recognize the pedestrian, and the test driver in the driver's seat was also looking away.
Who is responsible for Herzberg's death? The test driver? The car manufacturer? The Uber engineers who created the AI? Or is it that no one can be held accountable, because no one anticipated the consequences of the AI's behavior?
The problem of the 'accountability gap' can arise in any field where AI is used.
Because so many people are involved, it is unclear where, and how heavily, legal and moral responsibility falls.
For example, if an AI makes a diagnostic error in a hospital, resulting in unnecessary or delayed treatment for a patient, who will be held responsible? If the military uses AI to guide a drone strike, resulting in civilian casualties, who will be held accountable? To take a more troubling example, if a startup builds a medical consultation chatbot based on OpenAI's GPT model and the chatbot provides harmful advice that results in someone's death, who will be held accountable? The startup, or OpenAI, which provided the GPT model technology? The authors worry that if our society fails to address these issues of responsibility, social unrest is inevitable.
Second, the issue of privacy invasion.
Personal information can be easily leaked and used for criminal purposes.
For example, the combination of facial-recognition AI and location-tracking technology could allow someone who merely spots you on the street to uncover sensitive medical information about you, and AI deepfake technology could leave people vulnerable to 'phishing attacks' (imagine chatbots, videos, recordings, and photos pretending to be real bank or IRS employees).
But there is another privacy threat that the authors really focus on in this book.
It's an AI ecosystem that relies on "collecting as much personal data as possible, storing it permanently, and selling it to the highest bidder."
Many AI companies today are investing heavily in collecting personal data, as “the more diverse and extensive the training data, the more accurate the AI becomes and the more predictions it can make.”
The collection method is also clever.
Take PayPal, the US online payment company: if a user agrees to integrate PayPal with a website, his or her shopping history, personal preferences, photos, and disability status can be shared with that website. (Don't be fooled by the phrase “We do not sell your personal data” in the privacy policy; it does not mean “we do not share data with other companies.”)
Personal data collected in this way is used to train various AI models (such as facial recognition, text completion, or chatbot systems), and even the trained AI models can be rented or sold to other companies.
Meanwhile, because AI models by nature 'remember' some of the information in their training data (the 'inability to forget' phenomenon), it is not impossible for someone's personal information to be exposed.
“Microsoft and Amazon have also warned their employees not to share sensitive information with ChatGPT, because that information may someday be included in the output.”
Third, the issue of data bias. Data bias has long been a concern in the field of AI ethics.
“Every time we decide what data to collect, what information to feed into an AI algorithm, and how to respond to its predictions, we introduce human bias into the AI.”
AI models trained on biased data will inevitably produce biased results (“bias goes in, bias goes out”), and one study found that “common AI used for hiring, firing, promotions, home loans, and business loans often disadvantages applicants, particularly Black people, women, immigrants, people living in poverty, and people with disabilities, and skin cancer detection AI doesn’t work as well on people with dark skin (people of color).”
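The mechanism behind this claim can be illustrated with a small toy experiment (not from the book; the data, the two groups, and the single-feature 'model' below are all invented): when one group dominates the training data and another group follows a slightly different pattern, a model fit to the pooled data ends up systematically less accurate for the under-represented group.

```python
# Toy illustration of "bias goes in, bias goes out" (invented data, not from the book):
# a model fit mostly on one group is less accurate for an under-represented group.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, threshold):
    """Each group has a different true decision boundary on a single feature."""
    x = rng.uniform(0.0, 1.0, size=n)
    y = (x > threshold).astype(int)
    return x, y

# Group A dominates the training data; group B is under-represented.
xa, ya = make_group(950, threshold=0.5)   # well represented
xb, yb = make_group(50, threshold=0.7)    # under-represented, different boundary
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "Training": choose the single cutoff that maximizes accuracy on the pooled data.
cutoffs = np.linspace(0.0, 1.0, 101)
accuracies = [np.mean((x_train > c).astype(int) == y_train) for c in cutoffs]
best_cutoff = cutoffs[int(np.argmax(accuracies))]   # lands near 0.5, group A's boundary

def accuracy(x, y, cutoff):
    return float(np.mean((x > cutoff).astype(int) == y))

# Evaluate on fresh samples from each group: the majority group fares better.
xa_test, ya_test = make_group(1000, threshold=0.5)
xb_test, yb_test = make_group(1000, threshold=0.7)
print("accuracy for group A:", round(accuracy(xa_test, ya_test, best_cutoff), 3))
print("accuracy for group B:", round(accuracy(xb_test, yb_test, best_cutoff), 3))
```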
Fourth, the issue of transparency in algorithmic decision-making.
This is known to be a particularly troublesome problem in deep learning AI.
Deep learning AI has superior predictive capability compared with other AI technologies, but the problem is that its processing is a black box, nearly impossible to interpret or understand.
The controversy surrounding the 'Loomis v. Wisconsin' decision is a prime example.
Eric Loomis, who was charged with involvement in a shooting, received a heavy sentence after being deemed a “threat to the community” based on predictions from COMPAS (a risk assessment tool used in US sentencing courts).
Loomis appealed, arguing that COMPAS's "predictive model was so proprietary and complex that there was no realistic way to know how or why it arrived at its risk predictions, or to respond," but he ultimately lost.
But as the authors point out, “if the AI that generates the predictions is incomprehensible to anyone but AI experts, or even impossible for experts to understand… then the legal process will ultimately lose its fairness.”
The authors worry that AI's opacity will raise questions about fairness in many fields where the technology is used in the future.
The Competition Between the Power of Technology and Wisdom
Geoffrey Hinton, winner of the 2024 Nobel Prize in Physics, warned that “AI could lead to the extinction of humanity within 30 years,” yet as the economic value of AI becomes clear, a very different mood is taking hold.
Some might argue that ethical concerns are idle talk at a time when the entire world is racing desperately to develop new AI products.
Some companies may think that releasing a product first and addressing issues as they arise (the 'lean-agile' approach) has long been the practice of IT companies, and that it has worked well.
That is what this book warns us about.
“If companies don’t feel responsible for the harm caused by AI, they will feel increasing pressure to build AI products that are potentially unsafe but still profitable.” Given the destructive power of AI, the scale of the harm will go beyond the individual company.
This is why many governments and institutions today are rushing to predict the safety and potential risks of AI, establish "AI ethics" principles, and pursue research and institutional efforts to prevent the loss of human control over machines in the long term (in Korea, an AI Safety Institute was established at the end of 2024, and the AI Basic Act passed the National Assembly's plenary session).
Perhaps what we should strive to equip AI with is not more powerful cutting-edge technology, but rather human 'morality' that will safeguard the future of humanity.
As Stephen Hawking said, “Our future will be a race between the ever-increasing power of technology and the wisdom with which we use it.”
The development of 'moral AI' that learns and embodies human values could be the key to that wisdom.
As the authors note, for now, "both the director and the lead actor in the story of moral AI are humans." However, if the first half of this story ends without our having considered the values we intend to instill in artificial intelligence, humanity's place in the second half of the story, written by an overwhelmingly powerful AI, may be in grave danger.
Product Details
- Publication date: January 20, 2025
- Page count, weight, size: 360 pages | 143×210 mm
- ISBN-13: 9791194330639