
Irresponsible AI
Description
Book Introduction
AI given the authority to make decisions on behalf of humans
Because of irresponsible AI
Innocent civilians are being arrested, convicted, or even killed.
Is it really because of AI?
An AI without eyes, nose, mouth, or even a heart.
But why do humans trust AI so much, willingly following its decisions without question? Considering that humans created AI, this blind obedience to AI is almost inexplicable.
"Irresponsible AI" serves to correct the current phenomenon of blind faith in AI that is spreading indiscriminately across the globe by providing specific examples, saying, "After all, AI is just a machine, and it can only function properly through thorough human management and supervision."
Moreover, it takes away the shield status from AI, which we have been using as a shield, avoiding responsibility by saying, “It’s not my fault, the AI did it,” and forces us to face the painful but true proposition that “all heavy responsibility lies with humans.”
Author Katrina Zweig does not directly address the phenomenon of AI making mistakes and the human mind blindly trusting them.
However, through specific and skillful step-by-step case presentations, the author leads the reader to infer for himself what might be happening in the depths of human nature, much like predicting the culprit while reading a mystery novel.
Table of Contents
Chapter 1: A World Judged by Artificial Intelligence
Part 1: How Machines Make Decisions
Chapter 2: Gender Discrimination in Algorithms
Chapter 3: The ABCs of Information Science
Chapter 4: How Do Machines Learn to Assess Creditworthiness? Algorithms, Heuristics, and Models
Chapter 5: Part 1 Summary
Part 2: Problems Created by Artificial Intelligence
Chapter 6: Unable to Recognize a Face
Chapter 7: The Man Unjustly Arrested
Chapter 8: Why Can't I Find Home?
Chapter 9: Where Did My Money Go?
Chapter 10: How to Detect Depression on Instagram
Chapter 11: Why Did ChatGPT Make Me Hitler's Right-Hand Man?
Chapter 12: Why Did Elaine Herzberg Have to Die?
Chapter 13: Part 2 Summary
Part 3: When You Need to Know Why This Happened
Chapter 14: Verifiable and Unverifiable Decisions
Chapter 15: When Is a Speech Act Successful?
Chapter 16: Can a Computer Grade My Writing?
Chapter 17: Why Was Your Account Suddenly Suspended?
Chapter 18: Am I a Terrorist?
Chapter 19: Artificial Intelligence and the Learning of "General Procedures"
Chapter 20: Can Value Judgments Be Automated?
Chapter 21: Part 3 Summary
Part 4: How Will We Make Decisions Going Forward?
Chapter 22: What Are the Benefits of Using Automated Decision-Making Systems?
Chapter 23: When Can I Use My Algorithm?
Chapter 24: Clever Hans, How Can You Do That?
Chapter 25: Objections Are Worth Raising
Chapter 26: Part 4 Summary
Inside the Book
For the first time ever, machines are beginning to evaluate human behavior.
Most of them are machines called 'artificial intelligence' (AI for short).
As artificial intelligence has made significant progress in recommending products in online stores and translating foreign languages, the idea that computers could evaluate humans in more complex situations than shopping has spread in the fields of information science and economics.
I have been working in computer science for a long time, but I do not agree with this idea.
And I'm not the only one who doesn't think so.
Nevertheless, decision-making systems are used in many ways today.
There are machines that grade essays written by students, and there are machines that assess the likelihood of criminals reoffending.
There are even machines that assess how well candidates are suited to the job and whether they will be successful in the future.
--- "Chapter 1: A World Judged by Artificial Intelligence"
Even if the algorithm itself is accurate, it may not be appropriate for the situation.
So it's not enough for banks to just "blame the algorithm."
It must be clear that the algorithm is rational in itself and appropriate to the situation.
Using an algorithm to calculate your credit limit doesn't guarantee that it's an accurate way to assess your creditworthiness.
Just because it is calculated using an algorithm doesn't mean the decision quality is good.
(Omitted) In other words, before a machine calculates a statistical model and makes a mechanical decision based on it, behind every algorithm there is first a model in the developers' heads.
And to understand and trust the decisions computed by machines, most people need to understand these two models: the human model and the machine model.
--- "Chapter 2: Gender Discrimination in Algorithms"
If there was a discrepancy between the information entered by the employer and the information entered by the welfare applicant, the Midas system determined that this was a case of fraud by the applicant.
This error could occur even if you applied for social welfare benefits legitimately.
(Omitted) This does not happen when human staff are reviewing.
This was a modeling error in the software, largely the result of developers unfamiliar with social welfare law building the software carelessly.
But now the problem is that not only are machines making hasty decisions, but they are also executing them immediately.
This process has been dubbed "robo-adjudication."
So letters suddenly went out to people who had overnight been branded fraudsters, demanding that they repay five times the amount they had received, plus 12 percent interest.
In this regard, the Midas system automatically issued approximately 50,000 notices over 18 months from October 2013.
According to Gray and Shaffer, claims based on notices sent through 2015 amounted to approximately $57 million.
It's truly amazing.
--- "Chapter 9 Where Did My Money Go?"
I asked ChatGPT about the Wikipedia page for Hans-Jakob Ermendinger, a character I made up.
“Can you point me to Hans-Jakob Ermendinger's Wikipedia page?” And, as expected, ChatGPT spat out a string of plausible-looking URLs.
https://de.wikipedia.org/wiki/Hans-Jakob_Ermendinger, it said.
But of course, there is no such page on Wikipedia,
because Ermendinger is a character I had just made up.
(Omitted)
Even when ChatGPT cites accurate sources, it cannot "verify" what on that page is actually true.
This is why you cannot trust the truth of the information that ChatGPT pours out.
Let ChatGPT generate text only when you can tell for yourself what is right and what is wrong.
--- "Chapter 11: Why Did ChatGPT Make Me Hitler's Right-Hand Man?"
The text was the same, but the score dropped simply because the request was phrased differently.
When you leave the evaluation to a machine, you expect a consistent evaluation based purely on the text.
But instead, we encounter a phenomenon that might be called a reaction to 'framing' or anticipatory behavior.
If the questioner's opinion is added, the evaluation of the same text becomes worse.
On the other hand, ChatGPT did exactly what it was supposed to do:
it generated the kind of text that is typically produced when a poor evaluation is expected.
My point in querying ChatGPT is not to show that ChatGPT lacks quality.
Rather, it is to show that ChatGPT's evaluation is shaped largely by the content of each prompt rather than by the text itself.
--- "Chapter 16: Can a Computer Grade My Writing?"
“If you look at how Edward Allen reacted when he heard that the government had placed him on the no-fly list, you can tell right away that he is not a terrorist.
He said:
'I don't want to be on that list.
I'm going to get on a plane and go see my grandmother!'” Allen was four years old.
In this case the child was eventually allowed to board because he was four years old, but in similar situations such an exception is rarely made.
The fear of failing to stop an act of terrorism was simply too great.
In other words, many lists were hastily compiled and, in some cases, placed greater emphasis on sensitivity than on specificity.
That is, because names were added rather than omitted whenever there was any doubt, some children ended up on these lists.
--- "Chapter 18: Am I a Terrorist?"
Who will make decisions in the future? And how will we decide who decides, how, with whom, or about what? The case studies in this book have shown that AI decisions are not inherently objective or optimal.
Because every decision is shaped by many modeling choices, modeling-related noise is introduced.
It is also not easy to select and provide, in an appropriate way, the examples a machine needs.
We need to provide enough instances for every combination of properties that occurs in reality; otherwise we risk introducing noise through insufficient specification.
(Omitted) A fundamental understanding of the modeling process inside the machine is not, by itself, enough.
So I believe we should now begin to study machine behavior as we study human or animal behavior.
Just as we study human and animal behavior, it is time for a new scientific discipline to emerge that studies the behavior of computers.
This discipline cannot be called 'computational psychology'.
--- "Chapter 23: When Can I Use My Algorithm?"
Publisher's Review
☆ A Der Spiegel Bestselling Author ☆
Cases where minor errors in AI can lead to fatal threats
- Why is my card limit like this?
“We don’t discriminate against anyone! The algorithm just decided that!”
This was the defense David Heinemeier Hansson, who appears in this book, received from Apple and Goldman Sachs employees when he asked why the Apple Card that he and his wife Jamie had applied for at the same time, on their Apple devices, gave him a $1,000 limit while hers was only $50.
Even though Jamie was a millionaire with a higher credit rating than her husband, David's limit was a whopping 20 times higher than hers.
When he raised the issue, Apple and Goldman Sachs, which issues the card in partnership with Apple, gave one and the same answer: it was "because of the AI."
Isn't that far too poor an answer from large corporations that manage the money and credit of countless people?
In the end, David began to ask whether the algorithm was discriminating on the basis of gender.
But the deeper he dug into the matter, the more perplexing it became.
“There was nowhere to ask exactly what data had been fed into the algorithm, or whether incorrect information had crept in along the way.” In other words, no one could explain exactly how the AI reached its decision when someone's data was entered, or what differences led to different credit limits.
That is understandable in a sense, since no one knew exactly what happened inside the AI once the data had been entered.
And that, ultimately, was the fundamental reason people could offer no clear explanation and could only keep parroting, "It's because of the AI."
- The man identified by AI as the culprit
“This is not me.”
“But the computer says this is you.”
This is the story of Robert Williams, who was wrongfully arrested because of an AI misidentification.
The charge was the theft of a watch, and the blurry CCTV footage from the scene was the entire basis for it.
The footage showed only the side profile of a large man, so indistinct that it was hard even to tell whether he was Black or simply dark-skinned; on the strength of that footage, Williams, who was Black and relatively heavily built, was arrested.
Despite having a clear alibi, his protests were dismissed, and he was released only after 30 hours in custody.
Williams later sued the police, and what the lawsuit revealed was shocking.
No matter what was fed into the software, it was programmed to return someone as the "most similar person," which meant that someone in the database had to be the culprit. The technology that people assumed would weigh every possibility when identifying criminals had in fact been built with no "no culprit" option: it would always point to a living person.
On top of this inherent limitation of the program, the situation was made worse by the fact that, although the technology had been deployed in the field, the people using it had never been trained in how to handle its output.
And this despite guidelines stating that the program's results could be treated only as investigative leads and had no evidentiary value.
How could such an absurd incident, one grave enough to brand a model citizen a criminal, have happened? Is it really the AI's fault?
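The missing "no culprit" option described above is, at bottom, a one-line modeling decision. Here is a minimal, hypothetical sketch (not the actual investigative software) of the difference between returning the best match unconditionally and returning a match only when it clears a similarity threshold:

```python
# Hypothetical sketch of the modeling decision described above -- not the real software.
from typing import Optional

def best_match_always(scores: dict[str, float]) -> str:
    """Flawed design: whatever the scores, *someone* is returned as 'most similar'."""
    return max(scores, key=scores.get)

def best_match_with_threshold(scores: dict[str, float],
                              threshold: float = 0.90) -> Optional[str]:
    """Alternative design: allow the answer 'no one in the database matches'."""
    candidate = max(scores, key=scores.get)
    return candidate if scores[candidate] >= threshold else None

# Invented similarity scores against a blurry image; nobody is actually a good match.
scores = {"suspect_A": 0.41, "suspect_B": 0.38, "suspect_C": 0.44}

print(best_match_always(scores))           # 'suspect_C' -- someone must be the culprit
print(best_match_with_threshold(scores))   # None -- 'no match' is a possible outcome
```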
- First fatality in a self-driving car
“Why, in Uber's world, do all pedestrians cross the street only at official crosswalks?”
Let's look at the story of Elaine Herzberg, the first pedestrian killed by a self-driving car.
This is a prime example of what can happen when automated decision-making systems are used in incredibly complex systems like modern automobiles, and it's also a wake-up call to the importance of even the smallest modeling decisions made by development teams.
A state-of-the-art Uber vehicle, equipped with far more sensors than a regular car and able to react much faster than any human, struck a woman who was crossing the road with her bicycle on her way home late at night.
The vehicle even had a human operator on board, part of a "human in the loop" design meant to let a person supervise the machine and intervene immediately if there was a problem.
The vehicle's system predicts an object's path by measuring its speed and expected direction of motion.
According to the report, the first path prediction for bicycles and cars assumed the natural "goal" of continuing in the direction of the lane.
In other words, to the machine, cars and bicycles drive along the road; they do not "cross" it.
For pedestrians, no such goal existed unless there was a crosswalk.
In other words, the machine simply had no option for a "jaywalking pedestrian."
This modeling decision made it impossible for the vehicle's system to consider a pedestrian crossing the road at an arbitrary point.
So what was the human operator inside the vehicle doing? It turned out that, while the designers had assumed the operator would always keep their eyes on the road, the operator had been looking down at the phone in their lap.
It also emerged that the engineers, assuming a human operator would intervene in extreme situations, had configured the system not to apply the brakes for a full second, and had never told the operator this.
So who is responsible for Herzberg's death? Was it artificial intelligence?
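The modeling gap the report describes, where paths are predicted from an object's class-specific "goal" and a pedestrian away from a crosswalk has none, can be sketched roughly as follows. This is a deliberately simplified, hypothetical illustration of that one decision, not Uber's actual perception or prediction code.

```python
# Deliberately simplified, hypothetical sketch of the modeling decision described
# in the report -- not Uber's actual software.
from enum import Enum, auto

class ObjectClass(Enum):
    CAR = auto()
    BICYCLE = auto()
    PEDESTRIAN = auto()

def predicted_goal(obj: ObjectClass, near_crosswalk: bool) -> str:
    """First-pass path prediction: where is this object assumed to be heading?"""
    if obj in (ObjectClass.CAR, ObjectClass.BICYCLE):
        # Cars and bicycles are assumed to travel along the lane, never across it.
        return "continue along the lane"
    if obj is ObjectClass.PEDESTRIAN and near_crosswalk:
        return "cross at the crosswalk"
    # The gap: a pedestrian away from a crosswalk has no crossing goal at all,
    # so a 'jaywalking pedestrian' is simply not a state the model can represent.
    return "no crossing expected"

print(predicted_goal(ObjectClass.PEDESTRIAN, near_crosswalk=False))
# -> "no crossing expected"
```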
Finding Human Agency in Cyberspace: Understanding AI's Blind Spots
- Algorithms, heuristics, and the 'black box'
Most of the machine procedures that many people call 'algorithms' actually rely on heuristics.
An algorithm is a procedure for solving a clearly defined problem with an optimal solution, but real-world problems are incomplete and complex.
So, artificial intelligence, especially machine learning, finds patterns in large amounts of data, creates rules, and makes decisions based on those patterns.
This process involves numerous subjective modeling decisions, including data selection, measurement methods, and model design, resulting in the system becoming a "black box" that is difficult to observe from the outside.
While the media often uses the word "algorithm" as a synonym for code or software, the real problem Professor Zweig focuses on is the decision rules formed from training data and heuristic processes.
Knowing the calculation procedure alone does not sufficiently answer the question 'why'.
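The distinction drawn here, between a transparent calculation procedure and the decision rule that actually emerges from training data, can be illustrated with a deliberately tiny sketch. The procedure below (a one-nearest-neighbour heuristic on invented credit data) is completely readable, yet the answer to "why was this applicant rejected?" lives in the chosen training examples and distance measure, not in the few lines of code.

```python
# Tiny illustration: the *procedure* is transparent, but the *decision rule* is
# whatever the training data and the chosen distance measure make it.
# The applicant data below is invented for illustration.

training_data = [
    # (income in thousands, years at current job) -> past decision
    ((52, 8), "approve"),
    ((61, 2), "approve"),
    ((23, 1), "reject"),
    ((30, 4), "reject"),
]

def decide(applicant: tuple[float, float]) -> str:
    """1-nearest-neighbour heuristic: copy the label of the most similar past case."""
    def distance(example):
        (income, tenure), _label = example
        return (income - applicant[0]) ** 2 + (tenure - applicant[1]) ** 2
    _, label = min(training_data, key=distance)
    return label

print(decide((40, 3)))   # -> "reject"; the verdict depends on which past cases were collected
```

Swap in four different past cases and the same few lines of code return a different verdict for the same applicant, which is exactly why knowing the calculation procedure alone does not answer the question "why".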
- A long chain of wrong decisions
The book then provides a wealth of examples of bad AI decisions.
Gender and racial bias in facial recognition, the arrest of innocent people due to data contamination, and the failure of self-driving cars to recognize pedestrians all occur within a “long chain of responsibility.”
The possibility of error exists at every stage: data collection, algorithm selection, quality measurement, result interpretation, and execution.
The important thing here is that artificial intelligence itself has neither consciousness nor intention.
The source of error lies in people: in the human choices and decisions made in designing and operating the system.
However, some errors are emergent phenomena that occur unexpectedly as data and methods interact, making it difficult to pinpoint a single cause.
- Between verifiability and unverifiability
Professor Zweig divides automated decisions into three types based on their verifiability.
First, decisions that occur so infrequently that learning itself is impossible.
Second, decisions that can be evaluated for accuracy and fairness at the individual and group levels.
Third, decisions that involve value judgments, or that cannot be immediately verified at the individual level but can be checked after the fact through group statistics.
Zweig raises an important question, especially in the last type.
“Are machine judgments better than human judgments?” Even if we do not fully understand them, machines have potential for use if they are more consistent and fair.
However, she warns that the category of "verifiable in principle" is not clear-cut in practice, and that verification can become virtually impossible when it is too expensive and time-consuming.
Prometheus, the human who obtained the invisible fire called AI
Whether it's utopia or dystopia ultimately depends on human choice.
Just over three years after the commercialization of AI, humanity is now faced with a crucial decision. Is AI a threat? Or is it a magic wand? Amidst the polarizing debate, what must we do now to achieve the future we truly desire?
After years of reflection, Katharina Zweig presents not a single, unified solution, but rather basic principles that apply in all cases.
Verifiable decisions require regular quality checks, transparency in data use, and a rapid appeals process.
At the same time, interdisciplinary research is essential to understand and design the social and technological consequences of humans and machines making decisions together.
Collaboration across diverse fields, including psychology, ethics, philosophy, sociology, and law, is necessary, and information scientists must actively participate in system design to ensure that technical requirements are meaningfully reflected.
The world's first comprehensive 'Artificial Intelligence Act', passed by the European Parliament in March 2024, is the first step toward institutionalizing this direction.
Questions Readers Should Ask
Professor Zweig also gives a role to the readers.
If a new AI-based decision-making system emerges, we must ask:
- Is this decision verifiable?
- What is needed to verify it?
- Are the results fair and not discriminatory against any particular group?
- If only statistically verifiable, are machine decisions truly superior to human ones?
- How should a structure be designed in which humans and machines make decisions together?
The author emphasizes that "if we can answer these questions in each case, we are more likely to make better decisions."
Irresponsible AI is not simply a critique of technology.
By dissecting the scientific structure of algorithms, data, modeling, and verification, the book demonstrates the real-world impact of artificial intelligence on society.
Professor Zweig's writings, combining the rigor of a scientist with the responsibility of a citizen, serve as a guide for those living in the AI era to protect themselves and create better decision-making structures.
Making better decisions with machines starts with the questions we ask now.
Katharina Zweig is one of the most prominent scholars in the field of artificial intelligence.
- Der Spiegel
Information scientist Katharina Zweig says it's up to us to figure out where algorithms are helpful and where they aren't, and to use them meaningfully.
Because machines make mistakes too.
- Süddeutsche Zeitung
We live in an age where we have to live with technologies that are not easy to understand.
This book talks about the new challenge of coexisting with artificial intelligence.
- Frankfurter Rundschau
In this book, Katharina Zweig, a professor of information science, provides a compelling and fascinating look at what could happen if businesses and governments become overly reliant on artificial intelligence.
- Focus Money
Irresponsible AI is a truly fascinating book.
In this book, Katharina Zweig emphasizes that, above all, we should not uncritically tolerate artificial intelligence evaluating human behavior.
- Mainhattan Courier
Product Details
- Publication date: September 30, 2025
- Page count, weight, size: 324 pages | 490 g | 150 × 215 × 20 mm
- ISBN13: 9791194706014
- ISBN10: 1194706010