
AI and Human Rights
Description
Book Introduction
This book explores the potential human rights violations that artificial intelligence technology may cause and examines solutions from various angles.
It reminds us again that technological advancement should never compromise human dignity.
Artificial Intelligence Encyclopedia.
You can find the artificial intelligence knowledge you need at aiseries.oopy.io.
- You can preview some of the book's contents.
Preview
Index
The Encounter of AI and Human Rights
01 Human Rights in the AI Era
02 AI and Fundamental Rights: Human Dignity and the Right to Equality
03 AI and Civil Rights: Self-Determination of Information, Copyright
04 AI and Women's Rights
05 AI and Workers' Rights
06 AI and the Rights of Persons with Disabilities
07 AI Ethics vs. AI Law
08 International Projects Towards Guaranteed Norms
09 Domestic Efforts to Ensure Standards
10 AI in the Age of Human Rights
Into the book
I feel we are entering a technological era unlike any before, one in which AI takes away human jobs and makes entire professions disappear.
Many scientists still say that there is a long way to go before AI with a 'mind' is created.
Yet despite the good news that AI technology will bring a bright future and that a new industrial revolution for future generations is just around the corner, is it only ordinary people, and especially the socially disadvantaged and minorities, who feel a sense of foreboding?
--- From "01_Human Rights in the AI Era"
In 2017, a UK health trust shared the data of 1.6 million patients with Google DeepMind without their proper consent, and from 2013 to 2018, Cambridge Analytica harvested the personal data of up to 87 million Facebook users without their knowledge or consent for use in political advertising. The practice of using publicly available images to create AI facial recognition databases is already well known.
A facial recognition technology that draws on a database of 10 billion images scraped from the internet for law enforcement purposes has been found to violate privacy laws in several countries, including Australia, Canada, France, and the UK.
--- From "03_AI and Civil Rights: Right to Self-Determination of Information, Copyright"
In December 2023, researchers at Pennsylvania State University's College of Information Sciences and Technology (IST) reported that social media content can worsen AI bias against people with disabilities.
The researchers demonstrated that AI models can learn not only intentional biases but also unconscious negative stereotypes about people with disabilities.
The team further investigated how 'online communities' affect bias in natural language processing (NLP) models.
They found that AI trained on posts and comments from online communities showed a markedly more negative bias toward disability.
“All the AI models we studied categorized sentences as negative and harmful when disability-related terms like ‘blind’ were included, without considering the contextual meaning,” said Venkit, an IST researcher. “This shows an explicit bias against disability-related terms.”
--- From "06_AI and the Human Rights of Persons with Disabilities"
In her 2021 “Report on the Right to Privacy in the Digital Age,” the UN High Commissioner for Human Rights recommended “systematic human rights due diligence throughout the life cycle of the design, development, deployment, sale, purchase, and operation of AI systems,” emphasizing “human rights impact assessments” as a key element of human rights due diligence.
The National Human Rights Commission of Korea likewise expressed its opinion to the Minister of Science and ICT in May 2024 that the "AI Human Rights Impact Assessment Tool" should be used when establishing and implementing related policies and business plans, in order to prevent human rights violations and discrimination arising from the development and use of AI. It further recommended that, until human rights impact assessments are legislated, the tool be disseminated so that voluntary assessments can be conducted on all AI directly developed or used by public institutions, as well as on high-risk AI in the private sector.
--- From "09_Domestic Efforts for Guaranteed Standards"
Publisher's Review
Where Are Human Rights in the AI Age?
As artificial intelligence becomes deeply embedded in our daily lives and society, technological advancements must be discussed alongside the standards of human rights.
This book specifically identifies how AI can violate human dignity and rights in various areas, including recruitment, education, surveillance, and algorithmic judgment.
In particular, it critically analyzes the impact on socially vulnerable groups such as women, persons with disabilities, and workers, and the discrimination caused by biased data.
Comparing the latest legislative trends in the UN, EU, US, UK, and South Korea, this book presents the necessity and implementation of "technology control for rights." Beyond discussing the ethics of AI, it offers practical guidance, presenting real-world harms and institutional responses.
Let us take the first step toward a society where technology serves people, not technology for its own sake.
Product Details
- Date of issue: July 15, 2025
- Page count and size: 145 pages | 128 × 188 × 8 mm
- ISBN13: 9791173076893