AI Atlas
€31,00
Description
Book Introduction
The topography of artificial intelligence at a glance, like a map
This book examines the questions of wealth, power, and justice that surround AI and searches for alternatives.


What happens when AI becomes deeply embedded in politics and economics, depleting the planet's resources? How is AI influencing the way we understand ourselves and society? Kate Crawford, a leading researcher on the social implications of AI, believes, based on over a decade of research, that AI is a technology of extraction.
Modern AI systems depend on access to Earth's energy and mineral resources, cheap labor, and massive amounts of data.
This book is a journey through the places where AI is actually made: lithium mines in Nevada, Amazon warehouses, Chicago slaughterhouses, data centers, image databases, mountain villages in Papua New Guinea, the Snowden archive, and a rocket base in West Texas.
It exposes how these global networks amplify undemocratic governance and inequality, offering a material and political account of what it takes to build AI and how AI centralizes power. Drawing on a wide range of sources and expert views, it compellingly shows what is at stake as technology companies use AI to reshape the world.

Contents
Introduction
The World's Smartest Horse | What is AI? | Why AI Should Be Seen as a Map | The Topography of Computation | Extraction, Power, and Politics

1 Earth
Mining for AI | The Landscape of Computation | Mineralogical Layers | Black Lakes and White Latex | The Illusion of Clean Technology | The Layers of Logistics | AI as a Giant Machine
2 Labor
The Prehistory of Workplace AI | Potemkin AI and the Mechanical Turk | Ideas of Disassembly and Workplace Automation: Babbage, Ford, and Taylor | The Chicago Slaughterhouses | Managing Time, Privatizing Time | Private Time as a Strategy of Power | The Ruthless Rhythm of Speed
3 Data
Training Machines to See | A Brief History of the Demand for Data | Capturing Faces | From the Internet to ImageNet | No Consent Required | Data Myths and Metaphors | Who Cares Where the Rocket Lands | Capturing the Commons to Become a Billionaire
4 Classification
Systems of Circular Logic | The Limits of Debiasing Systems | The Many Definitions of Bias | Training Sets as Classification Engines: The Case of ImageNet | The Power to Define "People" | Constructing Race and Gender | The Limits of Measurement
5 Emotions
The Emotion Prophet: When Feelings Become Money | The World's Most Famous Physiognomist | Emotions: From Physiognomy to Photography | Capturing Emotion: The Art of Performing Feelings | Do Facial Expressions Really Express Emotions? | The Politics of the Face
6 State
The Third Offset Strategy | Project Maven | The Outsourcing of the State | From Terrorist Credit Scores to Social Credit Scores | The Transnational, the Nation, and Everyday Life
Conclusion: Power
A Game Without Limits | AI's Pipeline | The Map Is Not the Territory | Toward Solidarity for Justice
Coda: Space

· Acknowledgements
· Translator's Note
· Notes
· References
· Index


Inside the Book
Data centers are among the world's largest consumers of electricity.
Powering this multi-layered machine requires electricity from coal, gas, nuclear and renewable sources.
Some companies are increasingly taking a proactive approach to the energy consumption of large-scale computing: Apple and Google have declared themselves carbon neutral (meaning they offset their emissions by purchasing carbon credits), and Microsoft has pledged to become carbon negative by 2030.
That pledge came after employees pushed the company to reduce its overall emissions rather than simply buying indulgences to ease its environmental guilt.
Moreover, Microsoft, Google, and Amazon are all granting fossil fuel companies access to their AI platforms, engineering talent, and infrastructure to help them mine and extract fuel from the ground, further fostering the industries most responsible for human-caused climate change.

---From "Earth 1"

Now employers can monitor their workforce without having to physically tour the factory.
Workers clock in by swiping their badges or placing their fingerprints on readers attached to electronic time clocks.
A timer in front of them counts down, in minutes and seconds, the time remaining to complete the current task.
Sensors on workers' bodies constantly report on things like their body temperature, their physical distance from coworkers, and the amount of time they spend browsing websites instead of performing their assigned tasks.
WeWork, the coworking space giant that fell into disarray in 2019, secretly installed surveillance equipment in its workspaces as it sought new ways to profit from data.
When it acquired the spatial analytics startup Euclid in 2019, there were concerns that it was planning to track the movements of its paid members through its facilities.
Domino's Pizza has installed machine vision systems in its kitchens to inspect finished products to ensure employees are making pizzas according to specified standards.
The rationale for installing surveillance equipment is to feed information into algorithmic scheduling systems, to tease out behavioral signals that might correlate with high or low performance, or to sell the information to data brokers.

---From "2 Labor"

Fundamentally, the long-standing practice of data accumulation has fostered a logic of powerful extraction that is now a core feature of how the field of AI operates.
This logic has fattened tech companies with the largest data pipelines, while the space free from data collection has shrunk miserably.
As Vannevar Bush predicted, machines have a huge appetite.
However, what a machine is fed, and how, profoundly shapes how it interprets the world, and the priorities of its owners always shape how that way of seeing is monetized. Examining the layers of training data that shape and inform AI models and algorithms reveals that collecting and labeling data about the world, though presented as a purely technical act, is in fact a social and political intervention.

---From "3 Data"

Automatic emotion detection systems are now widely adopted and are particularly active in the field of recruitment.
A London startup called Human uses emotion recognition to analyze video interviews of job applicants.
According to a report in the Financial Times, the company claims it can identify personality traits by capturing the emotional expressions of job applicants.
It then scores applicants on traits such as honesty and work ethic.
HireVue, an AI recruiting company whose clients include Goldman Sachs, Intel, and Unilever, uses machine learning to assess facial cues to estimate people's job suitability.
In 2014, the company launched an AI system that extracts variables like micro-expressions and tone of voice from video job interviews, using this to compare job applicants to the company's top performers.

---From "5 Emotions"

While government contracts for AI systems are growing rapidly, little attention has been paid to whether private AI technology companies should be held legally liable when these systems cause harm in government use.
As governments increasingly rely on contractors to supply the algorithmic architectures that power their decision-making processes, whether for security or welfare, technology contractors such as Palantir have faced allegations of misconduct, including discrimination.
Most governments currently refuse to accept any responsibility for problems caused by the AI systems they procure, arguing that they "cannot be responsible for what they do not understand."
This means that commercial algorithmic systems are interfering with government decision-making processes without meaningful accountability mechanisms.
Together with legal scholar Jason Schultz, I have argued that developers of AI systems that directly influence government decisions should be treated as state actors and, in certain contexts, bound by constitutional obligations.
Only then can they, like the state itself, be held legally responsible for the harms they cause.
Until then, suppliers and contractors will have little incentive to ensure that their systems do not compound historical harm or create entirely new harm.
---From "6 State"

Publisher's Review
How is AI changing the world we live in?
“Artificial intelligence is neither artificial nor intelligent!”


How do people today understand "AI" (artificial intelligence)? Do they see it as the pinnacle of modern science and technology, or as a wondrous entity that can instantly solve problems humans could never crack? This book argues that such views are an illusion born of blind trust.
Kate Crawford, co-founder of the AI Now Institute at New York University and a long-time researcher on the social implications of artificial intelligence, takes a close look in this book at the process of creating artificial intelligence and delves into the pitfalls it creates.
Many people still believe that systems similar to the human mind can be created from scratch and that artificial intelligence is something that exists naturally and independently, but this is an overly simplistic and narrow-minded perception.

This book argues that AI is neither “artificial” nor “intelligent.”
Rather, AI is embodied and material intelligence, created through natural resources, fuel, human labor, infrastructure, logistics, history, and classification. AI systems are neither autonomous nor rational, and cannot discern anything without extensive and intensive training using large data sets or existing rules and rewards.
In fact, AI as we know it depends entirely on a much broader political and social structure.
Furthermore, because AI requires capital to build at scale and methods to optimize it, AI systems are ultimately designed to benefit vested interests.
In this sense, artificial intelligence is a registry of power.

This book broadly examines how artificial intelligence is created and explores the economic, political, cultural, and historical forces that shape it. Connecting AI to these broader structures and social systems allows us to break free from the conventional wisdom that AI is purely a technological domain.
At a fundamental level, AI is at once technical and social practice, institution and infrastructure, politics and culture.
Computational reason and embodied work are deeply intertwined: AI systems both reflect and produce social relations and ways of understanding the world.

In this book, AI is used to mean 'a large-scale industrial structure encompassing politics, labor, culture, and capital.'
In practice, the term "artificial intelligence" is used above all as a marketing label.
It is thrown around in funding applications, when venture capitalists show up with checkbooks, or when researchers want to attract media attention for new work.
As a result, the term's meaning is constantly shifting as it is adopted and discarded.

But how can a map help us understand how artificial intelligence is made? The metaphor of the map offers a new way of seeing AI.
We need an AI discourse that explains the nations and corporations that drive and control AI, the extractive mining that scars the planet, the massive data collection, and the unequal and exploitative labor practices that underpin it.
A geospatial approach offers a new perspective and scale that goes beyond the abstract promises of artificial intelligence or modern machine learning models.
The goal is to understand AI in a broader context by exploring the various terrains of computation and how they are connected.

What exists in the field of AI is not a black box to be opened, a secret to be exposed, but rather a myriad of intertwined systems of power.
Therefore, complete transparency is an impossible goal. To better understand the role AI plays in the world, we must pay attention to material structures, contextual environments, and dominant political characteristics, and trace how they are interconnected.
The ideas in this book draw on the author's nearly a decade of experience working in AI research labs in academia and industry, and on backgrounds in science and technology studies, law, and political philosophy.

How is AI concentrating wealth and power?
Resource mining, labor rights, privacy, the state and corporations, inequality… all of these issues surrounding AI.


This book defines artificial intelligence as an “extractive industry.”
Creating modern AI systems requires exploiting the Earth's energy and mineral resources, cheap labor, and massive amounts of data.
To observe this happening in action, the author travels to places where AI is actually being created.
A good starting point for understanding what artificial intelligence is and what it's made of is a lithium mine in Nevada, USA, one of several mineral mines needed to power computers.


Mining is the sector that most clearly demonstrates the political nature of AI's extraction.
The tech sector's demand for rare earth minerals, oil, and coal is enormous, but the AI industry bears nowhere near the true cost of mining these resources.
On the software front, building models for natural language processing and computer vision requires enormous amounts of energy, and the race to create faster and more efficient models has led to greedy computational techniques that increase AI's carbon footprint.
From the last trees in Malaysia harvested to produce the latex needed for the first transatlantic submarine cable to the massive artificial lakes in Inner Mongolia where toxic residues accumulate, we trace the environmental and human birthplaces of our global computing network and examine how these actions are transforming the planet on a grand scale.

In fact, artificial intelligence is created through human labor.
Digital pieceworkers paid pennies per click for repetitive tasks, Amazon warehouse workers pacing themselves to the algorithms of a vast logistics empire, workers in Chicago slaughterhouses butchering and processing carcasses… how do they adapt to AI systems that intensify surveillance and control on behalf of their employers? By closely examining how time-management mechanisms fit human behavior to the repetitive movements of robots and assembly-line machines, and what problems this creates, we can glimpse what the future of work may look like.
The logic of extraction that defines the relationship between the Earth and human labor is also linked to how AI uses and understands data.


Any publicly accessible digital material is freely collected for training data sets used to create AI models, which are then used to improve algorithms that perform functions such as facial recognition, speech prediction, and object detection.
However, the current practice of AI utilizing data raises significant ethical, methodological, and epistemological concerns, in addition to serious issues such as privacy breaches and surveillance capitalism.
Chief among these is the practice of classification within artificial intelligence systems.
We examine how current systems rely on binary gender, homogenous racial classifications, and questionable assessments of personality and creditworthiness as the primary basis for predicting identity, and how AI systems reinforce hierarchies and amplify inequality in the process.

How AI recognizes human emotions is also an important issue.
We examine the various claims surrounding emotion recognition through the research of psychologist Paul Ekman and others, and explore the history of emotion recognition through a trip to a mountain village in Papua New Guinea.
We explore how technology companies like Amazon, Microsoft, and IBM design and deploy emotion detection systems, and examine the scientific controversies, concerns, and potential side effects that arise.

Another key point of this book is how AI systems are being used as tools of state power.
Past and present military applications of artificial intelligence have shaped practices such as surveillance, data extraction, and risk assessment.
The close relationship between the tech sector and the military is being shaped to serve a strongly nationalist agenda.
Meanwhile, extralegal tools once confined to the intelligence sector are spreading from the military into commercial technology, finding their way into classrooms, police stations, workplaces, and employment support centers. The military logic that shaped AI systems has become part of local government operations, further distorting the relationship between the state and its citizens.

Understanding how artificial intelligence functions as a power structure and connects infrastructure, capital, and labor, and developing alternatives, is directly related to our present and future.
From Uber drivers being cleverly controlled, to undocumented immigrants being tracked, to public housing tenants protesting facial recognition systems in their homes, AI systems are built according to the logic of capital, policing, and militarization, a combination that further exacerbates existing power imbalances.
There is therefore an urgent need to open new paths for AI toward justice and equality, rather than industrial extraction and discrimination.
There is a growing need to reject a technology-first approach and expand national and international movements to confront underlying inequalities and injustices, demanding justice for labor, climate, and data.
This book vividly examines how artificial intelligence is actually being created, helping us break free from previous biased perceptions and uncertain technological optimism and envision a realistic and sustainable future.
Product Details
- Publication date: November 29, 2022
- Page count, weight, size: 392 pages | 604g | 145*218*25mm
- ISBN13: 9791188941896
- ISBN10: 1188941895

