
AI Risk Society and Politics
Description
Book Introduction
We analyze how social media algorithms undermine the foundations of democracy by fostering confirmation bias and anger.
Drawing on the theory of risk society, we reinterpret the technological, political, and cognitive environments and propose a new political imagination for restoring our sovereignty over judgment.
AI Encyclopedia: you can find the artificial intelligence knowledge you need at aiseries.oopy.io.
Preview
Contents
In the Age of Confirmation Bias, Who Are Algorithms For?
01 AI, Designed for Humans, Distorts Humans
02 Between Likes and Clicks, Democracy Disappears
03 How Does AI Polarize Society?
04 Facts Disappear, Only Beliefs Remain
05 Risk Society: Uncertainty Created by Technology
06 Could Algorithms Become the New Disaster?
07 The Power of Algorithms: Transparent Yet Opaque
08 The Moment When Non-Politics Becomes Politics
09 30 Years of Disaster Discourse: The Dangers of Politicizing Korean Society
10 Is an Algorithm for Democracy Possible?
Into the book
Who will teach AI?
What kind of human being will design AI, with what values, and for what purpose?
When an imperfect teacher provides imperfect teachings, AI, as its student, can never escape those limitations.
Rather, it replicates those limitations and expands them into a more sophisticated, faster, and more universal system.
Ultimately, we have entered an era in which AI teaches humans; in reality, imperfect, human-created AI is teaching humans back.
This is the paradox of the AI era.
While technology has become more accurate, more efficient, and capable of broader learning, its foundation remains rife with human imperfections and biases. No matter how much data AI learns, it cannot guarantee ethical reflection or philosophical thought.
--- From "01 AI, Designed for Humans, Distorts Humans"
As we have seen, the media, which conveys information, is an actor with great social responsibility.
However, as this responsibility shifts to algorithm-based AI, a structural problem arises. AI is not held accountable and is more likely to choose more provocative and emotionally charged framing to increase clicks and dwell time.
In other words, the algorithm operates in a way that stimulates the user's interest, and in the process, the balance of information and public interest may be sacrificed.
--- From "03 How Does AI Polarize Society?"
The problem now is that these risk interpretations and narratives are beginning to be orchestrated by AI algorithms.
When a disaster occurs, people watch the news and search for information.
But which news appears first? Which hashtags trend in real time? Who appears first on people's timelines? Algorithms now determine all of this.
The algorithm prioritizes clicks and responses.
That's why the more provocative and emotional the information, the more it spreads.
Fear, anger, disgust, and conspiracy theories spread quickly, while calm explanations and complex analyses take a backseat.
In this way, algorithms bias our interpretation of disasters, make certain discourses "mainstream," and reshape the very structure of risk.
--- From "06 Could Algorithms Become the New Disaster?"
In recent years, the politicization of disaster discourse has become more complex.
And at its heart is a new technological actor: AI algorithms. AI provides different information and interpretations based on individual preferences and political leanings.
(…) AI algorithms are never neutral.
Algorithms provide information that appears objective and neutral, but in reality they prioritize provocative content that keeps users on the page longer and clicking more often.
Ultimately, this further polarizes social perceptions surrounding disasters, creating disparate realities where communication is impossible.
--- From "09 30 Years of Disaster Discourse: The Dangers of Politicizing Korean Society"
Publisher's Review
The politics of outrage reshaped by algorithms: Re-examining democracy in an AI-risk society.
These days the square is full of anger.
It has become commonplace for people who barely know each other to express intense hatred for each other simply by their presence.
Tracing the politics of this unfamiliar anger, we reveal the face of the technology—the AI algorithms—that quietly operate behind the scenes. Social media and recommendation systems, under the guise of "knowing you well," reinforce users' confirmation bias, force them to see only what they want to see, and create citizens who live in different worlds.
This is not simply a technical issue; it is a restructuring of social perception and a structural change that shakes the foundations of democracy.
The author interprets AI as a new type of disaster based on Ulrich Beck's risk society theory.
This is because it causes not physical harm but cognitive polarization, social disconnection, and political excess.
Beyond a critique of technology, we ask what kind of algorithmic ecosystem we should demand, how democracy should protect citizens' cognitive environment, and how we can restore the sovereignty of political judgment.
Invoking the metaphor of the brakes and the steering wheel, the book does not reject technology but points toward a normative direction in which humans regain control.
This is a political guide for the AI era, helping citizens trapped in different worlds rediscover a common world.
Product Details
- Date of issue: November 25, 2025
- Page count and size: 125 pages | 128×188×7 mm
- ISBN13: 9791143011664