The AI Bubble Is Coming
Description
Book Introduction
“No company is immune to the AI bubble.” —Sundar Pichai, CEO of Google
"We're in an AI bubble right now." —Sam Altman, CEO of OpenAI

★ “If you haven’t read this book, don’t invest a single penny in AI stocks!”
★ The first "AI bubble" detection guide, written by computer scientists named among Time's "100 Most Influential People in AI."


AI will wipe out humanity? Wrong.
The real danger is 'fake AI' targeting your wallet.
The two authors, computer scientists at Princeton University working at the forefront of AI research, assert:
“We are buying a fantasy, not a technology.” This book dispels the exaggerated marketing around “generative AI,” represented by ChatGPT, and scientifically demonstrates how “predictive AI,” which shapes our lives in areas such as employment, healthcare, and public safety, operates like a 19th-century “snake oil” scam.
This book presents the most sobering and practical criteria for distinguishing between "real innovation" and "fake technology" in the trillion-dollar AI market.

Contents
Preface to the new edition
Chapter 1: Facing the AI Landscape
Chapter 2: How Predictive AI Fails
Chapter 3: Why Can't AI Predict the Future?
Chapter 4: The Long Journey to Generative AI
Chapter 5: Is Advanced AI an Existential Threat?
Chapter 6: Why Can't AI Fix Social Media?
Chapter 7: Why Do Myths About AI Persist So Tenaciously?
Chapter 8: Where Should We Go?
Notes
Acknowledgements


From the Book
Imagine a world where there is only the collective noun 'vehicle' and no separate words for the various modes of transportation.
The people who live there rely solely on the word "vehicle" to refer to cars, buses, bicycles, spaceships, and any other means of transportation.
Conversation in such a world is bound to be confusing.
Some people think of bicycles, others of trucks, and there are heated debates about whether or not these 'vehicles' are environmentally friendly.
Now, let's take a closer look at the world we live in by replacing the word "vehicle" with "artificial intelligence."
--- pp. 17–18, from “Chapter 1: Facing the AI Landscape”

AI used in recruitment is a bleak example.
In the United States, three-quarters of employers use automated tools to screen job applicants.
Some tools screen applicants based on their resumes, while others test applicants through video interviews.
These tools are deployed at the very first stage of applicant screening.
If an applicant fails this automated screening, his or her résumé is discarded without a single human ever looking at it.
But the process is opaque.
Companies don't share how their recruiting software is built, leaving applicants in the dark about how they're being judged.
--- p. 78, from “Chapter 2: How Predictive AI Fails”

Some of the existing limits of prediction can be overcome by improving the quantity and quality of data, but others are inherent.
In some cases, such as predicting which cultural products will do well, we should never expect predictability to improve much.
Predictions about individual lives may improve somewhat, but we cannot expect drastic change.
Unfortunately, that doesn't stop companies from selling AI as a predictive tool for making important decisions about people.
Therefore, rather than passively hoping for improvements in predictive AI technology, it is important to resist the AI snake oil already in wide use today.
--- p. 147, from “Chapter 3: Why Can’t AI Predict the Future?”

A New York attorney wrote his brief with the help of ChatGPT.
He had apparently overlooked the warning that the chatbot may generate inaccurate information.
The brief was full of citations to cases that did not even exist.
The lawyer asked the chatbot whether the cases were real, and the chatbot answered yes, unable to recognize that the cases in question were fabricated.
Even the rulings it cited were made up.
The lawyer filed the brief based on the chatbot's answers and, predictably, left the presiding judge in a predicament. Lawyers being penalized for submitting sloppy AI-generated filings has since become a common sight.
--- pp. 154–155, from “Chapter 4: The Long Journey to Generative AI”

In 2023, the Future of Life Institute published an open letter urging AI labs worldwide to immediately pause, for six months, the training of AI systems more powerful than GPT-4.
Simply put, many in the AI community argue that AGI poses an imminent existential threat that requires bold global action to prevent.
If this is true, then nothing else in this book is important, and there is nothing more important in the world.
In what follows, we will show why these claims rest on a foundation of unfounded conventional wisdom. This is not to say that AGI will never be built, or that even if it were, there would be nothing to worry about. AGI may well be possible in the distant future, and society already has the tools to address its risks calmly.
--- pp. 219–220, from “Chapter 5: Is Advanced AI an Existential Threat?”

In 2016, Facebook sparked widespread outrage by deleting a photo commonly known as the "Napalm Girl" as soon as it was posted.
The photo in question shows a nine-year-old girl named Phan Thi Kim Phuc running naked in agony, badly burned by a napalm attack.
It played a historically significant role in publicizing the horrors of the Vietnam War and changing public opinion about the war, and is considered one of the most iconic photographs in the world.
At first glance, this photo-removal incident looks like a good example of the limitations of AI that we have been discussing.
--- pp. 288–289, from “Chapter 6: Why Can’t AI Fix Social Media?”

For example, OpenAI claimed that “GPT-4 achieves human-level performance on most professional and academic exams,” and indeed, GPT-4 scored in the top 10 percent on the bar exam. Many took this as a sign that AI would soon become good enough to replace lawyers.
But a lawyer's job isn't all about answering bar exam questions all day long.
Real-world usefulness is different from good performance in benchmarks.
Moreover, professional exams, especially the bar exam, tend to overemphasize the ability to memorize knowledge and underestimate practical skills, which are much more difficult to measure with standardized tests.
So not only do these tests fail to capture the true utility of AI, they also highlight only what AI happens to do well.
--- pp. 336–337, from “Chapter 7: Why Do Myths About AI Persist So Tenaciously?”

Today, companies spend tens of millions of dollars on advertising to avoid regulations.
When antitrust legislation was introduced to prevent Big Tech companies from favoring their products over those of their competitors, advocacy groups funded by the companies poured $36 million into advertising opposing the bill.
Supporters of the bill, who received no funding from these companies, spent $200,000 on pro-regulation ads, nearly two hundred times less than their opponents.
Big business has also funded advocacy groups to make it look as though small businesses oppose regulation.
For example, a group called American Edge, backed by Facebook, has recruited small-business owners across the country to voice opposition to regulation and has run ads stoking fears that American businesses might lose out to China.
--- pp. 381–382, from “Chapter 8: Where Should We Go?”

Publisher's Review
“If you can’t distinguish the true potential of AI from the hype of marketing,
we will all fall together when the bubble bursts.”


In the second half of 2024, the mood on Wall Street was unusual.
Goldman Sachs, in a report, fueled the "AI bubble" theory by asking, "We're pouring astronomical amounts of money into AI, but where are the corresponding returns?"
Even Google CEO Sundar Pichai warned that “no company is immune to the AI bubble.”
The "generative AI" craze fueled by ChatGPT has taken the world by storm over the past two years, but skepticism is rising in businesses, with some saying, "We've tried it, but it hasn't yielded any significant results."

In this age of unconditional praise and vague fear, what we need is not "faster AI" but "sharper eyes."
"The AI ​​Bubble is Coming" is a timely report from Princeton University's Center for Information Technology Policy, at the forefront of technology.
The authors declare that AI is not magic, and much of what we believe to be "intelligence" is nothing more than statistical trickery.
The reason to read this book now is clear: we need to identify the ‘real technologies’ that will survive even after the bubble bursts.

What can AI do, and what can it not do?
A sensational exposé of the "inconvenient truth" about AI that Silicon Valley wants to hide!

This book's most distinctive value is that it does not lump all AI together, but clearly distinguishes between 'generative AI' and 'predictive AI'.
While many AI books on the market focus on how to use ChatGPT or offer vague future predictions, this book exposes ineffective technologies with concrete data and examples.

First, the authors point out that predictive AI, which companies use for hiring, crime prevention, and medical diagnosis, is modern-day "snake oil."
The book exposes in detail the cases of ShotSpotter, the gunshot detection system Chicago deployed at a cost of hundreds of billions of won without ever proving its effectiveness in preventing crime, and the sepsis prediction model from Epic, the largest medical-records software company in the US, whose accuracy was little better than a coin flip.
The authors' scientific proof that the future of human society is inherently unpredictable, and that no amount of data can overcome this limitation, is painfully compelling.

Meanwhile, generative AI such as ChatGPT is also given a sober assessment.
While the authors acknowledge the utility of generative AI, they make it clear that it is not true "intelligence" but a "stochastic parrot" that strings together plausible sentences based on probabilities. Cases like CNET, which was left embarrassed when the articles its AI reporter wrote turned out to be riddled with errors, or the lawyer whose license was suspended after he submitted fake AI-generated precedents to court, clearly demonstrate the limits of this technology.
Without a single complex formula, the authors trace 80 years of computer science history and clearly draw the line between what AI can and cannot do.
This is not technological pessimism.
Rather, it is the most realistic kind of technological optimism: the conviction that true innovation is possible only by focusing on 'technology that works' and boldly discarding 'technology that doesn't.'

“AI is not a panacea.
It is just imperfect statistics that we ourselves created.”
A must-read for investors who want to use technology without being fooled by it.


The moment they close this book, readers will no longer feel anxious about the phrase “AI will change everything” that appears in the news.
Instead, they will become sharp questioners, asking, “What data was this AI trained on?” and “What is the margin of error of this predictive model?”

For business leaders and investors preparing to implement AI solutions worth hundreds of millions of won, this book will serve as an investment guide that can prevent billions of won in losses.
It also provides policymakers, who must prevent wasted tax money and protect the safety of citizens, with the insight to filter out AI-driven public programs that are all show and no substance.
Above all, for individuals who fear, "Will my job disappear?", it will serve as a reassuring prescription that confirms the human domain that AI can never replace.
Will you jump into the bubble and dance along, or will you keep a cool head and separate the wheat from the chaff? Between two extremes, the fear that AI will drive humanity to extinction and the illusion that AI will solve all our problems, this book thoroughly reveals the truth we have been missing.
PRODUCT DETAILS
- Publication date: December 24, 2025
- Page count, size: 420 pages | 145 × 220 × 30 mm
- ISBN13: 9791155818862
- ISBN10: 1155818865

