AI and semiconductors
Description
Book Introduction
This book traces the evolution and role of the semiconductor technology that has led the AI era. It also explores the co-evolution of AI and semiconductors, from AI-specific chips such as GPUs, TPUs, and NPUs to neuromorphic and quantum computing.

Contents
Semiconductors that led the arrival of the AI era

01 The Emergence of AI and the Evolution of Semiconductors
02 Semiconductor Design for AI
03 A Look Inside the AI Accelerator
04 Semiconductors for AI Learning and Inference
05 AI and Memory Semiconductors
06 AI Semiconductor Manufacturing Process and Heterogeneous Integration
07 AI Utilization in the Semiconductor Industry
08 Energy Efficiency and Sustainability of AI Semiconductors
09 Future Semiconductor Technology and AI
10 Social Challenges Surrounding AI and Semiconductors

Into the Book
As the AI industry grows, more big tech companies are developing their own AI semiconductors to install in their data centers. As the enterprise market for operating IT infrastructure expands, the number of companies developing new NPUs is also increasing.
A representative example is Canada's Tenstorrent, led by 'semiconductor design legend' Jim Keller, which is drawing attention in the AI hardware accelerator market; in Korea, FuriosaAI and Rebellions are developing NPU chips specialized for inference.
--- From "01_The Emergence of AI and the Evolution of Semiconductors"

The performance of AI accelerators is closely tied to the limits of memory bandwidth. The Von Neumann architecture, the traditional computer architecture consisting of a CPU, main memory, and input/output devices, suffers from a bottleneck caused by the limited data transfer speed between memory and the compute units, known as the Von Neumann bottleneck.
In AI model computations that require large-scale data processing, this bottleneck can significantly reduce computational efficiency.
--- From "03_A Look Inside the AI ​​Accelerator"

Advanced packaging technologies are evolving from conventional two-dimensional (2D) to 2.5D and 3D technologies, playing a crucial role in increasing chip integration and improving power efficiency.
2.5D packaging technology is a method of horizontally arranging and connecting multiple chips using a silicon interposer.
This technology is effective in increasing data transmission speed and reducing power consumption by optimizing connections between chips.
For example, NVIDIA's GPU accelerators use 2.5D packaging to place the GPU and multiple high-bandwidth memory (HBM) stacks side by side on an interposer, enabling fast data transfer and high performance.
3D packaging technology stacks chips vertically, enabling direct connections between chips using through-silicon vias (TSVs).
A representative application of 3D packaging technology is HBM.
--- From "06_AI Semiconductor Manufacturing Process and Heterogeneous Integration"

Advances in AI semiconductor technology are expected to improve the performance and energy efficiency of AI systems.
For example, neuromorphic chips mimic the structure of the human brain and can perform complex AI tasks with very low power consumption. Quantum computing, when combined with AI, is expected to efficiently process complex problems that are difficult to solve even with today's supercomputers.
These technological advancements could increase the scale and complexity of AI models while reducing energy consumption, potentially enabling the construction of more sustainable AI systems.
--- From "09_Future Semiconductor Technology and AI"

Publisher's Review
Semiconductors are driving the AI era.

We closely examine the trends and structural transformations in semiconductor technology that led to the advent of the AI era, as well as the co-evolution of the two.
NVIDIA's GPUs have built an AI ecosystem as AI accelerators, and Google has led AI innovation with its Transformer architecture and TPU.
We highlight the turning point of our time through key figures in AI and semiconductor development, such as Geoffrey Hinton and Jensen Huang.
This book explains how cutting-edge technologies, such as the 3nm process and HBM, have changed AI computation, based on semiconductor principles such as Moore's Law and Dennard scaling.
In particular, we compare and analyze the differences and use cases of AI-specific semiconductors such as GPUs, TPUs, and NPUs.
It also provides a fascinating look at how AI is revolutionizing circuit design and manufacturing processes in the semiconductor industry, and at the future possibilities that neuromorphic chips and quantum computing open up for AI. The book compellingly demonstrates that AI semiconductors are no longer the exclusive domain of a specific industry, but core infrastructure transforming future industries and society.
GOODS SPECIFICS
- Date of issue: April 14, 2025
- Page count and size: 117 pages | 128 × 188 × 8 mm
- ISBN13: 9791173077517
