
Deep Learning in Your Hands: From the Basics to Practical Programming
Description
Book Introduction
From the fundamentals and mathematics of deep learning
to hands-on, practical programming in Python,
this book covers it all in a single volume.
The best book for beginners in deep learning!
Unlike any other book published so far, it explains deep learning in a very easy-to-understand way using formulas and code.
Furthermore, the book's greatest strength lies in its concise, easy-to-understand example code, which culminates at the end of the book in a complete, practical deep learning program, giving readers a sense of accomplishment and helping them keep reading without giving up.
This book explains in detail the essential elements of deep learning, starting with Python and basic mathematics and progressing to backpropagation and convolutional neural networks (CNNs).
By working through the Python programming step by step and in order, readers can firmly acquire the basics of deep learning.
Table of Contents
[Chapter 1] What is Deep Learning?
1.1 What is Intelligence?
1.2 Artificial Intelligence (AI)
1.3 Machine Learning
1.4 Neural Networks
1.5 Deep Learning Overview
1.6 History of Artificial Intelligence and Deep Learning
__1.6.1 The First Golden Age of Artificial Intelligence: 1950s–1960s
__1.6.2 The Second Golden Age of Artificial Intelligence: 1980s to late 1990s
__1.6.3 The Third Golden Age of Artificial Intelligence: After the 2000s
[Chapter 2] Python Overview
2.1 Why Use Python?
2.2 Using Anaconda and Jupyter Notebook
__2.2.1 Anaconda Download
__2.2.2 Installing Anaconda
__2.2.3 Running Jupyter Notebook
__2.2.4 Using Jupyter Notebook
__2.2.5 Shutting down Jupyter Notebook
2.3 Python Grammar
__2.3.1 Variables and variable types
__2.3.2 Operators
__2.3.3 List
__2.3.4 Tuple
__2.3.5 Dictionary
__2.3.6 if statement
__2.3.7 for loop
__2.3.8 while loop
__2.3.9 Comprehensions
__2.3.10 Functions
__2.3.11 Variable scope
__2.3.12 Classes
2.4 NumPy
__2.4.1 NumPy Import
__2.4.2 NumPy arrays
__2.4.3 Various functions for creating arrays
__2.4.4 Shape transformation using reshape
__2.4.5 Array Operations
__2.4.6 Broadcast
__2.4.7 Accessing array elements
__2.4.8 Slicing
__2.4.9 Axis and transpose method
__2.4.10 NumPy functions
2.5 Matplotlib
__2.5.1 Module Import
__2.5.2 Graph Creation
__2.5.3 Graph Design
__2.5.4 Scatter plot display
__2.5.5 Image display
[Chapter 3] Mathematics for Deep Learning
3.1 Mathematical Symbols
__3.1.1 Expressing sums with sigma (Σ)
__3.1.2 Natural constant e
__3.1.3 Natural logarithm log
3.2 Linear Algebra
__3.2.1 Scalar
__3.2.2 Vector
__3.2.3 Matrix
__3.2.4 Tensor
__3.2.5 Multiplication of scalars and matrices
__3.2.6 Element-wise multiplication
__3.2.7 Matrix Multiplication
__3.2.8 Matrix Transpose
3.3 Differentiation
__3.3.1 Ordinary differentiation
__3.3.2 Basic formulas of differentiation
__3.3.3 Chain rule
__3.3.4 Partial differentiation
__3.3.5 Total Differentiation
__3.3.6 Chain rule of multiple variables
3.4 Normal distribution
[Chapter 4] Neural Networks
4.1 Neural Networks
4.2 Modeling of Neurons
4.3 Networking of neurons
4.4 Regression and Classification
__4.4.1 Regression
__4.4.2 Classification
4.5 Activation function
__4.5.1 Step function
__4.5.2 Sigmoid function
__4.5.3 tanh
__4.5.4 ReLU
__4.5.5 Leaky ReLU
__4.5.6 Identity function
__4.5.7 Softmax function
4.6 Neural Network Implementation
__4.6.1 Single Neuron Implementation
__4.6.2 The Effect of Weights and Bias
__4.6.3 Neural Network Implementation
__4.6.4 Implementation of each layer
__4.6.5 Neural Networks (Regression)
__4.6.6 Expressive power of neural networks
__4.6.7 Neural Network (Classification)
[Chapter 5] Backpropagation
5.1 Learning Rules
__5.1.1 Hebb's Rule
__5.1.2 Delta Rule
5.2 What is backpropagation?
5.3 Training and Test Data
5.4 Loss function
__5.4.1 Sum of squared errors
__5.4.2 Cross-entropy error
5.5 Gradient descent
__5.5.1 Overview of Gradient Descent
__5.5.2 How to find the gradient
__5.5.3 Output layer gradient
__5.5.4 Gradient of input values in the output layer
__5.5.5 Hidden layer gradient
__5.5.6 Summary of gradient equations
__5.5.7 How to find the gradient in a regression problem
__5.5.8 How to find the gradient in a classification problem
5.6 Optimization Algorithm
__5.6.1 Overview of Optimization Algorithms
__5.6.2 Stochastic Gradient Descent
__5.6.3 Momentum
__5.6.4 Adagrad
__5.6.5 RMSProp
__5.6.6 Adam
5.7 Batch size
__5.7.1 Epochs and Batches
__5.7.2 Batch Learning
__5.7.3 Online Learning
__5.7.4 Mini-batch learning
5.8 Matrix Operations
__5.8.1 Matrix Format
__5.8.2 Forward propagation using matrices
__5.8.3 Backpropagation using matrices
5.9 Implementing Backpropagation in Regression Problems
__5.9.1 Regression Example (Learning the sine function)
__5.9.2 Output Layer Implementation
__5.9.3 Hidden Layer Implementation
__5.9.4 Backpropagation Implementation
__5.9.5 Full code for implementing backpropagation (regression)
__5.9.6 Execution results
5.10 Implementing Backpropagation in Classification Problems
__5.10.1 Classification Example (Learning Region Membership)
__5.10.2 Implementation of each layer
__5.10.3 Backpropagation implementation full code (classification)
__5.10.4 Execution Results
[Chapter 6] Deep Learning Implementation
6.1 Problems arising from multi-layering
__6.1.1 Local Optimum Trap
__6.1.2 Overfitting
__6.1.3 Vanishing gradients
__6.1.4 Long training times
6.2 Addressing These Problems
__6.2.1 Hyperparameter Optimization
__6.2.2 Regularization
__6.2.3 Initial weight and bias values
__6.2.4 Early stopping
__6.2.5 Data augmentation
__6.2.6 Data Preprocessing
__6.2.7 Dropout
6.3 Classification of iris varieties
__6.3.1 Iris Data Set
__6.3.2 Training and Test Data
__6.3.3 Neural Network Configuration
__6.3.4 Each setting related to learning
6.4 Deep Learning Implementation
__6.4.1 Data input and preprocessing
__6.4.2 Implementation of each layer
__6.4.3 Building a Neural Network
__6.4.4 Mini-batch implementation
__6.4.5 Measuring accuracy
__6.4.6 Complete code for iris variety classification
__6.4.7 Execution results
__6.4.8 Measures to prevent overfitting
__6.4.9 Adagrad Implementation
__6.4.10 Dropout Implementation
__6.4.11 Results of overfitting prevention measures
__6.4.12 Variety classification
[Chapter 7] Convolutional Neural Networks (CNNs)
7.1 Overview of Convolutional Neural Networks (CNNs)
__7.1.1 Visual Processing System
__7.1.2 CNN Structure
__7.1.3 Convolutional Layer
__7.1.4 Pooling layer
__7.1.5 Fully connected layer
__7.1.6 Padding
__7.1.7 Stride
__7.1.8 CNN Training
__7.1.9 Summary of variables
7.2 im2col and col2im
__7.2.1 Overview of im2col and col2im
__7.2.2 im2col algorithm
__7.2.3 Simple im2col implementation
__7.2.4 Practical im2col code considering batches and channels
__7.2.5 col2im algorithm
__7.2.6 col2im implementation
7.3 Convolutional Layer Implementation
__7.3.1 Implementation Overview
__7.3.2 Forward propagation
__7.3.3 Backpropagation
7.4 Pooling Layer Implementation
__7.4.1 Implementation Process Overview
__7.4.2 Forward propagation
__7.4.3 Backpropagation
7.5 Fully connected layer implementation
7.6 Convolutional Neural Network Implementation
__7.6.1 Dataset used
__7.6.2 Neural network to be built
__7.6.3 CNN code
__7.6.4 Execution results
__7.6.5 Visualizing Convolutional Layers
__7.6.6 Convolutional Layer Effect
7.7 Deeper Neural Networks
__7.7.1 Building a Neural Network
__7.7.2 Execution results
[Chapter 8] Other Deep Learning Technologies
8.1 Recurrent Neural Networks (RNNs)
__8.1.1 Overview of RNN
__8.1.2 LSTM
__8.1.3 GRU
8.2 Natural Language Processing
__8.2.1 Morphological Analysis
__8.2.2 Word Embedding
8.3 Generative Model
__8.3.1 Generative Adversarial Networks (GANs)
__8.3.2 VAE
8.4 Reinforcement Learning
__8.4.1 Overview of Reinforcement Learning
__8.4.2 Deep Reinforcement Learning
8.5 GPU Utilization
__8.5.1 What is a GPU?
__8.5.2 Using GPUs in Deep Learning
8.6 Deep Learning Frameworks
8.7 The Future of Deep Learning
Publisher's Review
The structure of this book is as follows:
Chapter 1: What is Deep Learning?
We introduce the relationship between machine learning, artificial intelligence, and deep learning, and briefly explain the path that artificial intelligence has taken so far.
The author, who is deeply interested in brain science, looks at deep learning from a brain science perspective and presents various interesting stories about the similarities between deep learning and the brain.
Chapter 2: Python Overview
I'll only introduce the core of Python syntax and Jupyter notebooks.
What I mean by core here is that I focus only on what's necessary for the deep learning code implemented throughout the book.
The scope is therefore narrower than that of a book introducing Python as a whole, but it explains everything needed to follow the code in this book, making it an effective crash course for readers who are not yet familiar with Python.
Chapter 3: Mathematics Required for Deep Learning
As in Chapter 2, we only introduce the core mathematics required to understand and implement deep learning.
It covers linear algebra and differentiation, and readers who have completed high school courses can follow along without difficulty.
For this section, I recommend that you do not just read with your eyes, but work through the formulas with a pencil on a blank sheet of paper, just as you would when studying for the CSAT.
The linear algebra section provides practice code using NumPy in Python (a small sketch of this kind of exercise follows below). If you are a beginner, don't neglect this part; practice it repeatedly.
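For readers wondering what that NumPy practice looks like, here is a minimal, illustrative sketch covering the operations listed in sections 3.2.5 through 3.2.8 (the variable names are mine, not the book's):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(3 * A)         # multiplication of a scalar and a matrix
print(A * B)         # element-wise multiplication
print(np.dot(A, B))  # matrix multiplication
print(A.T)           # matrix transpose
```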
Chapter 4: Neural Networks
Introducing neural networks, the background theory of deep learning.
The core concepts of neural networks, including the principles of neural networks, the connection between layers composed of neurons (nodes), forward and backward propagation, weights and biases, and activation functions, are explained with friendly code.
In particular, the part that shows the role and influence of weights and biases in neural networks with actual code is very impressive.
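To give a flavor of what such code involves, here is a minimal single-neuron sketch of my own (not the book's code): a weighted sum of the inputs plus a bias, passed through a sigmoid activation.

```python
import numpy as np

def sigmoid(u):
    # sigmoid activation function
    return 1 / (1 + np.exp(-u))

x = np.array([1.0, 2.0])   # inputs
w = np.array([0.5, -0.3])  # weights: how strongly each input contributes
b = 0.1                    # bias: shifts the activation threshold

y = sigmoid(np.dot(x, w) + b)  # neuron output
print(y)
```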
Chapter 5: Backpropagation
Backpropagation is the process of reducing the error between the output result and the actual value in a neural network.
In this process, various optimization algorithms such as gradient descent, Adagrad, and Adam are explained using formulas and code.
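As a rough illustration of the idea these optimizers build on, here is a toy gradient-descent sketch of my own (not the book's code): a single weight is repeatedly nudged against its error gradient.

```python
# Toy example: fit y = w * x to a single data point (x, t) by plain
# gradient descent, the update rule that SGD, Adagrad, and Adam refine.
x, t = 2.0, 6.0   # input and target (the ideal weight is 3.0)
w = 0.0           # initial weight
eta = 0.1         # learning rate

for _ in range(100):
    y = w * x            # forward pass
    grad = (y - t) * x   # dE/dw for the squared error E = 0.5 * (y - t)**2
    w -= eta * grad      # gradient-descent update: w <- w - eta * dE/dw

print(w)  # converges toward 3.0
```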
Chapter 6: Deep Learning Implementation
Deep learning is, as the word "deep" suggests, a method of learning data in depth by stacking many layers in a neural network.
Stacking many layers like this improves the performance of the neural network, but it also causes various problems such as overfitting and vanishing gradient.
The chapter is easy to grasp intuitively because it walks through solving these problems in code, using the famous Iris dataset.
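As one illustration of the kind of setup involved, the sketch below loads the Iris data and splits it into training and test sets; whether the book itself uses scikit-learn for this step is my assumption, not something stated here.

```python
import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
X, y = iris.data, iris.target  # 150 samples, 4 features, 3 varieties

rng = np.random.default_rng(0)
idx = rng.permutation(len(X))                # shuffle the sample indices
X_train, y_train = X[idx[:75]], y[idx[:75]]  # training half
X_test, y_test = X[idx[75:]], y[idx[75:]]    # test half
```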
Chapter 7: Convolutional Neural Networks (CNNs)
This is the ultimate goal of this book.
Deep learning burst onto the scene at an image recognition competition, and it now achieves outstanding results in image classification tasks.
Chapter 7 is where we put everything learned so far to use and complete practical code that handles the convolutions, filters, channels, and batch sizes required for image processing.
This is the longest chapter in the book, so you may tire while working through it. However, if you persevere to the end, you will implement and understand the magic of deep learning, classifying handwritten digit images with near-perfect accuracy, and your effort will be rewarded all at once.
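As a small taste of the bookkeeping involved, here is the standard formula for a convolutional layer's output size given padding and stride (an illustrative sketch; the function name is mine, not the book's).

```python
def conv_output_size(input_size, filter_size, stride=1, pad=0):
    # standard formula: floor((input + 2*pad - filter) / stride) + 1
    return (input_size + 2 * pad - filter_size) // stride + 1

# A 28x28 image (e.g. a handwritten digit), 3x3 filter, stride 1, padding 1
print(conv_output_size(28, 3, stride=1, pad=1))  # -> 28 (size preserved)
```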
Chapter 8: Other Deep Learning Technologies
Introducing the latest deep learning technology.
It's filled with content that gives you an idea of how far deep learning has advanced and what the future of deep learning will look like.
It's very helpful and reads like a guide for moving beyond the basics to a higher level.
[Key Features of this Book]
- Implementing deep learning algorithms in Python programming code without using a deep learning framework.
- Core Python grammar essential for implementing deep learning
- Basic programming knowledge using Python and the numerical computation library NumPy
- Basic mathematical theory needed to understand neural networks, such as differentiation and linear algebra, and the principles of turning those formulas into code.
- Through step-by-step practice, you will ultimately reach the goal of implementing and applying a convolutional neural network (CNN).
- Provides complete Python code that readers can apply on their own and develop into more advanced code.
- Intuitively and easily understand how deep learning works by matching it to the behavior of the human brain.
- Introduction to the current state of development and future technologies and examples of deep learning.
[For whom this book is intended]
- Complete beginners who want to enter the field: students and general readers who have a budding interest in machine learning, artificial intelligence, and deep learning but are unsure which book to start with, as well as developers coming from other fields.
- People who have a general understanding of deep learning but want to understand its history, theoretical background, and mathematical logic in more detail.
- People who want to clearly understand deep learning algorithms through formulas, implement them in programming code, and cover the entire process in a single book.
- Developers who want to write practical deep learning code, apply it immediately at work or in the field, and go on to implement more advanced deep learning.
[Author's Story]
To our readers in Korea,
Thank you for your interest in my book, "Deep Learning in Your Hands: From the Basics to Practical Programming."
I'm proud to say that this book, unlike any other book published to date, explains deep learning in a very easy-to-understand way.
In Japan, it has been read by many readers since its publication in August 2018.
Artificial intelligence technology, represented by deep learning, is attracting global attention, and numerous companies and public institutions are exploring ways to utilize it in various fields.
However, most people still think that deep learning is a difficult field to learn.
To break down these barriers, this book thoroughly explains the essential elements of deep learning, from the Python programming language and basic mathematics to convolutional neural networks. It's structured so that by studying step-by-step, you can solidify your grasp of the fundamentals of deep learning.
Today, artificial intelligence is one of the most valuable technologies to learn, not only for its technical aspects but also as a liberal arts subject that fosters imagination for the future. As the author, I would be delighted if readers in Korea could develop their own perspectives on artificial intelligence through this book.
Now, let's explore the world of deep learning together!
Product Details
- Publication date: June 18, 2019
- Page count and size: 364 pages | 180 × 235 × 18 mm
- ISBN13: 9791189909024
- ISBN10: 1189909022