
Linear Algebra for Deep Learning
Description
Book Introduction
Covering the fundamentals of mathematics for data science and machine learning, this textbook is ideal for an introductory course. It faithfully captures the content of MIT Professor Gilbert Strang's lectures on linear algebra and deep learning.
It covers the mathematical theory needed to understand deep learning and neural networks, including linear algebra, optimization, probability, and statistics.
Table of Contents
Chapter 1: Linear Algebra Highlights
1.1 Multiplication Ax using columns of matrix A
1.2 Matrix Multiplication AB
1.3 Four basic subspaces
1.4 Elimination and A=LU
1.5 Orthogonal matrices and subspaces
1.6 Eigenvalues and Eigenvectors
1.7 Symmetric positive definite matrix
1.8 Singular Values and Singular Vectors in Singular Value Decomposition
1.9 Principal components and optimal low-rank matrices
1.10 Rayleigh Quotients and Generalized Eigenvalues
1.11 Norms of Vectors, Functions, and Matrices
1.12 Factoring Matrices and Tensors: Positive and Sparse
Chapter 2: Computations with Large Matrices
2.1 Numerical linear algebra
2.2 Least Squares: Four Ways
2.3 Three Bases for the Column Space
2.4 Randomized linear algebra
Chapter 3: Low-Rank and Compressed Sensing
3.1 Changes in A^{-1} due to changes in A
3.2 Eigenvalue interlacing and low-rank signals
3.3 Rapidly decreasing singular values
3.4 Split Algorithms for ℓ² + ℓ¹
3.5 Compressed Sensing and Matrix Completion
Chapter 4: Special Matrices
4.1 Fourier Transform: Discrete and Continuous
4.2 Shift Matrices and Circulant Matrices
4.3 Kronecker Product A ⊗ B
4.4 Sine and Cosine Transforms from Kronecker Sums
4.5 Toeplitz matrices and shift-invariant filters
4.6 Graphs, the Laplacian, and Kirchhoff's Laws
4.7 Clustering using spectral methods and K-means
4.8 Rank 1 Matrix Completion
4.9 Orthogonal Procrustes Problem
4.10 Distance matrix
Chapter 5: Probability and Statistics
5.1 Mean, Variance, and Probability
5.2 Probability distribution
5.3 Moment Generating Functions, Cumulant Generating Functions, and Statistical Inequalities
5.4 Covariance matrix and joint probability
5.5 Multivariate Normal Distribution and Weighted Least Squares
5.6 Markov chain
Chapter 6: Optimization
6.1 Minimum Problems: Convexity and Newton's Method
6.2 Lagrange multipliers and cost derivatives
6.3 Linear programming, game theory, and duality
6.4 Gradient descent to the minimum
6.5 Stochastic Gradient Descent and ADAM
Chapter 7: Learning from Data
7.1 Structure of deep neural networks
7.2 Convolutional Neural Networks
7.3 Backpropagation and the chain rule
7.4 Hyperparameters: Fateful Decisions
7.5 The World of Machine Learning
Appendix A: References
Appendix B: Eigenvalues and Singular Values of Rank 1 Matrices
Appendix C: Code and Algorithms for Numerical Linear Algebra
Appendix D: Counting Parameters in the Basic Factorizations
Appendix E: List of Books on Machine Learning

Publisher's Review
Uncovering the relationship between linear algebra and deep learning!
It contains the exact content of Professor Gilbert Strang's lectures on linear algebra and deep learning at MIT.
It helps you properly understand the mathematical theory behind deep learning and neural networks, including linear algebra, optimization, probability, and statistics.
This book is recommended for readers who want to understand the relationship between mathematics and deep learning and become leading experts in the field:
· College or graduate students in science and engineering fields interested in machine learning, deep learning, and data science
· Deep learning experts who are curious about the mathematical foundations
Overseas book reviews
“Imagine your graduate school has a new Master's in Data Science program, and you're designing a semester-long course covering the fundamentals of mathematics.
As a math professor, you'll likely be wondering what data science is, what role math plays in it, what content is most important to cover when time is limited, and how to design your classes.
I also faced a similar problem.
However, this concern was completely resolved the moment I read Professor Gilbert Strang's book, "Linear Algebra and Learning from Data."
Many people already know how well Professor Strang conveys the fundamentals of mathematics through his linear algebra courses and books.
[...]
This book covers the fundamentals of mathematics for data science and machine learning, making it an ideal textbook for introductory courses.
The material presented in this book is invaluable, and several chapters deserve to be covered in depth in subsequent courses.
I recommend this perfect book as a textbook for introductory graduate courses.
And I will continue to use this book in my courses.
“My only complaint about this book is, ‘Why did it only come out now?’”
- Professor Volker H. Schulz (University of Trier, Germany), excerpted from a book review for the Society for Industrial and Applied Mathematics (SIAM)
Related books
Strang Calculus (Hanbit Academy, 2019)
Strang Differential Equations and Linear Algebra (Hanbit Academy, 2019)
Hands-On Machine Learning (2nd Edition) (Hanbit Media, 2020)
Product Details
- Publication date: August 5, 2020
- Pages, weight, size: 560 pages | 1,065 g | 188 × 257 × 35 mm
- ISBN13: 9791156644972
- ISBN10: 1156644976