The world's easiest introduction to Bayesian statistics
Description
Book Introduction
Bayesian statistics for business use

Bayesian statistics came into business use alongside the spread of the Internet.
On the Internet, customers' purchase and search histories are collected automatically, and Bayesian statistics is overwhelmingly better than traditional statistics at estimating a customer's "type" from such data.


Many Internet companies now use Bayesian statistics in practice.
Among them, Microsoft is famous for having adopted Bayesian statistics in its business early on.
Bayesian statistics was built into the help function of the Windows OS, and Microsoft has also developed software that, when a user searches the web for something like "child illness symptoms", presents the most promising leads first.


In 1996, then Microsoft CEO Bill Gates stated in a newspaper that his company's competitive advantage lay in Bayesian statistics.
Google, meanwhile, is known to have used Bayesian techniques in the automatic translation system of its search engine.
Anyone doing business in this century, then, can put themselves at the forefront by mastering Bayesian statistics.
This book offers case studies and commentary that business people can put to use in their daily work.

Table of Contents
Lecture 0: Understanding Bayesian Statistics Using Only the Four Arithmetic Operations
Special features of this book
0-1 You can reach a level where you can actually use it even with no prior knowledge.
0-2 Solve with area and arithmetic
0-3 Even Bill Gates took note! Bayesian statistics for business
0-4 Bayesian statistics rely on human psychology.
0-5 Simple fill-in-the-blank practice problems, ideal for self-study

Part 1
Understanding the Essence of Bayesian Statistics


Lecture 1: Obtaining Information Changes the Probability
Basic usage of 'Bayesian estimation'
Summary of Lecture 1 / Practice Problems

Lecture 2: Bayesian Estimation Is Sometimes Counterintuitive
Things to keep in mind when using objective data
Summary of Lecture 2 / Practice Problems

Lecture 3: Estimation Works Even with Subjective Numbers
The "principle of insufficient reason", for use in difficult situations
Summary of Lecture 3 / Practice Problems

Lecture 4: Expanding the Range of Estimates Using the Probability of Probability
Summary of Lecture 4 / Practice Problems
Column: What kind of person was Bayes?

Lecture 5: The Process of Inference Is Made Visible
Features of Bayesian estimation
Summary of Lecture 5 / Practice Problems

Lecture 6: Clear and Rigorous, but of Limited Use
Neyman-Pearson estimation
Summary of Lecture 6 / Practice Problems

Lecture 7: Bayesian Estimation Draws a Plausible Conclusion from a Small Amount of Information
Differences from Neyman-Pearson estimation
Summary of Lecture 7 / Practice Problems

Lecture 8: Bayesian Inference Is Based on the Principle of Maximum Likelihood
The intersection of Bayesian and Neyman-Pearson statistics
Summary of Lecture 8 / Practice Problems

Lecture 9: Bayesian Estimation Is Sometimes Counterintuitive
Monty Hall problem and the three prisoners problem
Summary of Lecture 9 / Practice Problems
Column: Two rules behind "popular sayings"

Lecture 10: Estimation When Multiple Pieces of Information Are Obtained (1)
Use the 'multiplication formula for the probability of independent trials'
Summary of Lecture 10 / Practice Problems

Lecture 11: Estimation When Multiple Pieces of Information Are Obtained (2)
Example of a spam filter
Summary of Lecture 11 / Practice Problems

Lecture 12: In Bayesian Estimation, Information Can Be Used Sequentially
'Successive rationality'
Summary of Lecture 12 / Practice Problems

Lecture 13: Bayesian Estimation Becomes More Accurate as More Information Is Gained
Summary of Lecture 13 / Practice Problems
Column: The scholars who revived Bayes' inverse probability

Part 2
Completely self-taught! From "Probability Theory" to "Estimation Based on the Normal Distribution"


Lecture 14: "Probability" Has the Same Properties as "Area"
Fundamentals of Probability Theory
Summary of Lecture 14 / Practice Problems

Lecture 15: How to Represent Probability After Obtaining Information
Basic properties of 'conditional probability'
Summary of Lecture 15 / Practice Problems

Lecture 16: Probability Distributions for More General Estimation
Summary of Lecture 16 / Practice Problems

Lecture 17: The Beta Distribution, Characterized by Two Numbers
Summary of Lecture 17 / Practice Problems

Lecture 18: Expected Value, Which Captures the Characteristics of a Probability Distribution
Summary of Lecture 18 / Practice Problems
Column: What is "subjective probability"?

Lecture 19: Advanced Estimation Using Probability Distributions (1)
The case of the "beta distribution"
Summary of Lecture 19 / Practice Problems

Lecture 20: Born from Coin Tosses and Astronomical Observations
The "normal distribution"
Summary of Lecture 20 / Practice Problems

Lecture 21: Advanced Estimation Using Probability Distributions (2)
The case of the "normal distribution"
Summary of Lecture 21 / Practice Problems
Supplement ▶ Calculating the integral of the beta distribution

In conclusion
Practice Problem Answers

From the Book
Bayesian statistical techniques are also applied in many fields outside the IT industry.
In fax machines, for example, Bayesian statistics is used to correct noise in the transmitted image, bringing it closer to the original.
Bayesian statistics is also used in medicine, for example in "automatic diagnosis systems".
As you will discover as you read this book, the strengths of Bayesian statistics are that it can "make a guess from minimal data and become more accurate as data accumulates" and that it can "automatically update its guesses in real time as information arrives".
Readers will surely agree that this makes Bayesian statistics ideal for cutting-edge business.
--- p.009

This is called a "Bayesian update": the estimate is renewed, or updated, each time new information arrives.
In this book, the process above is called "Bayesian estimation".
Bayesian estimation can be summarized as "updating the prior probability to a posterior probability based on observation (information) of the behavior".
In this book, estimation in individual cases is called "Bayesian estimation", and the whole body of such estimation methods is called "Bayesian statistics".
--- p.031
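The summary quoted above ("updating the prior probability to a posterior probability based on observed information") can be sketched in a few lines of Python. This is an illustrative sketch, not code from the book; the two customer "types" and the likelihood numbers are invented for demonstration.

```python
def bayes_update(prior, likelihood):
    """Update prior probabilities to posterior probabilities, given the
    likelihood of the observed information under each hypothesis
    (Bayes' theorem with normalization)."""
    unnormalized = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnormalized)          # overall probability of the observation
    return [u / total for u in unnormalized]

# Hypothetical example: two customer types, prior 50/50.
# The observed action is three times as likely under type A as under type B.
prior = [0.5, 0.5]
likelihood = [0.6, 0.2]
posterior = bayes_update(prior, likelihood)
print(posterior)   # posterior ≈ [0.75, 0.25]: type A is now favored
```

Feeding the posterior back in as the next prior when further information arrives is exactly the repeated "Bayesian update" the excerpt describes.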

For an article on Bayesian estimation that I wrote for an entertainment magazine, I used the results of a questionnaire survey.
I had asked the editorial team in advance to survey working women about their Valentine's Day behavior.
What I wanted to know was: "With what probability do women give chocolate to men they like, and to men they don't?"
The editors ran a simple survey on an Internet questionnaire board aimed at working women, offering the answer options "0%, 50%, 100%".
Processed statistically, the results showed that on average women gave chocolate to someone they genuinely liked 42.5% of the time, and to someone they did not 22% of the time.
It was surprising that the probability of giving chocolate to someone you truly care about is under 50%, but the fact that the probability of giving it to someone you don't is as high as 22% brought home to me the power of the custom of giving chocolate out of courtesy.
--- p.050

At that point, I drew a ball from the jar in front of me, and it was black.
This black ball becomes the "evidence" for the conjecture.
So, from this evidence, can we determine whether this jar is A or B? This is a fairly simple inference; anyone would conclude that it is B.
The reasoning is obvious enough to need no explanation, but to see clearly "what inference is", let us describe the inference process in detail.
--- p.077
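The inference in this excerpt can be made numerical. The excerpt does not state the contents of jars A and B, so the compositions below are hypothetical, chosen only to illustrate why a single black ball points strongly to B.

```python
# Hypothetical jar compositions (the excerpt does not state them):
# jar A holds 1 black ball out of 10, jar B holds 9 black balls out of 10.
p_black = {"A": 0.1, "B": 0.9}
prior = {"A": 0.5, "B": 0.5}      # no prior reason to favor either jar

# Probability of the evidence (drawing a black ball) under each jar,
# combined via Bayes' theorem into posterior probabilities.
evidence = sum(prior[h] * p_black[h] for h in prior)
posterior = {h: prior[h] * p_black[h] / evidence for h in prior}
print(posterior)   # B comes out far more plausible than A
```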

As we have seen, Bayesian estimation has the advantage that it can always produce at least a provisional estimate in any setting, since it involves no significance-level setting like the hypothesis tests of Neyman-Pearson statistics.
Unlike the Neyman-Pearson method, it does not decide for either A or B; it leaves both possibilities open and presents the ratio between them.
The job of looking at the numbers and making a judgment is left to the person using the statistics.
That is why Bayesian estimation is sometimes called the "president's probability":
the estimation is delegated to the employees, and it is the president who makes the judgment from the reported figures.
--- p.093

Bayesian estimation is not especially new mathematics; it simply uses well-known probability formulas (the kind high school students learn).
However, in the sense that subjectivity enters through the prior probability, it can be called a theory on the borderline between mathematics and philosophy.
Indeed, applying Bayesian estimation in special settings produces results that run counter to common sense.
These can look like paradoxes.
In this lecture we introduce two paradoxes of Bayesian estimation, in the hope that they give you a feel for Bayesian estimation from a different angle than usual.
--- p.106

First, as before, let us set the prior, obtain one piece of information, and then calculate the posterior probability.
Here we frame it as "the computer mechanically judges whether an incoming email is spam or not".
First, before scanning the incoming mail, the computer assigns a prior probability to each type of mail: "spam" or "regular mail".
Here, let us apply the "principle of insufficient reason" and assign 0.5 to each.
That is, the filter initially evaluates the incoming mail as "spam with probability 0.5, regular mail with probability 0.5".
If a more credible probability is known, it is fine to use that as the prior instead.
--- p.133
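The setup in this excerpt can be sketched directly. The 0.5/0.5 prior follows the excerpt's "principle of insufficient reason"; the likelihoods (how often a marker word appears in spam versus regular mail) are invented for illustration and are not from the book.

```python
# Prior: principle of insufficient reason, as in the excerpt.
prior = {"spam": 0.5, "regular": 0.5}

# Assumed likelihoods: probability that a marker word (say, "free")
# appears in each kind of mail. These numbers are illustrative only.
p_word = {"spam": 0.7, "regular": 0.1}

# The filter scans the mail, finds the word, and updates the prior.
evidence = sum(prior[k] * p_word[k] for k in prior)
posterior = {k: prior[k] * p_word[k] / evidence for k in prior}
print(posterior)   # spam ≈ 0.875: the word shifts the odds toward spam
```

As the excerpt notes, if a more credible prior is known, only the `prior` dictionary changes; the update step stays the same.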

Publisher's Review
What kind of person was Bayes?
He wrote only one mathematical paper in his life.


Thomas Bayes, the Englishman who discovered Bayes' inverse probability, was born in 1702 and died in 1761.
Bayes studied theology and mathematics at the University of Edinburgh in Scotland, and later followed in his father's footsteps and became a pastor.
Bayes studied mathematics while working as a pastor.
At that time, it was not unusual for clergymen to study mathematics.
Bayes wrote only one mathematical paper in his lifetime.
It was the paper titled "An Essay towards solving a Problem in the Doctrine of Chances".
The origin of Bayesian inverse probability was in this paper.
Bayes does not seem to have considered the discovery very important and left it untouched for a long time, so it is unclear in what year it was written.
It was probably in the late 1740s, likely around 1748 or 1749.
It was his friend Richard Price, a pastor, who made Bayes' discovery known to the world.
Price went through Bayes's papers at the request of Bayes' relatives.
After discovering the aforementioned paper and organizing its line of thought, he published it in the Royal Society's Philosophical Transactions in 1764.
This is where Bayesian inverse probability made its debut.
But Price's report received little attention.
What changed the flow was the research of the French genius mathematician Laplace.
Laplace made many contributions to astronomy, physics, and mathematics, and before learning of Bayes' work he had already written a paper that came close to the idea of Bayes' inverse probability.
On hearing of Price's report, he realized it would complete his own earlier research, and around 1787 he gave Bayes' inverse probability its present formal shape.
Therefore, Bayes' inverse probability can also be seen as Laplace's discovery.

How does it differ from standard statistics?
Bayesian statistics relies on human psychology.

In section 0-2, it was mentioned that 'Bayesian statistics has some questionable aspects'.
What does this mean? It means that the probabilities handled by Bayesian statistics are "subjective".
In other words, a probability derived by Bayesian statistics is not an objective number but a subjective one that depends on "human psychology".
In that sense, Bayesian statistics has an "ideological" aspect.
That is why Bayesian statistics was once branded as "fake" and buried by a scientific community that prizes objectivity.
Unfortunately, most Bayesian statistics books do not cover this topic.
It is unclear whether this is because the authors dislike making the fact public or because they simply lack the knowledge, but in any case books that explain it explicitly are rare.
However, the "subjectivity" and "ideology" of Bayesian statistics are its essence and the source of its convenience.
Explaining Bayesian statistics while ignoring this means its essence never reaches the reader.
So in this book, the "subjectivity" and "ideology" of Bayesian statistics are laid bare and explained with nothing hidden.
In particular, the book carefully explains how it differs from standard statistics.
I'm sure many readers will applaud and say, "Bayesian statistics, that's amazing! How interesting!"
Product Details
- Date of issue: March 31, 2017
- Page count, weight, size: 300 pages | 535g | 153*225*18mm
- ISBN13: 9788965022718
- ISBN10: 8965022711
