
LLM Prompt Utilization Textbook
Description
Book Introduction
The moment you read this book and apply it, the AI you use changes!
Whether for work or daily life, it's hard to find someone who doesn't use generative AI.
But what's the point of having a tool called generative AI if you don't get the answers you expect? Ultimately, the success or failure of AI depends on how you formulate the questions—in other words, the prompts.
This book provides examples of real-world responses from various large-scale language models (LLMs), including ChatGPT, Gemini, and Claude, and systematically explains the design of situational prompts and the underlying thinking patterns.
Beyond simply listing examples, it explains how prompts work, so you can adapt the questions to your own specific situation.
Covering a wide range of topics, from basic applications like task automation, content creation, and data analysis to advanced prompts like coding, API integration, and leveraging open source models, the book offers prompts that you can apply directly or develop in your own unique way.
If you want to grow into someone who asks smart questions, not just someone who simply uses tools, then you should definitely read this book.
Contents
Chapter 1: Prompt Engineering and Large-Scale Language Models
1.1 What is Prompt Engineering?
__1.1.1 The Trend of Generative AI
__1.1.2 'Prompt', the key to generative AI
__1.1.3 What is prompt engineering?
__1.1.4 Why is prompt engineering necessary?
__1.1.5 Will prompt engineering soon become obsolete?
__1.1.6 Prompt Engineering Research
__1.1.7 Prompt Engineering Beyond Large-Scale Language Models
1.2 Advantages and Disadvantages of Large-Scale Language Models
__1.2.1 What can you do with large-scale language models?
__1.2.2 Four Key Tasks Large-Scale Language Models Do Well
__1.2.3 More specific tasks
__1.2.4 What Large-Scale Language Models Can't Do and Do Wrong
1.3 The working principles and development process of large-scale language models
__1.3.1 History of the emergence of large-scale language models
__1.3.2 Three AI booms and the road to deep learning
__1.3.3 From the emergence of deep learning to generative AI
__1.3.4 Recurrent Neural Networks
__1.3.5 Transformer
__1.3.6 BERT
__1.3.7 GPT
1.4 Types of large-scale language models and how to use them
__1.4.1 How to use large-scale language models?
__1.4.2 Using as a web service
__1.4.3 Using Web API
__1.4.4 Installing and using open source LLM
1.5 How to Use ChatGPT
__1.5.1 ChatGPT and its developer OpenAI
__1.5.2 How to sign up and use ChatGPT
Chapter 2: Introduction to Prompt Engineering
2.1 Let's look at the basic operation of large-scale language models.
__2.1.1 What is the 'probability' of a large-scale language model?
__2.1.2 What is a plausible sentence that begins with 'in the big house'?
__2.1.3 When 'plausibility' turns into 'hallucination'
__2.1.4 Parameter temperature that specifies diversity
__2.1.5 Parameter top_p that limits diversity
2.2 Q&A format of the prompt
__2.2.1 Q&A templates can be used to elicit answers to questions.
__2.2.2 Let's ask about the color of the sea through Q&A
__2.2.3 Let's ask general questions that require reading comprehension.
__2.2.4 Q&A ensures stable responses from large-scale language models
__2.2.5 Let's see if we can solve general knowledge problems with Q&A.
__2.2.6 Let's ask creative questions with Q&A: Where should we travel?
__2.2.7 A variant of Q&A format that elicits multiple answers
__2.2.8 Let's combine Q and number
2.3 Instructions and Input Format
__2.3.1 Designing prompts including instructions and input
__2.3.2 Task of generating continuous text
__2.3.3 Changing the tone to polite
__2.3.4 Rephrasing sentences to make them easier to understand
__2.3.5 English-Korean translation work
__2.3.6 Selecting spicy food from the options
__2.3.7 Selecting a red object from the list
__2.3.8 Considerations on delimiters
__2.3.9 Markdown Grammar
Chapter 3: Basic Operations of Large-Scale Language Models
3.1 Summary Task: Summarizing with Style
__3.1.1 Large-Scale Language Models and Summary Tasks
__3.1.2 Be conscious of the purpose of the summary
__3.1.3 Key points of the summary
__3.1.4 Let's specify the summary style
__3.1.5 Information Omissions and Distortions
__3.1.6 Summary with Sources and Citations
__3.1.7 Token Restrictions
__3.1.8 What if I want to summarize longer text?
3.2 Inference Task: Classifying Text and Analyzing Sentiment
__3.2.1 Large-Scale Language Models and Inference Tasks
__3.2.2 Let's classify the text
__3.2.3 Let's analyze emotions
__3.2.4 Let's evaluate and score the sentences
__3.2.5 Let's infer the results based on the information
3.3 Transformation Tasks: Changing Tone, Correcting Sentences, and Converting Data Formats
__3.3.1 Conversion task
__3.3.2 Translation work
__3.3.3 Sentence Reconstruction Task
__3.3.4 Sentence proofreading and editing
__3.3.5 Data Format Conversion
3.4 Expansion Tasks: Verifying Story Creation and Code Generation Abilities
__3.4.1 Expansion tasks
__3.4.2 Story Creation
__3.4.3 Idea Generation Framework
__3.4.4 Code Generation
3.5 Other tasks: chatting and extracting, etc.
__3.5.1 Other tasks
__3.5.2 Dialogue Tasks: Games and Role-Playing
__3.5.3 Information extraction task
Chapter 4: Few-Shot Prompts and Performance-Enhancing Techniques
4.1 Zero-shot, one-shot, and few-shot prompts
__4.1.1 What are zero-shot, one-shot, and few-shot?
__4.1.2 Zero-shot, one-shot, and few-shot comparison
__4.1.3 What should I provide for the few-shot prompt?
__4.1.4 Comparison of sentences generated with zero-shot and few-shot
4.2 Chain of Thought
__4.2.1 What is a chain of thought?
__4.2.2 Let's use Chain of Thought (CoT)
__4.2.3 Let's use Chain of Thought (CoT) for the 'three-person rock-paper-scissors problem'
__4.2.4 What is zero-shot CoT?
__4.2.5 Let's solve the wallpaper calculation problem with zero-shot CoT.
__4.2.6 The winning phrase that elicits zero-shot CoT
__4.2.7 Thought-provoking phrases for large-scale language models
__4.2.8 Let's take a deep breath with large-scale language models
__4.2.9 Emotion Prompts
4.3 Self-consistency
__4.3.1 What is self-consistency?
__4.3.2 An example of using self-consistency in computational problems
__4.3.3 Let's use self-consistency to determine whether a message is important.
4.4 Tree of Thought
__4.4.1 The Tree of Thought
__4.4.2 Let's use the tree of thought to infer where the ball is.
__4.4.3 Let's put a Tree of Thought (ToT) in the prompt
__4.4.4 Let's solve the Game of 24
4.5 The MAGI System
__4.5.1 Let's gather the wisdom of many people: Imitating the MAGI system
__4.5.2 The MAGI System Answers: Which Language is Better, Rust or Go?
__4.5.3 Let's transform the personality of the MAGI system.
__4.5.4 Let's try the reading comprehension problem
4.6 Virtual Script Engine, PAL
__4.6.1 What is a virtual script engine?
__4.6.2 Let's act according to the virtual program
__4.6.3 Let's make it act like a virtual script engine
__4.6.4 Let's simulate pseudocode
__4.6.5 Let's clearly specify the procedure like a program
__4.6.6 Let's create data using SQL
__4.6.7 PAL: Program-Aided Language Models
4.7 Mock prompt
__4.7.1 What is a mock prompt?
__4.7.2 Mock Prompt Practice
__4.7.3 Using mock prompts to improve sentence quality
Chapter 5: A Collection of 10-Times-More-Helpful Prompts Using Templates
5.1 Profile and Resume Creation Prompts
__5.1.1 Prompt to think of an attention-grabbing social media profile
__5.1.2 Resume Writing Using Large-Scale Language Models
__5.1.3 Writing a Cover Letter
__5.1.4 Create Better Resumes Using Large-Scale Language Models
5.2 Naming Prompts: From Pet Names to Blog Titles
__5.2.1 Things to keep in mind when naming
__5.2.2 What name is good?
__5.2.3 Let's name our pets
__5.2.4 Let's come up with a blog article title
__5.2.5 Variable names and function names
5.3 Idea generation prompts using idea generation methods
__5.3.1 Idea Generation Method
__5.3.2 Idea Generation Framework
__5.3.3 The Basics of Idea Generation: Brainstorming
__5.3.4 Idea generation using SCAMPER
__5.3.5 Let's use the Osborn checklist
__5.3.6 Let's try the Six Thinking Hats technique
__5.3.7 Idea generation using the persona technique
5.4 Task Automation 1: Excel, File Batch Processing Prompt
__5.4.1 Large-Scale Language Models and Programming
__5.4.2 Large-scale language model familiar with Excel usage
__5.4.3 Let's automate Excel work with large-scale language models.
__5.4.4 Read an Excel contact file and create a PDF invitation
__5.4.5 Let's unpack 100 ZIP files.
__5.4.6 Automatically back up files that haven't been updated for more than a year
5.5 Task Automation 2: Web Browser Control, Scraping Creation Prompt
__5.5.1 How to automatically control web browser behavior?
__5.5.2 Let's capture a screenshot of a specified web page.
__5.5.3 Log in to the membership website and download the CSV file.
__5.5.4 Add all works of a specific user to favorites
Chapter 6: Using Web APIs and Open Source LLM
6.1 How to Use the OpenAI ChatGPT API
__6.1.1 Advantages of using ChatGPT as an API
__6.1.2 Preparing to use the ChatGPT API
__6.1.3 Things to check to use ChatGPT API safely
__6.1.4 Obtain an API key from the OpenAI platform
__6.1.5 Register the API key in an environment variable
__6.1.6 Let's install the OpenAI Python package
__6.1.7 Let's run the simplest program
__6.1.8 Let's chat with the ChatGPT API
__6.1.9 Let's use the Azure OpenAI service
6.2 How to Use Open-Source Large-Scale Language Models
__6.2.1 Open-source large-scale language models that are easy to try
__6.2.2 Basic usage of Colab
__6.2.3 Let's try Meta's large-scale language model Llama 3.
__6.2.4 Easily run large-scale language models with llama.cpp
__6.2.5 Let's try Vicuna
__6.2.6 Let's install it on the local PC
Chapter 7: Advanced Prompt Engineering with AGI as the Goal
7.1 Self-Consistency with the API
__7.1.1 Self-consistency review and the program you will create here
__7.1.2 Calculate age using self-consistency
__7.1.3 Let's determine the importance of email using self-consistency.
__7.1.4 Let's calculate the number of cakes sold using self-consistency.
7.2 The MAGI System with the API
__7.2.1 Let's configure the MAGI ToT system with API
__7.2.2 Let's decide on the lunch menu with MAGI ToT
__7.2.3 Let's discuss owning vs. renting with MAGI ToT
7.3 Plan-and-Solve Prompts
__7.3.1 What are Plan-and-Solve prompts?
__7.3.2 Specific methods of Plan-and-Solve prompting
__7.3.3 Let's solve the candle length calculation problem.
__7.3.4 Let's create a Python program with Plan-and-Solve prompts.
__7.3.5 Let's automatically run Python programs with the API.
__7.3.6 Let's solve the pudding and chocolate combination problem with Plan-and-Solve prompts.
7.4 Grounding: Leveraging External Resources, Such as Search
__7.4.1 Grounding
__7.4.2 Asking a large-scale language model what time it is
__7.4.3 Provide external tools for large-scale language models
__7.4.4 Let's use the Wikipedia API for search
7.5 Linking with vector databases
__7.5.1 Combining a large-scale language model with a vector database
__7.5.2 Embedding and Vector Databases
__7.5.3 Summarizing Long Documents Using Vector Databases
__7.5.4 Summarizing long documents without using a vector database
__7.5.5 Let's create a QA system that combines search and vector databases.

Publisher's Review
A single phrase changes the answer generative AI gives!
Situational prompt design and patterns, all in one book!
Generative AI has now become an everyday tool used by everyone.
But haven't you ever felt disappointed because the answer wasn't what you expected? That's when you need to check whether the prompt you used was appropriate or whether you could have asked a more complete question.
Even with the same tool, the results can be completely different depending on how you structure your questions.
This book explains how to write prompts that are practically useful when using various large-scale language models (LLMs), such as ChatGPT, Claude, and Gemini.
It systematically teaches you how to design specific and situation-appropriate prompts, so you can apply them 100% in your work and daily life.
We cover a wide range of practical tips for tasks like task automation, coding, content creation, and analytics, as well as advanced prompt design techniques using web APIs and open-source LLMs.
We've compiled a variety of modifier expressions separately, so you can modify the prompts included in the book to suit your situation and create your own prompts.
Recommended for those who want to learn how to ask smarter questions and experience satisfying answers.
While translating this book, I had several memorable experiences running the examples one by one.
For example, the moment a model accurately identified the location of an uploaded photo was chilling, and seeing each model respond differently to the same prompt gave me a sense of each model's unique personality and characteristics.
Techniques like using pseudocode to guide thinking and summarizing information from Wikipedia were also helpful.
In particular, the various idea generation frameworks, such as brainstorming, and the systematic process for approaching problem solving were so interesting that I wanted to apply them to various tasks, even outside of the LLM program.
While a variety of practical LLM books have been published recently, if you build a solid foundation with this book, you'll be able to understand any other book much more easily.
-Kim Seong-hun
Product Details
- Date of issue: August 31, 2025
- Page count, weight, size: 500 pages | 908 g | 185 × 235 × 20 mm
- ISBN13: 9791140715558