On-device AI
Description
Book Introduction
A step-by-step guide to mobile machine learning using ML Kit, Core ML, TFLite, and more!

With mobile devices becoming the primary computing device of the day, machine learning is no longer an option but a necessity for mobile developers.
This book focuses on explaining how to quickly get started with mobile machine learning development using various frameworks.
Quickly grasp the concepts through the friendly explanations of the author, a core developer at Google, then build practical skills by creating mobile computer vision and text-processing models for both iOS and Android with tools such as ML Kit, Core ML, and TFLite.
Join the change that is leading the new mobile development paradigm through this book!

Table of Contents
CHAPTER 1: Introduction to Artificial Intelligence and Machine Learning
_1.1 What is artificial intelligence?
_1.2 What is machine learning?
__1.2.1 Moving from Traditional Programming to Machine Learning
__1.2.2 How can computers learn?
__1.2.3 Differences between traditional programming and machine learning
_1.3 Creating a mobile model
_1.4 In conclusion

CHAPTER 2: Introduction to Computer Vision
_2.1 Using Neurons for Vision
__2.1.1 First Classifier: Distinguishing Clothing
__2.1.2 Data: Fashion MNIST
__2.1.3 Fashion MNIST Model Architecture
__2.1.4 Coding the Fashion MNIST Model
_2.2 Transfer Learning for Computer Vision
_2.3 In conclusion

CHAPTER 3: Introduction to ML Kit
_3.1 Android Face Detection Application
__Step 1: Create a project in Android Studio
__Step 2: Add and configure the ML Kit library
__Step 3: Create the user interface
__Step 4: Create an assets folder and add images
__Step 5: Load the base image into the UI
__Step 6: Call the face detector
__Step 7: Draw bounding boxes
_3.2 iOS Face Detection Application
__Step 1: Create an Xcode project
__Step 2: Use CocoaPods and a Podfile
__Step 3: Create the user interface
__Step 4: Write the application logic
_3.3 In conclusion

CHAPTER 4: Building Computer Vision Applications with ML Kit on Android
_4.1 Image Classification and Label Finding
__Step 1: Create the application and set up ML Kit
__Step 2: Create the user interface
__Step 3: Create an assets folder and add images
__Step 4: Load an image into the image view
__Step 5: Write the button handler code
__Additional steps
_4.2 Object Detection
__Step 1: Create the application and import ML Kit
__Step 2: Create the activity layout XML
__Step 3: Load an image into the image view
__Step 4: Set the object detector options
__Step 5: Handle button interaction
__Step 6: Draw bounding boxes
__Step 7: Label the objects
_4.3 Object Detection and Object Tracking in Video
__4.3.1 Check the layout
__4.3.2 GraphicOverlay class
__4.3.3 Outputting the video screen obtained from the camera
__4.3.4 ObjectAnalyzer class
__4.3.5 ObjectGraphic class
__4.3.6 Conclusion
_4.4 In conclusion

CHAPTER 5: Building Text Processing Applications with ML Kit on Android
_5.1 Extracting entity names
__Step 1: Create the application
__Step 2: Create the activity layout
__Step 3: Write the entity extraction code
__Step 4: Finish up
_5.2 Handwriting Recognition Model
__Step 1: Create the application
__Step 2: Create a surface for handwriting
__Step 3: Parse the ink with ML Kit
_5.3 Smart Reply Model
__Step 1: Create the application
__Step 2: Set up a simulated conversation
__Step 3: Generate the smart reply
_5.4 In conclusion

CHAPTER 6: Building Computer Vision Applications with ML Kit on iOS
_6.1 Image Classification and Label Finding
__Step 1: Create the application project in Xcode
__Step 2: Create the Podfile
__Step 3: Set up the storyboard
__Step 4: Edit the view controller code to use ML Kit
_6.2 Object Detection with ML Kit on iOS
__Step 1: Get started
__Step 2: Create the UI in the storyboard
__Step 3: Create a subview for drawing bounding boxes
__Step 4: Run object detection
__Step 5: Handle the callback
__Additional step: Combine object detection and image classification
__Additional step: Object detection and tracking in video
_6.3 In conclusion

CHAPTER 7: Building Text Processing Applications with ML Kit on iOS
_7.1 Extracting entity names
__Step 1: Create a new project and add the ML Kit pods
__Step 2: Create the storyboard and add actions and outlets
__Step 3: Get text view input in the view controller
__Step 4: Initialize the model
__Step 5: Extract entity names from the text
_7.2 Handwriting Recognition
__Step 1: Create the application and add the ML Kit pods
__Step 2: Create the storyboard, actions, and outlets
__Step 3: Strokes, points, and ink
__Step 4: Receive user touch input
__Step 5: Initialize the model
__Step 6: Recognize the ink
_7.3 Smart Reply
__Step 1: Create the application project and import ML Kit
__Step 2: Create the storyboard, outlets, and actions
__Step 3: Create the conversation
__Step 4: Get the smart reply
_7.4 In conclusion

CHAPTER 8: Understanding TFLite in Depth
_8.1 What is TFLite?
_8.2 Getting Started with TFLite
__8.2.1 Saving the model
__8.2.2 Converting the model
__8.2.3 Testing the Model with a Standalone Interpreter
_8.3 Creating an Android Application Using TFLite
_8.4 Importing TFLite files
_8.5 Writing Kotlin Code for Model Inference
_8.6 Advanced Content
_8.7 Creating iOS Applications Using TFLite
__Step 1: Create a basic iOS application
__Step 2: Add the TFLite library to the project
__Step 3: Create the user interface
__Step 4: Create and initialize the model inference class
__Step 5: Run inference
__Step 6: Add the model to the application
__Step 7: Add the UI logic
_8.8 Beyond 'Hello World': Image Processing
_8.9 Examining Model Optimization
__8.9.1 Quantization
__8.9.2 Using representative data
_8.10 In conclusion

CHAPTER 9: Creating a Custom Model
_9.1 Creating a model with TFLite Model Maker
_9.2 Building Models with Cloud AutoML
__9.2.1 Using AutoML Vision Edge
_9.3 Building a Model with TensorFlow and Transfer Learning
_9.4 Building a Language Model
__9.4.1 Creating a Language Model with Model Maker
_9.5 In conclusion

CHAPTER 10: Using Custom Models in Android
_10.1 Deploying a Model on Android
_10.2 Creating an Image Classification Application with a Model Maker Model
_10.3 Using Model Maker models with ML Kit
_10.4 Using Language Models
_10.5 Creating an Android Application for Language Classification
__10.5.1 Creating a layout file
__10.5.2 Writing Activity Code
_10.6 In conclusion

CHAPTER 11: Using Custom Models in iOS
_11.1 Connecting a Model to iOS
_11.2 Custom Image Classifier Model
__Step 1: Create the application and add a TFLite pod
__Step 2: Create the UI and image assets
__Step 3: Load and explore the image assets
__Step 4: Load the model
__Step 5: Convert images into input tensors
__Step 6: Run inference with the tensors
_11.3 Using Custom Models with ML Kit
_11.4 Building Natural Language Processing Applications with Swift
__Step 1: Load the vocabulary
__Step 2: Convert a sentence into a sequence
__Step 3: Implement array extensions for unsafe data handling
__Step 4: Copy the array into a data buffer
__Step 5: Run inference on the data and process the results
_11.5 In conclusion

CHAPTER 12: Productionizing Your Application with Firebase
_12.1 Why should I use Firebase's custom model hosting?
_12.2 Creating multiple versions of a model
_12.3 Using Firebase Model Hosting
__Step 1: Create a Firebase project
__Step 2: Use custom model hosting
__Step 3: Create a basic Android application
__Step 4: Add Firebase to the application
__Step 5: Download the model from Firebase Model Hosting
__Step 6: Use Remote Config
__Step 7: Read Remote Config from the application
__Additional steps
_12.4 In conclusion

CHAPTER 13: Create ML and Core ML for Simple iOS Applications
_13.1 Building a Core ML Image Classifier with Create ML
__13.1.1 Creating a Core ML Application Built with a Create ML Model
__13.1.2 Adding MLModel files
__13.1.3 Running Inference
_13.2 Creating a Text Classifier with Create ML
_13.3 Using Language Models in Applications
_13.4 In conclusion

CHAPTER 14: Accessing Cloud Models from Mobile Applications
_14.1 Installing TensorFlow Serving
__14.1.1 Installing with Docker
__14.1.2 Installing directly on Linux
_14.2 Building and Serving the Model
_14.3 Accessing the Server Model with Android
_14.4 Accessing the Server Model with iOS
_14.5 In conclusion

CHAPTER 15: Ethics, Fairness, and Privacy in Mobile Applications
_15.1 Ethics, Fairness, and Privacy through Responsible Artificial Intelligence
__15.1.1 Defining the Problem Responsibly
__15.1.2 Preventing data bias
__15.1.3 Building and Training a Model
__15.1.4 Evaluating the Model
_15.2 Google AI Principles
_15.3 In conclusion


Publisher's Review
Google's "celebrity" developer Laurence Moroney introduces machine learning tailored for mobile developers.

Machine learning is already integrated into our daily lives.
Of course, this also applies to the mobile devices we use every day.
As the name suggests, 'on-device AI' refers to collecting data and performing computations directly on the device itself, rather than in the cloud.
On-device AI is a technology with significant advantages, including enhanced security as sensitive user information does not pass through servers, and reduced network latency.
As mobile devices become increasingly important in our daily lives, on-device AI will become the new path forward for machine learning technology.

In this book, Google celebrity author Laurence Moroney presents a wide range of sample code to help mobile developers easily apply machine learning to their mobile applications. It will be especially valuable to mobile developers because each example is walked through step by step, with sample code for both Android and iOS.
Are you ready to embark on a journey into the world of mobile AI, introduced by the author and translator who lead mobile machine learning at Google? Let's go now!

Key Contents
● Implement iOS and Android machine learning models
● Build mobile applications using ML Kit, Core ML, and TFLite for iOS and Android
● Understand the difference between cloud inference and on-device inference, along with the related technologies and tools
● Learn the technologies and tools available for both high-level and low-level APIs
● Learn privacy and ethics best practices for mobile machine learning
Product Details
- Publication date: November 1, 2022
- Page count, weight, size: 388 pages | 704 g | 183 × 235 × 16 mm
- ISBN13: 9791169210423
- ISBN10: 1169210422

