Language:
English
57,15 €*
Free shipping via Post / DHL
Delivery time: 1-2 weeks
Categories:
Description
Dramatically accelerate the building process of complex models using PyTorch to extract the best performance from any computing environment
Key Features
- Reduce the model-building time by applying optimization techniques and approaches
- Harness the computing power of multiple devices and machines to boost the training process
- Focus on model quality by quickly evaluating different model configurations
- Purchase of the print or Kindle book includes a free PDF eBook
Book Description
This book, written by an HPC expert with over 25 years of experience, guides you through enhancing model training performance using PyTorch. You'll learn how model complexity impacts training time, discover the levels at which performance can be tuned, and use PyTorch features, specialized libraries, and efficient data pipelines to optimize training on CPUs and accelerators. You'll also reduce model complexity, adopt mixed precision, and harness the power of multicore systems and multi-GPU environments for distributed training. By the end, you'll be equipped with techniques and strategies to speed up training and focus on building stunning models.
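As a rough illustration of one of the levers the description mentions, the sketch below (not code from the book) shows a PyTorch DataLoader configured so that CPU-side batch preparation overlaps with GPU work; the dataset, batch size, and worker count are placeholder choices.

```python
# Illustrative sketch only, not code from the book: a DataLoader set up so that
# CPU-side batch preparation overlaps with GPU work. Dataset, batch size, and
# worker count are placeholder choices.
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # Synthetic stand-in data; any torch Dataset works the same way.
    dataset = TensorDataset(
        torch.randn(10_000, 128),
        torch.randint(0, 10, (10_000,)),
    )

    loader = DataLoader(
        dataset,
        batch_size=256,
        shuffle=True,
        num_workers=4,            # worker processes prepare batches in parallel
        pin_memory=True,          # page-locked memory speeds up host-to-GPU copies
        persistent_workers=True,  # keep workers alive between epochs
    )

    device = "cuda" if torch.cuda.is_available() else "cpu"
    for inputs, targets in loader:
        # non_blocking copies can overlap with computation when memory is pinned
        inputs = inputs.to(device, non_blocking=True)
        targets = targets.to(device, non_blocking=True)
        # ... forward/backward pass would go here ...

if __name__ == "__main__":
    main()
```

Tuning num_workers against the available CPU cores is one of the knobs this kind of pipeline exposes.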
What you will learn
- Compile the model to train it faster (see the sketch after this list)
- Use specialized libraries to optimize the training on the CPU
- Build a data pipeline to boost GPU execution
- Simplify the model through pruning and compression techniques
- Adopt automatic mixed precision without penalizing the model's accuracy
- Distribute the training step across multiple machines and devices
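To make the compilation and mixed-precision bullets concrete, here is a minimal sketch (again, not code from the book) that combines torch.compile, available since PyTorch 2.0, with automatic mixed precision in a toy training loop; the model, data, and hyperparameters are placeholders.

```python
# Illustrative sketch only, not code from the book: torch.compile combined with
# automatic mixed precision (AMP) in a toy training loop. Model, data, and
# hyperparameters are placeholders.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
model = torch.compile(model)  # JIT-compiles the model (PyTorch 2.0+)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)  # rescales the loss for fp16 gradients

inputs = torch.randn(64, 128, device=device)
targets = torch.randint(0, 10, (64,), device=device)

for _ in range(10):
    optimizer.zero_grad(set_to_none=True)
    # autocast runs eligible ops in lower precision (e.g. float16) on the GPU
    with torch.autocast(device_type=device, enabled=use_amp):
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

The gradient scaler is what keeps small float16 gradients from underflowing, which is how automatic mixed precision avoids penalizing accuracy.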
Who this book is for
This book is for intermediate-level data scientists who want to learn how to leverage PyTorch to speed up the training of their machine learning models by applying a set of optimization strategies and techniques. To make the most of this book, familiarity with the basic concepts of machine learning, PyTorch, and Python is essential. However, prior knowledge of distributed computing, accelerators, or multicore processors is not required.
Table of Contents
- Deconstructing the Training Process
- Training Models Faster
- Compiling the Model
- Using Specialized Libraries
- Building an Efficient Data Pipeline
- Simplifying the Model
- Adopting Mixed Precision
- Distributed Training at a Glance
- Training with Multiple CPUs
- Training with Multiple GPUs
- Training with Multiple Machines
About the Author
Dr. Maicon Melo Alves is a senior systems analyst and academic professor specializing in High Performance Computing (HPC) systems. Over the last five years, he has become interested in understanding how HPC systems are used to support Artificial Intelligence applications. To deepen his knowledge of this topic, he completed an MBA in Data Science at the Pontifícia Universidade Católica do Rio de Janeiro (PUC-RIO) in 2021. He has over 25 years of experience in IT infrastructure and has worked with HPC systems at Petrobras, the Brazilian state-owned energy company, since 2006. He obtained his [...] degree in Computer Science from the Fluminense Federal University (UFF) in 2018 and has published three books as well as papers in international HPC journals.
Details
Year of publication: 2024
Subject area: Programming Languages
Genre: Computer Science
Category: Science & Technology
Medium: Paperback
ISBN-13: 9781805120100
ISBN-10: 1805120107
Language: English
Format / supplement: Paperback
Binding: Paperback / softcover
Author: Alves, Maicon Melo
Publisher: Packt Publishing
Dimensions: 235 x 191 x 13 mm
By: Maicon Melo Alves
Publication date: April 30, 2024
Weight: 0.438 kg