Language: English
75,50 €*
Free shipping via Post / DHL
Currently unavailable
Description
The missing expert-led manual for the AWS ecosystem - go from foundations to building data engineering pipelines effortlessly
Purchase of the print or Kindle book includes a free eBook in PDF format.
Key Features:
Learn about common data architectures and modern approaches to generating value from big data
Explore AWS tools for ingesting, transforming, and consuming data, and for orchestrating pipelines
Learn how to architect and implement data lakes and data lakehouses for big data analytics
Book Description:
Knowing how to architect and implement complex data pipelines is a highly sought-after skill. Data engineers are responsible for building these pipelines that ingest, transform, and join raw datasets - creating new value from the data in the process.
Amazon Web Services (AWS) offers a range of tools to simplify a data engineer's job, making it the preferred platform for performing data engineering tasks.
This book will take you through the services and the skills you need to architect and implement data pipelines on AWS. You'll begin by reviewing important data engineering concepts and some of the core AWS services that form a part of the data engineer's toolkit. You'll then architect a data pipeline, review raw data sources, transform the data, and learn how the transformed data is used by various data consumers. The book also teaches you about populating data marts and data warehouses along with how a data lakehouse fits into the picture. Later, you'll be introduced to AWS tools for analyzing data, including those for ad-hoc SQL queries and creating visualizations. In the final chapters, you'll understand how the power of machine learning and artificial intelligence can be used to draw new insights from data.
By the end of this AWS book, you'll be able to carry out data engineering tasks and implement a data pipeline on AWS independently.
What You Will Learn:
Understand data engineering concepts and emerging technologies
Ingest streaming data with Amazon Kinesis Data Firehose
Optimize, denormalize, and join datasets with AWS Glue Studio
Use Amazon S3 events to trigger a Lambda process to transform a file (see the sketch after this list)
Run complex SQL queries on data lake data using Amazon Athena
Load data into a Redshift data warehouse and run queries
Create a visualization of your data using Amazon QuickSight
Extract sentiment data from a dataset using Amazon Comprehend
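As an illustration of the S3-to-Lambda pattern listed above, here is a minimal sketch of a Lambda handler triggered by an S3 ObjectCreated event. It is not taken from the book; the bucket name, the customer_id column, and the CSV layout are assumptions made purely for the example.

```python
# Hypothetical sketch: an AWS Lambda handler invoked by an S3 ObjectCreated
# event. It reads the uploaded CSV, drops rows missing a "customer_id" value,
# and writes the cleaned file to a separate "clean zone" bucket. The bucket
# name and column name are placeholders, not values from the book.
import csv
import io
import urllib.parse

import boto3

s3 = boto3.client("s3")
CLEAN_BUCKET = "example-clean-zone-bucket"  # assumed target bucket


def lambda_handler(event, context):
    # Each record describes one object created in the source (landing) bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Download the raw CSV into memory.
        raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        reader = csv.DictReader(io.StringIO(raw))

        # Simple transformation: keep only rows that have a customer_id.
        cleaned = io.StringIO()
        writer = csv.DictWriter(cleaned, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row.get("customer_id"):
                writer.writerow(row)

        # Write the transformed file to the clean-zone bucket under the same key.
        s3.put_object(
            Bucket=CLEAN_BUCKET,
            Key=key,
            Body=cleaned.getvalue().encode("utf-8"),
        )

    return {"statusCode": 200}
```

In practice, this sketch assumes an S3 event notification (or EventBridge rule) on the landing bucket that invokes the function, and an IAM role for the function with read access to the source bucket and write access to the target bucket.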
Who this book is for:
This book is for data engineers, data analysts, and data architects who are new to AWS and looking to extend their skills to the AWS cloud. Anyone who is new to data engineering and wants to learn about the foundational concepts while gaining practical experience with common data engineering services on AWS will also find this book useful.
A basic understanding of big data-related topics and Python coding will help you get the most out of this book but is not needed. Familiarity with the AWS console and core services is also useful but not necessary.
About the Author
Gareth Eagar has worked in the IT industry for over 25 years, starting in South Africa, then working in the United Kingdom, and now based in the United States. In 2017, he joined Amazon Web Services (AWS) as a solutions architect, working with enterprise customers in the NYC metro area. Gareth has become a recognized subject matter expert in building data lakes on AWS, and in 2019 he launched the Data Lake Day educational event at the AWS Lofts in NYC and San Francisco. He has also delivered a number of public talks and webinars on big data topics, and in 2020 he transitioned to the AWS Professional Services organization as a senior data architect, helping customers architect and build complex data pipelines.
Details
| Year of publication | 2021 |
|---|---|
| Genre | Computer science |
| Subject area | Science & technology |
| Medium | Paperback |
| Contents | Softcover |
| ISBN-13 | 9781800560413 |
| ISBN-10 | 1800560419 |
| Language | English |
| Features / extras | Paperback |
| Binding | Softcover |
| Author | Eagar, Gareth |
| Publisher | Packt Publishing |
| Dimensions | 235 x 191 x 26 mm |
| By | Gareth Eagar |
| Publication date | 29.12.2021 |
| Weight | 0.892 kg |