Ready to unlock the power of your data? With the fourth edition of this comprehensive guide, you'll learn how to build and maintain reliable, scalable, distributed systems with Apache Hadoop. This book is ideal for programmers looking to analyze datasets of any size, and for administrators who want to set up and run Hadoop clusters.
You'll find illuminating case studies that demonstrate how Hadoop is used to solve specific problems. This edition includes new case studies, updates on Hadoop 2, a refreshed HBase chapter, and new chapters on Crunch and Flume. Author Tom White also suggests learning paths through the book.
Store large datasets with the Hadoop Distributed File System (HDFS)
Run distributed computations with MapReduce (see the minimal word-count sketch after this list)
Use Hadoop's data and I/O building blocks for compression, data integrity, serialization (including Avro), and persistence
Discover common pitfalls and advanced features for writing real-world MapReduce programs
Design, build, and administer a dedicated Hadoop cluster, or run Hadoop in the cloud
Load data from relational databases into HDFS, using Sqoop
Perform large-scale data processing with the Pig query language
Analyze datasets with Hive, Hadoop's data warehousing system
Take advantage of HBase for structured and semi-structured data, and ZooKeeper for building distributed systems
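As a taste of the MapReduce programming model the book covers, here is a minimal sketch of the classic word-count job written against the Hadoop 2 MapReduce API (org.apache.hadoop.mapreduce). The class name, comments, and command-line arguments are illustrative assumptions, not excerpts from the book.

// Minimal word-count sketch, assuming Hadoop 2 libraries are on the classpath.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in each input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // local aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output path must not already exist
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Packaged into a JAR, a job like this would typically be submitted with hadoop jar wordcount.jar WordCount <input> <output>, where both paths refer to HDFS directories.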