Blue Genie – Data Prep (Hadoop) : Two-Minute Video

Introduction

So, you’ve decided to use Hadoop. That’s great! If you’re new to Hadoop, you’ll probably want to know how it works and what its advantages are. I’ve put together this short video that walks through some of the basic concepts behind HDFS (the Hadoop Distributed File System) and a few of the tools you’ll use with it for data prep.

The video covers the following topics:

  • HDFS
  • Distributed Copy (distcp)
  • Spark
  • Pig

How does HDFS work?

HDFS is a way for nodes to share data. It’s a file system, but a distributed one: it stores data across multiple nodes.

In short, HDFS splits each file into fixed-size blocks (128 MB by default in recent releases; older Hadoop 1.x versions used 64 MB), replicates each block (three copies by default), and spreads the copies across the DataNodes in your cluster. When you read a file, the client asks the NameNode which DataNodes hold each block and then fetches the blocks directly from those nodes.
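
As a back-of-the-envelope illustration of the block model, here is a tiny sketch; the 1 GiB file size is just an example number, and the block size and replication factor are the common defaults.

```python
import math

# Example numbers only: a 1 GiB file, the default 128 MiB block size,
# and the default replication factor of 3.
file_size_mb = 1024
block_size_mb = 128
replication = 3

num_blocks = math.ceil(file_size_mb / block_size_mb)   # 8 blocks
stored_copies = num_blocks * replication               # 24 block replicas

print(f"{num_blocks} blocks, {stored_copies} replicas spread across DataNodes")
```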

When can you use the Hadoop distcp command to copy data from one cluster to another?

distcp (distributed copy) is Hadoop’s built-in tool for bulk copying. It launches a MapReduce job that copies files in parallel, and it works both within a single cluster and between clusters.

You can point distcp at any Hadoop-compatible file system URI (hdfs://, webhdfs://, s3a://, file://, and so on), so it can move data between clusters, between HDFS and object storage, or between directories on the same cluster. The main constraints for cross-cluster copies are that the two clusters must be able to reach each other over the network and, if you use plain hdfs:// URIs, must run compatible HDFS versions; the usual workaround for a version mismatch is to read the source over webhdfs://.

If those constraints get in the way, or you need to reshape the data while you move it, Hadoop’s built-in copy tools aren’t the best fit; a Spark job, covered in the next section, is a more flexible option.
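
For reference, here is a minimal sketch of a cross-cluster distcp run, wrapped in Python so it matches the other examples in this post. The NameNode hosts and paths are placeholders, not values from the video.

```python
import subprocess

# Sketch of a cross-cluster copy with distcp; the NameNode hosts and paths
# are placeholders. Plain hdfs:// URIs need compatible HDFS versions on
# both sides; use webhdfs:// on the source if the versions differ.
subprocess.run(
    [
        "hadoop", "distcp",
        "hdfs://namenode-a:8020/data/events",
        "hdfs://namenode-b:8020/data/events",
    ],
    check=True,
)
```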

How you can use Spark to copy data from one cluster to another.

To copy data from one cluster to another, you can write a Spark job. A Spark job is similar in spirit to a MapReduce job: you specify the input and output locations along with whatever processing logic you need. Both run in parallel across the nodes of the cluster (on a Hadoop cluster, Spark typically runs on YARN); the difference is that Spark keeps intermediate data in memory where it can and offers a much simpler API, which makes quick copy-and-transform jobs easier to write.

To get started with this example (a minimal PySpark sketch follows the list):

  • Create an RDD from your source dataset by calling sc.textFile() with the source path or URL. This returns an RDD of strings, one element per line of input.
  • Apply any transformations you need with map() (for example, lower-casing each line), then write the result to the destination path with saveAsTextFile().
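
Here is roughly what that looks like in PySpark. The NameNode hosts, ports, and paths below are placeholders for illustration only.

```python
from pyspark import SparkConf, SparkContext

# Minimal sketch: copy (and optionally transform) text data between clusters.
# The NameNode hosts below are placeholders -- substitute your own.
conf = SparkConf().setAppName("cross-cluster-copy")
sc = SparkContext(conf=conf)

# Read every line from the source cluster into an RDD of strings.
source = sc.textFile("hdfs://namenode-a:8020/data/events/")

# Optional transformation step, e.g. normalising each line to lower case.
cleaned = source.map(lambda line: line.lower())

# Write the result to the destination cluster.
cleaned.saveAsTextFile("hdfs://namenode-b:8020/data/events_copy/")

sc.stop()
```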

How do I get my data into my Hadoop cluster?

There are two common ways to get your data into Hadoop.

  • The first is the distcp command. distcp copies files between HDFS clusters (and between HDFS and other supported file systems), which is handy for pulling a dataset from a production environment into a development environment or vice versa. Because distcp runs as a distributed MapReduce job, each node copies only its own share of the files in parallel, so you avoid funnelling a large dataset through a single machine and running into its memory and I/O limits.
  • The second is to read the data with Spark’s built-in data sources, such as the plain-text reader or the Avro reader from the spark-avro package, and write it back out to HDFS. This gives people who aren’t comfortable writing MapReduce jobs a straightforward way in, though it does need some setup first, for example creating an S3 bucket and configuring credentials if the source files live on S3 (a short sketch follows this list).
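
Here is a minimal ingestion sketch using Spark’s built-in readers. The bucket and path names are placeholders, and it assumes the spark-avro package and S3A credentials are already configured on the cluster.

```python
from pyspark.sql import SparkSession

# Minimal sketch: ingest files into HDFS with Spark's built-in data sources.
# Bucket and path names are placeholders; S3A credentials and the
# spark-avro package are assumed to be configured on the cluster.
spark = SparkSession.builder.appName("ingest-to-hdfs").getOrCreate()

# Plain-text ingestion: one row per line.
logs = spark.read.text("s3a://example-bucket/raw/logs/")
logs.write.mode("overwrite").text("hdfs:///data/raw/logs/")

# Avro ingestion (requires spark-avro on the classpath), stored as Parquet.
events = spark.read.format("avro").load("s3a://example-bucket/raw/events/")
events.write.mode("overwrite").parquet("hdfs:///data/raw/events/")

spark.stop()
```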

What is a Pig script? Where can I find examples of code?

Pig is a high-level data-flow language for analyzing large data sets. Pig Latin scripts read procedurally, one transformation at a time, but the operators (LOAD, FILTER, GROUP, JOIN) will feel familiar if you know SQL. You can find the source code and example scripts at https://github.com/apache/pig
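
To give a flavour of what a Pig script looks like, here is the classic word count, submitted through a small Python launcher so it matches the other sketches in this post. The input and output paths are placeholders.

```python
import subprocess
import tempfile

# Classic word count in Pig Latin; the input and output paths are placeholders.
PIG_SCRIPT = """
lines   = LOAD 'hdfs:///data/input.txt' AS (line:chararray);
words   = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
grouped = GROUP words BY word;
counts  = FOREACH grouped GENERATE group AS word, COUNT(words) AS n;
STORE counts INTO 'hdfs:///data/wordcount';
"""

# Write the script to a temporary file and hand it to the pig client.
with tempfile.NamedTemporaryFile("w", suffix=".pig", delete=False) as f:
    f.write(PIG_SCRIPT)
    script_path = f.name

subprocess.run(["pig", "-f", script_path], check=True)
```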

For more information on these topics and much more, please visit http://www.bluegenie.com/

Conclusion

If you have any questions about the topics covered in this video, please feel free to contact us at [email protected] or by phone at (888) 817-6637.
