Thursday, March 28, 2013

Benchmarking a Hadoop Cluster




Benchmarks make good tests because they give you numbers that you can compare with other clusters, as a sanity check on whether your new cluster is performing roughly as expected. You can also tune a cluster using benchmark results to squeeze the best performance out of it.


To get the best results, you should run benchmarks on a cluster that is not being used by others. In practice, this is just before it is put into service and users start relying on it. Once users have scheduled periodic jobs on a cluster, it is generally impossible to find a time when the cluster is not being used.


Hadoop comes with several benchmarks that you can run very easily with minimal setup cost. Benchmarks are packaged in the test JAR file, and you can get a list of them, with descriptions, by invoking the JAR file with no arguments:

% hadoop jar $HADOOP_INSTALL/hadoop-*-test.jar

Most of the benchmarks show usage instructions when invoked with no arguments.

For example:

% hadoop jar $HADOOP_INSTALL/hadoop-*-test.jar TestDFSIO 

TestFDSIO.0.0.4
Usage: TestFDSIO -read | -write | -clean [-nrFiles N] [-fileSize MB] [-resFile
resultFileName] [-bufferSize Bytes]


Benchmarking HDFS with TestDFSIO

TestDFSIO tests the I/O performance of HDFS. It does this by using a MapReduce job as a convenient way to read or write files in parallel. Each file is read or written in a separate map task, and the output of the map is used for collecting statistics related to the file just processed. The statistics are accumulated in the reduce to produce a summary.

The following command writes 10 files of 1,000 MB each:

% hadoop jar $HADOOP_INSTALL/hadoop-*-test.jar TestDFSIO -write -nrFiles 10
-fileSize 1000


At the end of the run, the results are written to the console and also recorded in a local
file (which is appended to, so you can rerun the benchmark and not lose old results):
% cat TestDFSIO_results.log
----- TestDFSIO ----- : write
Date & time: Sun Apr 12 07:14:09 EDT 2009
Number of files: 10
Total MBytes processed: 10000
Throughput mb/sec: 7.796340865378244
Average IO rate mb/sec: 7.8862199783325195
IO rate std deviation: 0.9101254683525547
Test exec time sec: 163.387

The files are written under the /benchmarks/TestDFSIO directory.
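If you want to look at what the benchmark created, you can list that directory with the normal filesystem shell (the path below is the default location mentioned above):

% hadoop fs -ls /benchmarks/TestDFSIO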


To run a read benchmark, use the -read argument. Note that these files must already
exist (having been written by TestDFSIO -write):

% hadoop jar $HADOOP_INSTALL/hadoop-*-test.jar TestDFSIO -read -nrFiles 10
-fileSize 1000

Here are the results for a real run:

----- TestDFSIO ----- : read
Date & time: Sun Apr 12 07:24:28 EDT 2009
Number of files: 10
Total MBytes processed: 10000
Throughput mb/sec: 80.25553361904304
Average IO rate mb/sec: 98.6801528930664
IO rate std deviation: 36.63507598174921
Test exec time sec: 47.624

When you’ve finished benchmarking, you can delete all the generated files from HDFS using the -clean argument:

% hadoop jar $HADOOP_INSTALL/hadoop-*-test.jar TestDFSIO -clean


Benchmarking MapReduce with Sort

Hadoop comes with a MapReduce program that does a partial sort of its input. It is very useful for benchmarking the whole MapReduce system, as the full input dataset is transferred through the shuffle. The three steps are: generate some random data, perform the sort, then validate the results.

First, we generate some random data using RandomWriter. It runs a MapReduce job with 10 maps per node, and each map generates (approximately) 1 GB of random binary data, with keys and values of various sizes.



Here’s how to invoke RandomWriter (found in the example JAR file, not the test one) to write its output to a directory called random-data:

% hadoop jar $HADOOP_INSTALL/hadoop-*-examples.jar randomwriter random-data

Next, we can run the Sort program:


% hadoop jar $HADOOP_INSTALL/hadoop-*-examples.jar sort random-data sorted-data

The overall execution time of the sort is the metric we are interested in, but it’s instructive to watch the job’s progress via the web UI (http://jobtracker-host:50030/), where you can get a feel for how long each phase of the job takes.


As a final sanity check, we validate that the data in sorted-data is, in fact, correctly sorted:

% hadoop jar $HADOOP_INSTALL/hadoop-*-test.jar testmapredsort -sortInput random-data \
  -sortOutput sorted-data

This command runs the SortValidator program, which performs a series of checks on the unsorted and sorted data to check whether the sort is accurate. It reports the outcome to the console at the end of its run:
SUCCESS! Validated the MapReduce framework's 'sort' successfully.


Other Hadoop Benchmarks

MRBench (invoked with mrbench) runs a small job a number of times. It acts as a good counterpoint to sort, as it checks whether small job runs are responsive.
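For example, the following sketch repeats the small job 50 times (the -numRuns option is an assumption about the flags available in your version; check its usage message if the option is rejected):

% hadoop jar $HADOOP_INSTALL/hadoop-*-test.jar mrbench -numRuns 50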

NNBench (invoked with nnbench) is useful for load-testing namenode hardware.
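A rough sketch of an invocation that stresses file creation on the namenode (the -operation, -maps, and -numberOfFiles flags are assumptions about the options your NNBench build supports; invoke nnbench with no arguments to see its actual usage):

% hadoop jar $HADOOP_INSTALL/hadoop-*-test.jar nnbench -operation create_write -maps 12 -numberOfFiles 1000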

Gridmix is a suite of benchmarks designed to model a realistic cluster workload by
mimicking a variety of data-access patterns seen in practice. See the documentation
in the distribution for how to run Gridmix.








Wednesday, March 27, 2013

Hadoop Security Configuration




Authentication: the mechanism that assures Hadoop that the user seeking to perform an operation on the cluster is who she claims to be and therefore can be trusted.

HDFS file permissions provide only a mechanism for authorization, which controls what a particular user can do to a particular file.

However, authorization is not enough by itself, because the system is still open to abuse via spoofing by a malicious user who can gain network access to the cluster.


In 2009, a team of engineers at Yahoo! implemented secure authentication for Hadoop. In their design, Hadoop itself does not manage user credentials; instead, it relies on Kerberos, a mature open-source network authentication protocol, to authenticate the user. In turn, Kerberos doesn’t manage permissions: Kerberos says that a user is who he says he is; it’s Hadoop’s job to determine whether that user has permission to perform a given action.
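Turning this on is a matter of cluster configuration. The snippet below is a minimal core-site.xml sketch; it assumes the hadoop.security.authentication and hadoop.security.authorization properties used by Kerberos-enabled Hadoop releases, and a real deployment additionally needs Kerberos principals and keytab files configured for each Hadoop daemon:

<!-- core-site.xml (sketch): use Kerberos instead of the default "simple" authentication -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>

<!-- also enable service-level authorization checks -->
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>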


There are three steps that a client must take to access a service when using Kerberos, each of which involves a message exchange with a server:

1. Authentication : The client authenticates itself to the Authentication Server and
receives a timestamped Ticket-Granting Ticket (TGT).

2. Authorization : The client uses the TGT to request a service ticket from the Ticket Granting Server.

3. Service request : The client uses the service ticket to authenticate itself to the server that is providing the service the client is using. In the case of Hadoop, this might be the namenode or the jobtracker.





The authorization and service request steps are not user-level actions; the client performs these steps on the user’s behalf. The authentication step, however, is normally carried out explicitly by the user using the kinit command, which will prompt for a password. However, this doesn’t mean you need to enter your password every time you run a job or access HDFS, since TGTs last for 10 hours by default (and can be renewed for up to a week). It’s common to automate authentication at operating system login time, thereby providing single sign-on to Hadoop. 

In cases where you don’t want to be prompted for a password (for running an unattended MapReduce job, for example), you can create a Kerberos keytab file using the ktutil command. A keytab is a file that stores passwords and may be supplied to kinit with the -t option.
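For example, here is a minimal sketch using MIT Kerberos tools, assuming the hadoop-user@LOCALDOMAIN principal that appears later in this post; the encryption type you add must match what your KDC actually issues:

% ktutil
ktutil:  addent -password -p hadoop-user@LOCALDOMAIN -k 1 -e aes256-cts
Password for hadoop-user@LOCALDOMAIN:
ktutil:  wkt hadoop-user.keytab
ktutil:  quit
% kinit -k -t hadoop-user.keytab hadoop-user@LOCALDOMAIN

The unattended job can then pick up its ticket from the keytab instead of prompting anyone for a password.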


With Kerberos authentication turned on, let’s see what happens when we try to copy a local file to HDFS:

% hadoop fs -put quangle.txt .

10/07/03 15:44:58 WARN ipc.Client: Exception encountered while connecting to the server: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]

Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:8020 failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]




The operation fails because we don’t have a Kerberos ticket. We can get one by authenticating to the KDC, using kinit:

% kinit
Password for hadoop-user@LOCALDOMAIN: password
% hadoop fs -put quangle.txt .
% hadoop fs -stat %n quangle.txt
quangle.txt

And we see that the file is successfully written to HDFS. Notice that even though we carried out two filesystem commands, we only needed to call kinit once, since the Kerberos ticket is valid for 10 hours (use the klist command to see the expiry time of your tickets and kdestroy to invalidate your tickets). After we get a ticket, everything works just as it normally would.
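For instance, to check when the tickets in your credentials cache expire and then to discard them (output not shown here, since its exact format varies between Kerberos implementations):

% klist
% kdestroy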


Instead of using the three-step Kerberos ticket exchange protocol to authenticate each call, which would present a high load on the KDC on a busy cluster, Hadoop uses delegation tokens to allow later authenticated access without having to contact the KDC again. Delegation tokens are created and used transparently by Hadoop on behalf of users, so there’s no action you need to take as a user beyond using kinit to sign in, but it’s useful to have a basic idea of how they are used.


A delegation token is generated by the server (the namenode in this case) and can be thought of as a shared secret between the client and the server. On the first RPC call to the namenode, the client has no delegation token, so it uses Kerberos to authenticate, and as a part of the response it gets a delegation token from the namenode. In subsequent calls, it presents the delegation token, which the namenode can verify (since it generated it using a secret key), and hence the client is authenticated to the server. When it wants to perform operations on HDFS blocks, the client uses a special kind of delegation token, called a block access token, that the namenode passes to the client in response to a metadata request. 

The client uses the block access token to authenticate itself to datanodes. This is possible only because the namenode shares its secret key used to generate the block access token with datanodes (which it sends in heartbeat messages), so that they can verify block access tokens. Thus, an HDFS block may be accessed only by a client with a valid block access token from a namenode. This closes the security hole in unsecured Hadoop where only the block ID was needed to gain access to a block.










