Big Data – Buzz Words: What is HDFS – Day 8 of 21

In yesterday’s blog post we learned what MapReduce is. In this article we will take a quick look at another of the four most important buzz words that go around Big Data – HDFS.

What is HDFS?

HDFS stands for Hadoop Distributed File System, and it is the primary storage system used by Hadoop. It provides high-throughput access to data across Hadoop clusters. It is usually deployed on low-cost commodity hardware, and in commodity hardware deployments server failures are very common. For this very reason HDFS is built to have high fault tolerance, while still providing high data transfer rates between compute nodes.

HDFS splits big data into smaller blocks and distributes them across different nodes. It also copies each block multiple times to different nodes. Hence, when any node holding the data crashes, the system is automatically able to use a copy of the data from a different node and continue the process. This is the key feature of the HDFS system.
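The idea of splitting a file into blocks and replicating each block can be sketched in a few lines of Python. This is purely an illustration, not Hadoop code; the tiny block size, the round-robin placement, and node names like `dn1` are assumptions made for the demo (real HDFS uses 128 MB blocks and rack-aware placement).

```python
BLOCK_SIZE = 4      # HDFS default is 128 MB; tiny here so the demo is readable
REPLICATION = 3     # HDFS default replication factor

def split_into_blocks(data, block_size=BLOCK_SIZE):
    # Cut the file's bytes into fixed-size blocks.
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_replicas(num_blocks, nodes, replication=REPLICATION):
    # Assign each block to `replication` nodes; round-robin stands in
    # for HDFS's real rack-aware placement policy.
    return {b: [nodes[(b + r) % len(nodes)] for r in range(replication)]
            for b in range(num_blocks)}

blocks = split_into_blocks(b"abcdefghij")
print(blocks)   # [b'abcd', b'efgh', b'ij']
print(place_replicas(len(blocks), ["dn1", "dn2", "dn3", "dn4"]))
```

Losing any single node still leaves two copies of every block, which is exactly the property the paragraph above describes.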

Architecture of HDFS

The architecture of HDFS is a master/slave architecture. An HDFS cluster always consists of a single NameNode. This NameNode is the master server; it manages the file system namespace and regulates access to files. In addition to the NameNode there are multiple DataNodes, usually one DataNode per server in the cluster. In HDFS a big file is split into one or more blocks, and those blocks are stored in a set of DataNodes.

The primary task of the NameNode is to open, close, or rename files and directories and to regulate access to the file system, whereas the primary task of the DataNode is to read from and write to the file system. The DataNode is also responsible for the creation, deletion, and replication of data blocks based on instructions from the NameNode.

In reality, the NameNode and DataNode are pieces of software, written in the Java language, designed to run on commodity machines.

Visual Representation of HDFS Architecture

[Image: hdfs-1 – visual representation of the HDFS architecture]

Let us understand how HDFS works with the help of the diagram. The Client App, or HDFS Client, connects to the NameNode as well as to the DataNodes. Client App access to the DataNodes is regulated by the NameNode: it allows the Client App to connect directly to the appropriate DataNodes. A big data file is divided into multiple data blocks (let us assume those data blocks are A, B, C, and D). The Client App will later on write those data blocks directly to the DataNodes. The Client App does not have to write directly to all the nodes; it just has to write to any one of them, and the NameNode decides on which other DataNodes the data will have to be replicated. In our example the Client App writes directly to DataNode 1 and DataNode 3, and the data blocks are then automatically replicated to the other nodes. Finally, the information about which data block is placed on which DataNode is reported back to the NameNode.
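This write path can be sketched as a standalone Python function. Again this is only an illustration under assumed names; in a real cluster the client streams the block to the first DataNode and the nodes forward it along a replication pipeline, while here we simply copy the bytes into each chosen node's dictionary.

```python
def choose_datanodes(block_id, datanodes, replication=3):
    # The NameNode's decision: which nodes should hold this block.
    start = block_id % len(datanodes)
    return [datanodes[(start + i) % len(datanodes)] for i in range(replication)]

def write_block(block_id, data, storage, replication=3):
    # The client writes the block once; replication places copies on the
    # remaining chosen nodes, and the locations are reported back.
    targets = choose_datanodes(block_id, sorted(storage), replication)
    for node in targets:
        storage[node][block_id] = data
    return targets

# Four empty DataNodes, then write blocks A, B, C, D.
storage = {name: {} for name in ["dn1", "dn2", "dn3", "dn4"]}
locations = {block_id: write_block(block_id, chunk, storage)
             for block_id, chunk in enumerate([b"A", b"B", b"C", b"D"])}
```

After the loop, every block exists on three of the four nodes, and `locations` plays the role of the metadata the NameNode keeps.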

High Availability During Disaster

Now, as multiple DataNodes hold the same data blocks, in the case of any DataNode facing a disaster the entire process will continue: another DataNode will assume the role of serving the specific data blocks that were on the failed node. This design provides very high tolerance to disaster and provides high availability.
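The failover just described can be shown with a small read function: try each replica in turn and skip nodes that are down. As before, this is a toy sketch with made-up node names, not Hadoop's actual client logic.

```python
def read_block(block_id, locations, storage, failed=()):
    # Try each replica the NameNode knows about; skip failed nodes.
    for node in locations[block_id]:
        if node not in failed and block_id in storage.get(node, {}):
            return storage[node][block_id]
    raise IOError("all replicas of block %r are unavailable" % block_id)

storage = {"dn1": {0: b"A"}, "dn2": {0: b"A"}, "dn3": {}}
locations = {0: ["dn1", "dn2"]}              # NameNode metadata for block 0
print(read_block(0, locations, storage, failed={"dn1"}))   # b'A'
```

Even with `dn1` marked as failed, the read succeeds from `dn2`; only losing every replica of a block makes it unavailable.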

If you notice, there is only a single NameNode in our architecture. If that node fails, our entire Hadoop application will stop performing, as it is the single node where we store all the metadata. As this node is very critical, it is usually replicated on another cluster as well as on another data rack. Though that replicated node is not operational in the architecture, it has all the necessary data to perform the task of the NameNode in case the NameNode fails.

The entire Hadoop architecture is built to function smoothly even when there are node failures or hardware malfunctions. It is built on the simple premise that the data is so big that it is impossible to come up with a single piece of hardware which can manage it properly. We need lots of commodity (cheap) hardware to manage our big data, and hardware failure is part and parcel of commodity servers. To reduce the impact of hardware failure, the Hadoop architecture is built to work around non-functioning hardware.


In tomorrow’s blog post we will discuss the importance of the relational database in Big Data.

Reference: Pinal Dave


6 Comments

  • very useful…

  • Thanks Dave, you have been writing really great information for the last week. But I have a doubt: what will happen if the master node fails? Is there any procedure for that too? Or if it is assured that the master node will never fail in the system, then why will it never fail, and how is it implemented differently from the slave nodes to avoid failures on the master? Thanks a lot.

    • As mentioned above, a non-operational copy of the master node (NameNode) needs to be maintained on a separate server rack. This replicated node will have all the necessary data required to perform the task of the NameNode in case it fails.

  • Phaneendra Subnivis
    October 15, 2013 1:33 pm

    Hi Arijit Saha,

    Let me share what I know to answer your question.
    While we set up the entire Hadoop cluster with commodity machines, the NameNode is expected to run on a server-class machine which has HA built into it. This is one of the solutions, and every Big Data solution provider like IBM, Cloudera, Hortonworks, Microsoft etc. has its own set of solutions fitting its platform.


  • NameNode was written as NameSpace in the text above.

    And, thank you for this informative introduction

