Let us start with a very interesting quote about Big Data:
"Decoding the human genome originally took 10 years to process; now it can be achieved in one week." – The Economist
This blog post is written in response to the T-SQL Tuesday topic of Big Data. This is a very interesting subject. Data is growing every single day. I remember my first computer, which had a 1 GB hard drive. I told my dad that I would never need any more storage, and that we were good for the next 10 years. Within two years I had bought a much larger hard drive, and today I have a NAS at home that can hold 2 TB, plus a few file hosting accounts in the cloud as well. Well, the point is, the amount of data any individual deals with has increased significantly.
There was a time of floppy drives. Today, some autocorrect software does not even recognize that word, while USB drives, pen drives, and jump drives are common names across the industry. It is a race, and I really do not know where it will stop.
In the same way, the amount of data has grown so wildly that a relational database can no longer handle the processing of it all. A conventional RDBMS faces real challenges in processing and analyzing data beyond a certain very large size. Big Data is an amount of data so large that it is difficult or impossible for a traditional relational database to handle. The current moving target for Big Data starts at terabytes and reaches into exabytes and zettabytes.
There are two very famous companies using Hadoop to process their large volumes of data – Facebook and Yahoo. The Hadoop platform can solve problems where the deeper analysis is complex and the data is unstructured, but the work still needs to be done in a reasonable time.
Hadoop is architected to run on a large number of machines in a 'shared nothing' architecture: the servers are fully independent, sharing no disk or memory, and Hadoop maintains and manages the data among all of them. Individual users cannot directly access the data, as it is divided among these servers. Additionally, a single piece of data is replicated on multiple servers, which keeps the data available in case of a disaster or a single machine failure. Hadoop uses the MapReduce software framework to return unified data.
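To make the 'shared nothing' idea concrete, here is a minimal sketch in Python of how a file might be split into blocks and each block copied to several independent servers. This is a hypothetical illustration of the concept, not Hadoop's actual code; the 64 MB block size and replication factor of 3 simply mirror the HDFS defaults.

```python
BLOCK_SIZE = 64 * 1024 * 1024  # HDFS default block size (64 MB)
REPLICATION = 3                # HDFS default replication factor

def place_blocks(file_size, nodes, block_size=BLOCK_SIZE, replicas=REPLICATION):
    """Split a file into fixed-size blocks and assign each block to
    `replicas` distinct nodes, so a single machine can fail safely."""
    assert replicas <= len(nodes), "need at least as many nodes as replicas"
    num_blocks = -(-file_size // block_size)  # ceiling division
    placement = {}
    for block in range(num_blocks):
        start = block % len(nodes)  # rotate so the load spreads evenly
        placement[block] = [nodes[(start + r) % len(nodes)]
                            for r in range(replicas)]
    return placement

if __name__ == "__main__":
    nodes = ["server1", "server2", "server3", "server4"]
    file_size = 200 * 1024 * 1024  # a 200 MB file -> 4 blocks
    for block, servers in place_blocks(file_size, nodes).items():
        print(f"block {block} stored on {servers}")
```

Because each block lives on three different servers, losing any one server never loses data. The real HDFS placement policy is rack-aware and smarter than this round-robin, but the basic idea is the same.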
MapReduce is conceptually quite simple, but very powerful when put to work within the Hadoop framework. There are two major steps: 1) Map and 2) Reduce.
In the Map step, the master node takes the input, divides it into smaller chunks, and hands those chunks to the worker nodes. In the Reduce step, the framework collects all the small solutions to the problem and returns them as one unified answer. Both of these steps use functions that rely on key-value pairs. The processing runs on the various nodes in parallel, which is what brings the framework its speed.
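As an illustration, here is a minimal word-count sketch in plain Python, not the actual Hadoop API, showing how key-value pairs flow through the Map, shuffle, and Reduce steps; the function names here are my own.

```python
from collections import defaultdict

def map_step(chunk):
    """Map: emit a (word, 1) key-value pair for every word in the chunk."""
    for word in chunk.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Group all values by key, as the framework does between Map and Reduce."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_step(key, values):
    """Reduce: collapse all values for one key into a single answer."""
    return (key, sum(values))

if __name__ == "__main__":
    chunks = ["big data is big", "data is growing every day"]
    # In Hadoop, each chunk would be mapped on a different worker node in parallel.
    mapped = [pair for chunk in chunks for pair in map_step(chunk)]
    results = [reduce_step(key, values) for key, values in shuffle(mapped).items()]
    print(sorted(results))
```

In real Hadoop, the map and reduce calls run as tasks on many worker nodes at once, and the shuffle moves the intermediate pairs between them over the network; that parallelism is exactly where the faster results come from.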
Pigs and Hives
Writing raw MapReduce jobs is not for everyone, and this is where Pig and Hive come in: Pig provides a scripting language (Pig Latin) for expressing data flows, while Hive offers a SQL-like query language (HiveQL), so SQL professionals will feel right at home.
Microsoft and Big Data
Microsoft is committed to making Hadoop accessible to a broader class of end users, developers, and IT professionals. You can accelerate your Hadoop deployment through the simplicity of Hadoop on Windows and the use of familiar Microsoft products:
- Apache Hadoop connector for Microsoft SQL Server
- Apache Hadoop connector for Microsoft Parallel Data Warehouse
Here is the link for further reading.
I cannot end this blog post without mentioning the one man from whom I heard about Big Data for the very first time.
… and of course – Happy Valentine's Day!
Reference: Pinal Dave (https://blog.sqlauthority.com)