Comparing Real-Time Analytics and Batch Processing Applications with Hadoop MapReduce and Spark

Apache Spark is an engine for fast, large-scale data processing. It claims to run programs up to 100x faster than Hadoop MapReduce in memory, and up to 10x faster on disk. The introduction of the Hadoop MapReduce framework greatly simplified the problem of big data management and analysis in a cost-efficient way: with the help of commodity hardware, we can apply several algorithms to large volumes of data. But MapReduce falls short when implementing complex, multi-stage algorithms. Through this article, we try to dig deep to understand why Apache Spark upstages the Apache Hadoop MapReduce framework.

Unified Architecture

The rise of big data mandated the development of sophisticated tools that run faster and are easy to use. We need such tools for various applications: interactive query processing, ad-hoc queries on real-time streaming data, and sophisticated processing of historical data for better decision making. Spark addresses all of these with a single unified engine, as the sketch below illustrates.
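To make the unified model concrete, here is a minimal sketch (an illustrative example, not from the original post) in which one Spark application serves both a batch-style transformation and an ad-hoc SQL query over the same data. It assumes a Spark 1.x-era setup with SQLContext; the file name people.txt and the Person case class are hypothetical.

```scala
// A minimal sketch, assuming Spark 1.x with SQLContext available.
// "people.txt" and the Person case class are hypothetical examples.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

case class Person(name: String, age: Int)

object UnifiedSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("UnifiedSketch").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Batch step: parse a text file of "name,age" records into an RDD.
    val people = sc.textFile("people.txt")
                   .map(_.split(","))
                   .map(p => Person(p(0), p(1).trim.toInt))

    // Interactive step: the same data answers an ad-hoc SQL query,
    // without leaving the application or re-reading stable storage.
    people.toDF().registerTempTable("people")
    sqlContext.sql("SELECT name FROM people WHERE age >= 21").show()

    sc.stop()
  }
}
```

The point of the sketch is that batch processing, interactive queries, and (with Spark Streaming) real-time data all run on the same engine and share the same in-memory datasets, rather than requiring separate specialized systems.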



Introduction to Big Data with Apache Spark (Part-2)

In part 1 of this series we saw a brief overview of Apache Spark, the Resilient Distributed Dataset (RDD), and the Spark ecosystem. In this article, we will take a closer look at Spark's primary, fault-tolerant memory abstraction for in-memory cluster computing: the Resilient Distributed Dataset (i.e., RDD).
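Before digging into the details, a minimal sketch of the abstraction may help. The snippet below (an illustrative example, not from the original post) builds an RDD, applies lazy transformations, and caches the result so later actions reuse the in-memory data; the input file errors.log is hypothetical.

```scala
// A minimal sketch of the RDD abstraction, assuming a local Spark setup.
// "errors.log" is a hypothetical input file.
import org.apache.spark.{SparkConf, SparkContext}

object RddSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("RddSketch").setMaster("local[*]"))

    // Transformations are lazy: they only record the lineage of the RDD.
    val errors = sc.textFile("errors.log")
                   .filter(_.contains("ERROR"))

    // cache() asks Spark to keep the partitions in memory after the
    // first action; if a partition is lost, it is rebuilt from lineage.
    errors.cache()

    // Actions trigger computation; the second one reuses the cached data.
    println(errors.count())
    println(errors.filter(_.contains("timeout")).count())

    sc.stop()
  }
}
```

Note that fault tolerance here comes from lineage (the recorded chain of transformations), not from replicating the data itself, which is the key design choice discussed below.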

Motivation

MapReduce, one of the most popular parallel data processing paradigms, and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that cannot efficiently handle complex, iterative machine learning and graph processing algorithms, or interactive and ad-hoc queries. All of these workloads need one thing that MapReduce lacks: efficient primitives for data sharing. In MapReduce, data is shared across different jobs (or different stages of a single job) through stable storage. As discussed in the previous article, MapReduce stores intermediate results on disk, so these reads and writes are very slow. Moreover, existing storage abstractions rely on data replication or update-log replication for fault tolerance, which is considerably costly for data-intensive applications.
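The data-sharing gap shows up most clearly in iterative code. Below is a hedged sketch of an iterative computation with RDDs: the input is cached once, and every iteration reuses the in-memory copy, whereas an acyclic MapReduce pipeline would write each iteration's output to stable storage and read it back. The toy least-squares update is a hypothetical placeholder, not an algorithm from the post.

```scala
// A hedged sketch of iterative in-memory data sharing with RDDs;
// the toy least-squares update below is a hypothetical placeholder.
import org.apache.spark.{SparkConf, SparkContext}

object IterativeSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("IterativeSketch").setMaster("local[*]"))

    // Load the input once and keep it in memory across all iterations.
    val xs = sc.parallelize(Seq(1.0, 2.0, 3.0, 4.0)).cache()

    // Fit y = w * x to targets y = x by gradient descent; w converges to 1.
    var w = 0.0
    for (_ <- 1 to 10) {
      // Each pass reads the cached RDD from memory; an acyclic
      // MapReduce pipeline would round-trip through disk (HDFS) here.
      val gradient = xs.map(x => 2 * x * (w * x - x)).mean()
      w -= 0.05 * gradient
    }
    println(s"fitted w = $w")

    sc.stop()
  }
}
```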
