Analyzing Wikipedia Text with pySpark

Spark improves usability by offering a rich set of APIs that make it easy for developers to write code; Spark programs are typically about 5x smaller than equivalent MapReduce programs. The Spark Python API (PySpark) exposes the Spark programming model to Python. To learn the basics of Spark, read through the Scala programming guide; it should be easy to follow even if you don't know Scala. PySpark provides an easy-to-use programming abstraction and parallel runtime; we can think of it as: "Here's an operation, run it on all of the data".
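
For instance, here is a minimal sketch of that model (the master URL, app name and sample data below are made up for illustration, not taken from the original post):

    from pyspark import SparkContext

    # "local[*]" runs Spark on this machine using all available cores.
    sc = SparkContext("local[*]", "HelloPySpark")

    # "Here's an operation, run it on all of the data": len() is applied
    # to every element of the distributed collection in parallel.
    words = sc.parallelize(["spark", "makes", "parallelism", "simple"])
    print(words.map(len).collect())  # [5, 5, 11, 6]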

To use Spark, developers write a driver program that implements the high-level control flow of their application and launches various operations in parallel on the nodes of the cluster.

The typical life cycle of a Spark program is as follows (a short sketch tying the steps together appears after the list):

  • Create RDDs from some external data source or parallelize a collection in your driver program.
  • Lazily transform the base RDDs into new RDDs using transformations.
  • Cache some of those RDDs for future reuse.
  • Perform actions to execute parallel computation and to produce results.
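
Putting the four steps together, a driver program for a simple word count might look like this sketch (the file name "wikipedia.txt" and the local master are placeholder assumptions):

    from pyspark import SparkContext

    sc = SparkContext("local[*]", "WikipediaWordCount")

    # 1. Create a base RDD from an external data source
    #    ("wikipedia.txt" is a placeholder path).
    lines = sc.textFile("wikipedia.txt")

    # 2. Lazily transform it into new RDDs; no computation happens yet.
    words = lines.flatMap(lambda line: line.split())
    counts = words.map(lambda w: (w.lower(), 1)).reduceByKey(lambda a, b: a + b)

    # 3. Cache the RDD we plan to reuse.
    counts.cache()

    # 4. Actions trigger the parallel computation and produce results.
    print(counts.count())                                 # number of distinct words
    print(counts.takeOrdered(10, key=lambda kv: -kv[1]))  # ten most frequent words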



Introduction to Big Data with Apache Spark (Part-2)

In part-1 of this series we saw a brief overview of Apache Spark, the Resilient Distributed Dataset (RDD) and the Spark ecosystem. In this article, we will take a closer look at Spark's primary fault-tolerant memory abstraction for in-memory cluster computing: the Resilient Distributed Dataset (RDD).

Motivation

MapReduce, one of the most popular parallel data-processing paradigms, and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data-flow model that is ill-suited to complex, iterative machine learning and graph-processing algorithms, as well as to interactive or ad-hoc queries. All of these workloads need one thing that MapReduce lacks: efficient primitives for data sharing. In MapReduce, data is shared across different jobs (or different stages of a single job) through stable storage. As discussed in the previous article, MapReduce stores intermediate results on disk, so these reads and writes are very slow. Moreover, existing storage abstractions rely on data replication or update-log replication for fault tolerance, which is considerably costly for data-intensive applications.
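
To make the contrast concrete, here is a hedged sketch of the kind of iterative workload RDDs target: the data is loaded once, cached in memory, and reused across iterations instead of being re-read from stable storage on every pass (the file "ratings.txt" and the thresholds are invented for illustration):

    from pyspark import SparkContext

    sc = SparkContext("local[*]", "IterativeDataSharing")

    # Load once and cache: after the first action, later passes are served
    # from memory, the data-sharing primitive that MapReduce lacks.
    values = sc.textFile("ratings.txt").map(float).cache()

    # Each iteration reuses the same in-memory RDD.
    for threshold in range(1, 6):
        print(threshold, values.filter(lambda v, t=threshold: v >= t).count())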


Introduction to Big Data with Apache Spark (Part-1)

With the advent of new technologies, the number of data sources has grown rapidly. Web server logs, machine log files, user activity on social media, clickstreams from websites and many other sources have caused an exponential growth of data. Individually this content may not be very large, but taken across billions of users it produces terabytes or petabytes of data. For example, Facebook, with more than 950 million users, collects 500 terabytes (TB) of data every day. Such a massive amount of data, which is not only structured but also unstructured and semi-structured, falls under the umbrella term Big Data.

Big data matters more today because, in the past, we collected a lot of data and built models to predict the future (forecasting), whereas now we collect data and build models to understand what is happening right now (nowcasting). A phenomenal amount of data is collected, but only a tiny fraction is ever analysed. The term Data Science refers to deriving knowledge from big data efficiently and intelligently.

The common tasks involved in data science are:

  1. Dig through the available data to find useful data to analyse
  2. Clean and prepare that data
  3. Define a model
  4. Evaluate the model
  5. Repeat until we get statistically good results, and hence a good model
  6. Use this model for large scale data processing


Working with DB2 Database Grants

As a database administrator and developer, one often needs to restore databases from one database server to another. In DB2, when a database is created, the DB2 instance owner becomes the owner of the database by default. So when you restore a database backup across database servers where the source and the target servers have the same instance-owner name, everything works fine. But when the source and target servers have different instance owners, the database restores successfully, yet you can run into privilege/authorization issues when you try to access the database objects, the most common errors being SQL0551N and SQL1092N.

This problem can be solved by creating the source server's instance owner on the target database server, connecting to the database as that user, and then granting DBADM authority to the target instance owner. But there is a simpler, more elegant way to do this: the DB2_RESTORE_GRANT_ADMIN_AUTHORITIES registry variable.
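
As a rough illustration of the manual route (every connection detail and user name below is a placeholder, not from the original post), the grant could be issued from Python with the ibm_db driver after connecting as the newly created instance owner:

    import ibm_db

    # Connect to the restored database as the source instance owner,
    # who owns the database after the restore (all values are placeholders).
    conn = ibm_db.connect(
        "DATABASE=MYDB;HOSTNAME=target-host;PORT=50000;PROTOCOL=TCPIP;"
        "UID=srcinst;PWD=secret;", "", "")

    # Grant DBADM to the target server's instance owner so it can access
    # the restored database objects.
    ibm_db.exec_immediate(conn, "GRANT DBADM ON DATABASE TO USER tgtinst")
    ibm_db.close(conn)

With the registry-variable route, setting DB2_RESTORE_GRANT_ADMIN_AUTHORITIES (via db2set) before the restore makes DB2 grant the administrative authorities to the restoring user automatically, avoiding the manual steps above.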


IDUG NA 2015: Lessons from a First Time Speaker

Finally, there it was, the most exciting and most awaited week of the year. I got a chance to live my dream: to be at the International DB2 User Group (IDUG) Tech Conference in Philadelphia. I must confess, it's not just about being at a conference; it's about the whole new experience that I will carry with me ever after. Being a first-time attendee and speaker, the blog posts on the IDUG experience from Ember Crooks (at db2commerce.com) and Pavan Kristipati (at db2talk.com) really helped me. Slide decks from past IDUG conferences also served as a good reference and starting point. I reached out for tips to the people I admire in the DB2 community: Mike Krafick, Ember Crooks and Pavan Kristipati. They happily extended a helping hand, and Mike actually reviewed my presentation and helped me improve it. Thank you Mike! Here is a summary of the tips these experts offered; I hope it will be of help to first-time attendees and speakers.
