Analyzing Wikipedia Text with pySpark

Spark improves usability by offering a rich set of APIs that make it easy for developers to write code; Spark programs are typically several times shorter than their MapReduce equivalents. The Spark Python API (PySpark) exposes the Spark programming model to Python. To learn the basics of Spark, read through the Scala programming guide; it should be easy to follow even if you don’t know Scala. PySpark provides an easy-to-use programming abstraction and parallel runtime; we can think of it as: “Here’s an operation, run it on all of the data.”
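
As a small illustration of that abstraction, here is a sketch assuming the `sc` SparkContext that the interactive pyspark shell provides; the data is just an in-memory range:

    # Assuming `sc` is the SparkContext provided by the interactive pyspark shell.
    # "Here's an operation, run it on all of the data": the same function is
    # applied to every element of the distributed dataset in parallel.
    numbers = sc.parallelize(range(10))
    squares = numbers.map(lambda x: x * x)   # lazily transformed RDD
    print(squares.collect())                 # action: [0, 1, 4, 9, ...]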

To use Spark, developers write a driver program that implements the high-level control flow of their application and launches various operations in parallel on the nodes of the cluster.
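
A minimal sketch of such a driver program follows; the application name and the local master URL are illustrative assumptions:

    from pyspark import SparkConf, SparkContext

    # The driver program configures and creates a SparkContext, which
    # connects to the cluster and launches parallel operations on it.
    conf = SparkConf().setAppName("WikipediaAnalysis").setMaster("local[*]")
    sc = SparkContext(conf=conf)

    # ... define RDDs and run parallel operations here ...

    sc.stop()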

The typical life cycle of a Spark program is as follows (sketched in code after the list):

  • Create RDDs from some external data source or parallelize a collection in your driver program.
  • Lazily transform the base RDDs into new RDDs using transformations.
  • Cache some of those RDDs for future reuse.
  • Perform actions to execute parallel computation and to produce results.
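
Put together, a word count over a Wikipedia text dump follows that life cycle. This is a sketch only: the file path, application name, and local master are assumptions, not part of the original example.

    from operator import add
    from pyspark import SparkConf, SparkContext

    sc = SparkContext(conf=SparkConf().setAppName("WikipediaWordCount").setMaster("local[*]"))

    # 1. Create a base RDD from an external data source (file path is hypothetical).
    lines = sc.textFile("wikipedia_articles.txt")

    # 2. Lazily transform the base RDD into new RDDs.
    words  = lines.flatMap(lambda line: line.lower().split())
    counts = words.map(lambda word: (word, 1)).reduceByKey(add)

    # 3. Cache an RDD that will be reused across actions.
    counts.cache()

    # 4. Perform actions to trigger the parallel computation and produce results.
    print(counts.takeOrdered(10, key=lambda kv: -kv[1]))  # ten most frequent words
    print(counts.count())                                 # number of distinct words

    sc.stop()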
