Apache Spark(TM) is an open-source, distributed, general-purpose cluster computing framework with a (mostly) in-memory data processing engine that can do ETL, analytics, machine learning, and graph processing on large volumes of data at rest (batch processing) or in motion (stream processing), with rich, concise high-level APIs for the programming languages Scala, Python, Java, R, and SQL.
In contrast to Hadoop's two-stage, disk-based MapReduce computation engine, Spark's multi-stage (mostly) in-memory computing engine runs most computations in memory, and hence typically provides better performance for certain classes of applications, e.g. iterative algorithms or interactive data mining.
- Mastering Apache Spark by Jacek Laskowski
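To give a flavor of the "rich, concise high-level APIs" the description refers to, below is a minimal word-count sketch using Spark's DataFrame/Dataset API in Scala. The input path `data.txt`, the application name, and the `local[*]` master are assumptions chosen for a local run, not details from this page.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{explode, split}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Local session for demonstration; on a cluster the master is
    // normally supplied by spark-submit rather than hard-coded here.
    val spark = SparkSession.builder()
      .appName("WordCount")
      .master("local[*]")
      .getOrCreate()

    import spark.implicits._

    // "data.txt" is a hypothetical input path.
    val lines = spark.read.textFile("data.txt")

    // Split each line into words, then count occurrences of each word.
    val counts = lines
      .select(explode(split($"value", "\\s+")).as("word"))
      .groupBy("word")
      .count()

    counts.show()
    spark.stop()
  }
}
```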
See Apache Hadoop.
Libraries: Spark SQL, Spark Streaming (Structured Streaming), MLlib (machine learning), GraphX (graph processing)
News | Stack Overflow Q&A | Community/Mailing Lists | Documentation | FAQ | IRC