
SQL-On-Hadoop Evaluation by Pearson | Qubole


This is a guest post written by Sumit Arora, Lead Big Data Architect at Pearson, and Asgar Ali, Senior Architect at Happiest Minds Technologies Pvt. Ltd.

About Pearson

Pearson is the world's leading learning company, with 40,000 employees in more than 80 countries working to help people of all ages make measurable progress in their lives through learning. Visit www.pearson.com for more information.


Data Lake at Pearson

An initiative to build a data lake at Pearson for housing a wide variety of data is currently underway. This data lake will contain different kinds of data, ranging from product activity to operational data, including sales/registrations, survey results, and so on. It will act as a research environment for data scientists and analysts to query, investigate, and mine the data in a single location. Apart from this, it will also enable organizational reporting for business intelligence, product efficacy analytics, and content analytics.

What was the Ask?

In order to fulfill the research needs of data scientists and analysts, it is vital that the system offers the ability to run ad hoc interactive queries and supports ODBC/JDBC interfaces to integrate with products like R and Tableau (or other visualization tools). It was therefore essential to evaluate various interactive query options and select a framework which fulfilled our requirements. This required us to evaluate not only the SQL execution engine but also the underlying file format that is optimal for our needs.

Evaluation Strategy

There are various interactive query options available in the big data ecosystem. Some of these are proprietary solutions (like Amazon Redshift) while others are open source (Spark SQL, Presto, Impala, and so on). Similarly, there are a number of file formats to choose from: Parquet, Avro, ORC, and so on. We "theoretically" evaluated five of these products (Redshift, Spark SQL, Impala, Presto, and H2O) based on the documentation and feedback available on the internet and decided to shortlist two of them (Presto and Spark SQL) for further evaluation. Some of the reasons behind this decision were:

  1. Redshift requires data to be moved out of S3 into its own storage format, and we did not want this additional step in our processing pipeline. Apart from this, it is a proprietary technology, and we wanted to keep our stack open source.
  2. Impala does not support complex data types or querying data on S3.

We also decided to test Spark SQL and Presto on four different file formats: plain text without compression, Avro with Snappy compression, Parquet with Snappy compression, and ORC with Snappy compression. In order to fairly compare the two candidates, we decided to deploy both of them on the same Yarn cluster and use the same data set and queries for evaluation purposes.

Spark SQL and Presto can both connect to the Hive metastore and query tables defined in Hive, so we decided to create and load tables in Hive and then query them from Spark SQL and Presto. It is also a best practice to partition tables in Hive to limit the data set scanned by a query, so we partitioned the Hive tables by year, month, and day. A minimal sketch of this setup follows.
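
As an illustration, the DDL below is a minimal sketch of such a partitioned, Parquet-backed Hive table; the table name, columns, and S3 location are hypothetical, not our actual schema.

  -- Hypothetical entity table, partitioned by date and stored as Snappy-compressed Parquet
  CREATE EXTERNAL TABLE product_activity (
    user_id   STRING,
    event     STRING,
    duration  BIGINT
  )
  PARTITIONED BY (year INT, month INT, day INT)
  STORED AS PARQUET
  LOCATION 's3://example-bucket/datalake/product_activity/'
  TBLPROPERTIES ('parquet.compression'='SNAPPY');

  -- A query that filters on the partition columns only scans the matching partitions
  SELECT count(*) FROM product_activity WHERE year = 2015 AND month = 1;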

Data Set and Queries

It was essential to pick the right data set and queries for the evaluation to make sure that we were testing real-world Pearson use cases. We used six domain entities for our tests and created Hive tables for each one of them. Our queries ranged from single-table, needle-in-a-haystack queries with 'like', 'order by', and 'where' clauses to complex table joins with 'order by' and 'where' clauses. We also ran aggregation queries using 'count', 'sum', and 'average' functions; illustrative query shapes are sketched below.
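
To give a flavor of these query shapes, here are two hedged examples; the entity and column names are illustrative only, not our actual schema.

  -- Needle-in-a-haystack lookup on a single table
  SELECT user_id, created_at
  FROM registrations
  WHERE email LIKE '%@example.com' AND year = 2014
  ORDER BY created_at;

  -- Join plus aggregation across two entities
  SELECT r.course_id, count(*) AS num_regs, avg(s.score) AS avg_score
  FROM registrations r
  JOIN survey_results s ON r.user_id = s.user_id
  WHERE r.year = 2014 AND r.month = 12
  GROUP BY r.course_id
  ORDER BY num_regs DESC;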

The total data set was 6.05 terabytes, split between the plain text, Avro, Parquet, and ORC formats, and we partitioned the data using date-based partitions.

Environment details

We used Amazon EMR as our Hadoop distribution and installed Spark and Presto on a Yarn cluster.
The environment details are as follows:

Instance type: r3.xlarge (30.5 GB memory, 4 cores, 80 GB SSD storage, 500 Mbps I/O performance)
Number of Master nodes: 1
Number of Core nodes: 10
Java version: Java 7 (except for Presto)
AMI version: 3.3.2
Hadoop distribution: Amazon 2.4.0
Hive version: 0.13.1
Presto version: 0.89 (this required Java 8)
Spark SQL version: 1.2.1

We also tuned some memory/configuration settings to ensure that Spark SQL and Presto were allocated the same amount of memory. Spark SQL's thrift server was started with 10 Spark executors, with 10 GB of memory allocated to each executor. We installed 10 Presto worker nodes and set task.max-memory to 10 GB on each one of them. We also made sure that both Spark SQL and Presto used the same PermSize, MaxPermSize, and ReservedCodeCacheSize JVM settings.
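
The sketch below illustrates that alignment under our assumptions; the exact flags and property names depend on the Spark 1.2 / Presto 0.89 versions we ran, so treat it as indicative rather than a drop-in configuration.

  # Spark SQL thrift server on Yarn: 10 executors with 10 GB each (illustrative invocation)
  ./sbin/start-thriftserver.sh --master yarn --num-executors 10 --executor-memory 10g

  # Presto (0.89-era) etc/config.properties on each of the 10 worker nodes
  task.max-memory=10GB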

Results

Scalability

We tried our queries with a smaller data set on a smaller cluster first, and then scaled the setup out by creating a bigger cluster and increasing the size of the data set. We found that both Presto and Spark SQL scaled fairly well, apart from a couple of minor hiccups:

  1. Some of the join and aggregation queries on very large tables failed in Presto with a "Java heap space" error. This also killed multiple nodes in the Presto cluster. It is interesting to note that these queries did not fail on Spark SQL; however, they took a very long time to return results.
  2. Some join queries failed on Spark SQL with a "buffer overflow" error.

Availability

Both Presto and Spark SQL remained highly available during our tests. There were situations when some of the nodes in the cluster either died (a join query on Presto killed several nodes) or were not added to the cluster (in one instance, only 9 out of 10 Spark SQL nodes were active in the cluster because of memory limitations). Nevertheless, queries kept getting executed even with only one worker node active in the cluster.

Performance

This was one of the most important criteria for our evaluation. The performance of a SQL engine also depends on the underlying file format in which the data is stored, so we tested our queries on the Text, Avro, Parquet, and ORC formats. This required creating Hive tables which store data in the desired file format.

On EMR, we discovered that the ORC format has serious performance issues when the files reside on S3, so we did not execute all of the queries on ORC tables and also excluded the ORC format from further evaluation. These issues have been reported to AWS, and they are investigating. It is interesting to note that the same queries work fine (in fact, they run much faster than on the other file formats) when tables are backed by ORC files on HDFS.

Since completing our evaluation on EMR, we were delighted to learn that Qubole had already fixed the issues with using ORC on S3. In fact, Qubole suggests using ORC as a Presto best practice. We also found that Qubole has committed their fixes back to the open source Presto project. Qubole has further extended Presto to include the ability to cache data and metadata.
The following graphs provide more insight into the results.
In each graph, the X-axis represents the query type and the Y-axis shows the time taken in seconds.

Simple Query comparative results:

Join Query comparative results:

Aggregate Query comparative results:

Compression

Compression is an important feature for saving storage costs on S3, so we compared how well the different file formats compress on S3:
In the following graph, the X-axis represents the different entities and the Y-axis shows the percentage compression.


Schema Evolution Capabilities (Using Hive Metastore)

Our data lake entities will evolve as we add more data sources and identify more use cases in the future. Therefore, we should be able to change the schema definition with minimal impact on the components involved in the system. This can be accomplished in a number of ways; however, we decided to use the Hive metastore for schema evolution. The results are as follows:

File Format Type of evolution HIVE PRESTO SPARK SQL
TSV Add column PARTIALLY PASSED* PARTIALLY PASSED* PARTIALLY PASSED*
TSV Drop column PARTIALLY PASSED* PARTIALLY PASSED* PARTIALLY PASSED*
TSV Rename column PASSED PASSED FAILED
TSV Change datatype PASSED PASSED PASSED
Avro Add column PASSED PASSED PASSED
Avro Drop column PASSED PASSED PASSED
Avro Rename column PARTIALLY PASSED* PARTIALLY PASSED* PARTIALLY PASSED*
Avro Change datatype PARTIALLY PASSED* PARTIALLY PASSED* PARTIALLY PASSED*
Parquet Add column PASSED FAILED PASSED
Parquet Drop column PASSED PASSED PASSED
Parquet Rename column PARTIALLY PASSED* FAILED PARTIALLY PASSED*
Parquet Change datatype FAILED PASSED FAILED
ORC Add column PARTIALLY PASSED* PARTIALLY PASSED* PARTIALLY PASSED*
ORC Drop column PARTIALLY PASSED* PARTIALLY PASSED* PARTIALLY PASSED*
ORC Rename column PASSED PASSED PASSED
ORC Change datatype FAILED PARTIALLY PASSED*** FAILED

* The newly added or deleted column must be the last one
** Cannot read column values written with the previous column name, but works fine for new data
*** Works only if the new data type is compatible with the previous type, e.g., INT to LONG
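
For reference, the schema changes above were driven through Hive DDL roughly along these lines (the table and column names are hypothetical; Avro-backed tables additionally require updating the Avro schema definition itself):

  ALTER TABLE registrations ADD COLUMNS (referral_code STRING);
  ALTER TABLE registrations CHANGE COLUMN referral_code promo_code STRING;   -- rename column
  ALTER TABLE registrations CHANGE COLUMN duration duration BIGINT;          -- change datatype
  ALTER TABLE registrations REPLACE COLUMNS (user_id STRING, event STRING);  -- drop columns by omission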

 

Support for Complex Data Types

Spark SQL

  Avro (data) Parquet (Row)
STRUCT Queryable Queryable
ARRAY Queryable Queryable*
MAP Queryable Queryable
ARRAY OF STRUCT Queryable Queryable*

* This requires the following settings in Spark SQL:
set spark.sql.parquet.useDataSourceApi=false
set spark.sql.hive.convertMetastoreParquet=false
This limitation is being tracked at https://issues.apache.org/jira/browse/SPARK-5508
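
As an example of what "queryable" means here, a nested entity can be addressed with the usual HiveQL-style operators; the table and field names below are hypothetical.

  -- struct field access, array indexing, and map lookup (HiveQL/Spark SQL syntax)
  SELECT user_id,
         activities[0].event   AS first_event,
         attributes['locale']  AS locale
  FROM product_activity
  WHERE year = 2015;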

Presto

  Avro (data) Parquet (Row)
STRUCT Queryable Queryable
ARRAY Queryable Queryable*
MAP Queryable Queryable
ARRAY OF STRUCT Queryable Queryable*

Other considerations

We also evaluated these two tools on a number of other criteria, like JDBC/ODBC support, vendor lock-in, ANSI SQL support, maturity, Amazon S3 support, and future direction.

 

Summary of results

Based on our tests, it is clear that the Parquet file format is the optimal choice for storing data. It does not compress as well as ORC but is still better than Avro or plain text. ORC does not fulfill our requirements for schema evolution and S3 support. Similarly, although Avro scores better than Parquet on schema evolution, the performance of queries on Avro is dismal.

The following table summarizes the results for Spark SQL and Presto:
1: Poor, 2: Fair, 3: Average, 4: Very Good, 5: Excellent

  Weight Spark SQL Presto
Scalability 4 4 4
Availability 4 4 4
Latency 5 3.5 4
Total Cost of Ownership 4 4 4
Maturity 3 3.5 2.5
3rd Party Support 3 5 3
S3 Support 3 4 4
Support for Complex Data Types 5 4 2
ANSI SQL Support 3 3 3.5
Additional Interfaces Support 3 4 4
Vendor Lock-in 3 5 4
Future Direction 4 5 4
Schema Evolution 4 4 1
Weighted Average 4.06 3.35
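
Each weighted average is the sum of (weight × rating) across all criteria divided by the total weight of 48: Spark SQL = 195 / 48 ≈ 4.06 and Presto = 161 / 48 ≈ 3.35.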

Based on this evaluation, we selected Spark SQL as the execution engine, with Parquet as the underlying file format, for our data lake.

Pearson has decided to use Qubole as the platform to move forward and build our production applications on. The reason behind this choice was that Qubole was able to meet the other key requirements of this project, including the following:

Customized VPC Support. We specify a custom domain name in the DHCP options. By default, this made the hostnames unresolvable, while resolvable hostnames are a requirement for Hadoop 2 daemons. Fortunately, Qubole was able to solve this problem by dynamically updating the resolver configuration file.

Auto-scaling Spark Clusters. Qubole's Spark clusters automatically scale up and down based on volume. Given our usage pattern, this feature will help drive efficient EC2 utilization.

Built-in Spark SQL Query UI. Out of the box, Qubole offers multiple ways to interact with Spark and Spark SQL: you can use the Analyze UI as well as the built-in interactive notebook. These are key for our users. The built-in UI gives users the ability to save queries and provides quick access to query results. Users can access results and logs from past query instances even if the cluster is down. Further, users can easily download results locally. The built-in Zeppelin-based notebook is also great for various data science use cases.

Tuning Per Query. Qubole runs Spark on Yarn with the ability to create new contexts (or reuse the same one) per query. That has allowed us to tune job settings based on the query itself. This degree of flexibility has proven to be useful.

Security. Security is essential for the use cases at Pearson; we deal with a lot of sensitive data. Out of the box, Qubole provides multiple layers of encryption as well as user authentication and authorization.

Monitoring and Auditing. Along with the built-in Query UI and Notebook, Qubole also has various interfaces for monitoring and auditing. There is a per-query history and audit trail which can be searched, support for naming and tagging query instances, and useful usage metrics with the ability to break them down per cluster, command, and user.

Familiarity. The standard Spark UI is built in, along with built-in Ganglia and DFS UIs for cluster-level metrics.