Wednesday, 18 June 2014

Hive Function Cheat Sheet

  • Date Functions
  • Mathematical Functions
  • String Functions
  • Collection Functions
  • UDAF
  • UDTF
  • Conditional Functions
  • Functions for Text Analytics

Hive Function Meta commands

  • SHOW FUNCTIONS – lists Hive functions and operators
  • DESCRIBE FUNCTION [function name] – displays a short description of the function
  • DESCRIBE FUNCTION EXTENDED [function name] – displays the extended description of the function
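
For example, to look up the built-in explode() function from the Hive shell:

SHOW FUNCTIONS;                       -- list all functions and operators
DESCRIBE FUNCTION explode;            -- one-line summary
DESCRIBE FUNCTION EXTENDED explode;   -- summary plus usage example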

Types of Hive Functions

  • UDF – a function that takes one or more columns from a row as arguments and returns a single value or object, e.g. concat(col1, col2)
  • UDTF – takes zero or more inputs and produces multiple columns or rows of output, e.g. explode()
  • Macros – a function that uses other Hive functions, as in the sketch below.
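
A minimal sketch of a macro composing two built-ins; the macro name and logic are illustrative, and CREATE TEMPORARY MACRO is available as of Hive 0.12:

CREATE TEMPORARY MACRO clean_state(s STRING) upper(trim(s));
SELECT clean_state(' ca ') FROM bookings;  -- returns 'CA'; bookings is a hypothetical table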

How To Develop UDFs

package org.apache.hadoop.hive.contrib.udf.example;

import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.exec.UDF;

@Description(name = "YourUDFName",
    value = "_FUNC_(InputDataType) - using the input datatype X argument, "
          + "returns YYY.",
    extended = "Example:\n"
             + "  > SELECT _FUNC_(InputDataType) FROM tablename;")
public class YourUDFName extends UDF {

  // Hive calls evaluate() once per row and maps its Java signature to the
  // Hive column types; overload evaluate() to accept other input types.
  public String evaluate(String inputValue) {
    if (inputValue == null) {
      return null;  // convention: NULL in, NULL out
    }
    // ... your transformation logic goes here ...
    return inputValue;
  }
}

How To Develop UDFs, GenericUDFs, UDAFs, and UDTFs

  • public class YourUDFName extends UDF {..}
  • public class YourGenericUDFName extends GenericUDF {..}
  • public class YourGenericUDAFName extends AbstractGenericUDAFResolver {..}
  • public class YourGenericUDTFName extends GenericUDTF {..}

How To Deploy / Drop UDFs

At the start of each session:
ADD JAR /full_path_to_jar/YourUDFName.jar;
CREATE TEMPORARY FUNCTION YourUDFName AS 'org.apache.hadoop.hive.contrib.udf.example.YourUDFName';
At the end of each session:
DROP TEMPORARY FUNCTION IF EXISTS YourUDFName;
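
As of Hive 0.13 you can also register a permanent function that survives the session by referencing a jar on HDFS. A sketch, assuming the jar has been uploaded to a hypothetical HDFS path:

CREATE FUNCTION YourUDFName AS 'org.apache.hadoop.hive.contrib.udf.example.YourUDFName'
  USING JAR 'hdfs:///user/hive/jars/YourUDFName.jar';
DROP FUNCTION IF EXISTS YourUDFName;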

Date Functions

The following built-in date functions are supported in Hive:

Return Type | Name(Signature) | Description
string | from_unixtime(bigint unixtime[, string format]) | Converts the number of seconds from the Unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone, in the format "1970-01-01 00:00:00"
bigint | unix_timestamp() | Gets the current timestamp using the default time zone
bigint | unix_timestamp(string date) | Converts a time string in the format yyyy-MM-dd HH:mm:ss to a Unix timestamp, returning 0 on failure: unix_timestamp('2009-03-20 11:30:01') = 1237573801
bigint | unix_timestamp(string date, string pattern) | Converts a time string with the given pattern to a Unix timestamp, returning 0 on failure: unix_timestamp('2009-03-20', 'yyyy-MM-dd') = 1237532400
string | to_date(string timestamp) | Returns the date part of a timestamp string: to_date("1970-01-01 00:00:00") = "1970-01-01"
int | year(string date) | Returns the year part of a date or a timestamp string: year("1970-01-01 00:00:00") = 1970, year("1970-01-01") = 1970
int | month(string date) | Returns the month part of a date or a timestamp string: month("1970-11-01 00:00:00") = 11, month("1970-11-01") = 11
int | day(string date), dayofmonth(date) | Returns the day part of a date or a timestamp string: day("1970-11-01 00:00:00") = 1, day("1970-11-01") = 1
int | hour(string date) | Returns the hour of the timestamp: hour('2009-07-30 12:58:59') = 12, hour('12:58:59') = 12
int | minute(string date) | Returns the minute of the timestamp
int | second(string date) | Returns the second of the timestamp
int | weekofyear(string date) | Returns the week number of a timestamp string: weekofyear("1970-11-01 00:00:00") = 44, weekofyear("1970-11-01") = 44
int | datediff(string enddate, string startdate) | Returns the number of days from startdate to enddate: datediff('2009-03-01', '2009-02-27') = 2
string | date_add(string startdate, int days) | Adds a number of days to startdate: date_add('2008-12-31', 1) = '2009-01-01'
string | date_sub(string startdate, int days) | Subtracts a number of days from startdate: date_sub('2008-12-31', 1) = '2008-12-30'
timestamp | from_utc_timestamp(timestamp, string timezone) | Assumes the given timestamp is in UTC and converts it to the given time zone (as of Hive 0.8.0)
timestamp | to_utc_timestamp(timestamp, string timezone) | Assumes the given timestamp is in the given time zone and converts it to UTC (as of Hive 0.8.0)
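
A minimal sketch combining several of these; the bookings table and booking_date column are hypothetical names used only for illustration:

SELECT to_date(booking_date)                AS day,
       year(booking_date)                   AS yr,
       weekofyear(booking_date)             AS wk,
       datediff(booking_date, '2014-01-01') AS days_into_2014,
       date_add(booking_date, 7)            AS same_day_next_week
FROM bookings;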

Mathematical Functions

The following built-in mathematical functions are supported in Hive; most return NULL when the argument(s) are NULL:

Return Type | Name(Signature) | Description
BIGINT | round(double a) | Returns the rounded BIGINT value of the double
DOUBLE | round(double a, int d) | Returns the double rounded to d decimal places
BIGINT | floor(double a) | Returns the maximum BIGINT value that is equal to or less than the double
BIGINT | ceil(double a), ceiling(double a) | Returns the minimum BIGINT value that is equal to or greater than the double
double | rand(), rand(int seed) | Returns a random number (changing from row to row) distributed uniformly from 0 to 1. Specifying the seed makes the generated random number sequence deterministic.
double | exp(double a) | Returns e^a, where e is the base of the natural logarithm
double | ln(double a) | Returns the natural logarithm of the argument
double | log10(double a) | Returns the base-10 logarithm of the argument
double | log2(double a) | Returns the base-2 logarithm of the argument
double | log(double base, double a) | Returns the base-"base" logarithm of the argument
double | pow(double a, double p), power(double a, double p) | Returns a^p
double | sqrt(double a) | Returns the square root of a
string | bin(BIGINT a) | Returns the number in binary format
string | hex(BIGINT a), hex(string a) | If the argument is an int, hex returns the number as a string in hex format. If the argument is a string, it converts each character into its hex representation and returns the resulting string.
string | unhex(string a) | Inverse of hex. Interprets each pair of characters as a hexadecimal number and converts it to the character represented by that number.
string | conv(BIGINT num, int from_base, int to_base), conv(STRING num, int from_base, int to_base) | Converts a number from a given base to another
double | abs(double a) | Returns the absolute value
int/double | pmod(int a, int b), pmod(double a, double b) | Returns the positive value of a mod b
double | sin(double a) | Returns the sine of a (a is in radians)
double | asin(double a) | Returns the arcsine of a if -1 <= a <= 1, or NULL otherwise
double | cos(double a) | Returns the cosine of a (a is in radians)
double | acos(double a) | Returns the arccosine of a if -1 <= a <= 1, or NULL otherwise
double | tan(double a) | Returns the tangent of a (a is in radians)
double | atan(double a) | Returns the arctangent of a
double | degrees(double a) | Converts the value of a from radians to degrees
double | radians(double a) | Converts the value of a from degrees to radians
int/double | positive(int a), positive(double a) | Returns a
int/double | negative(int a), negative(double a) | Returns -a
float | sign(double a) | Returns the sign of a as '1.0' or '-1.0'
double | e() | Returns the value of e
double | pi() | Returns the value of pi
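
A minimal sketch; the bookings table and distance column are hypothetical:

SELECT round(distance, 1) AS dist_rounded,
       floor(distance)    AS dist_floor,
       pow(distance, 2)   AS dist_squared,
       conv(255, 10, 16)  AS ff_hex,          -- returns 'FF'
       pmod(-7, 3)        AS always_positive  -- returns 2
FROM bookings;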

String Functions

The following built-in string functions are supported in Hive:

Return Type | Name(Signature) | Description
int | ascii(string str) | Returns the numeric value of the first character of str
string | concat(string|binary A, string|binary B...) | Returns the string or bytes resulting from concatenating the strings or bytes passed in as parameters, in order, e.g. concat('foo', 'bar') results in 'foobar'. This function can take any number of input strings.
array<struct<string,double>> | context_ngrams(array<array<string>>, array<string>, int K, int pf) | Returns the top-k contextual N-grams from a set of tokenized sentences, given a string of "context". See StatisticsAndDataMining for more information.
string | concat_ws(string SEP, string A, string B...) | Like concat() above, but with a custom separator SEP
string | concat_ws(string SEP, array<string>) | Like concat_ws() above, but taking an array of strings (as of Hive 0.9.0)
int | find_in_set(string str, string strList) | Returns the first occurrence of str in strList, where strList is a comma-delimited string. Returns NULL if either argument is NULL, and 0 if the first argument contains any commas. e.g. find_in_set('ab', 'abc,b,ab,c,def') returns 3
string | format_number(number x, int d) | Formats the number x to a format like '#,###,###.##', rounded to d decimal places, and returns the result as a string. If d is 0, the result has no decimal point or fractional part (as of Hive 0.10.0)
string | get_json_object(string json_string, string path) | Extracts a JSON object from a JSON string based on the specified JSON path, and returns the JSON string of the extracted object. Returns NULL if the input JSON string is invalid. NOTE: the JSON path can only contain the characters [0-9a-z_], i.e. no upper-case or special characters, and keys cannot start with numbers. This is due to restrictions on Hive column names.
boolean | in_file(string str, string filename) | Returns true if the string str appears as an entire line in filename
int | instr(string str, string substr) | Returns the position of the first occurrence of substr in str
int | length(string A) | Returns the length of the string
int | locate(string substr, string str[, int pos]) | Returns the position of the first occurrence of substr in str after position pos
string | lower(string A), lcase(string A) | Returns the string resulting from converting all characters of A to lower case, e.g. lower('fOoBaR') results in 'foobar'
string | lpad(string str, int len, string pad) | Returns str, left-padded with pad to a length of len
string | ltrim(string A) | Returns the string resulting from trimming spaces from the beginning (left side) of A, e.g. ltrim(' foobar ') results in 'foobar '
array<struct<string,double>> | ngrams(array<array<string>>, int N, int K, int pf) | Returns the top-k N-grams from a set of tokenized sentences, such as those returned by the sentences() UDAF. See StatisticsAndDataMining for more information.
string | parse_url(string urlString, string partToExtract [, string keyToExtract]) | Returns the specified part of the URL. Valid values for partToExtract include HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, and USERINFO. e.g. parse_url('http://facebook.com/path1/p.php?k1=v1&k2=v2#Ref1', 'HOST') returns 'facebook.com'. The value of a particular key in QUERY can be extracted by providing the key as the third argument, e.g. parse_url('http://facebook.com/path1/p.php?k1=v1&k2=v2#Ref1', 'QUERY', 'k1') returns 'v1'.
string | printf(String format, Obj... args) | Returns the input formatted according to printf-style format strings (as of Hive 0.9.0)
string | regexp_extract(string subject, string pattern, int index) | Returns the string extracted using the pattern, e.g. regexp_extract('foothebar', 'foo(.*?)(bar)', 2) returns 'bar'. Note that some care is necessary in using predefined character classes: using '\s' as the second argument will match the letter s; '\\s' is necessary to match whitespace, etc. The 'index' parameter is the Java regex Matcher group() method index. See docs/api/java/util/regex/Matcher.html for more information on the 'index' or Java regex group() method.
string | regexp_replace(string INITIAL_STRING, string PATTERN, string REPLACEMENT) | Returns the string resulting from replacing all substrings in INITIAL_STRING that match the Java regular expression syntax defined in PATTERN with instances of REPLACEMENT, e.g. regexp_replace("foobar", "oo|ar", "") returns 'fb'. Note that some care is necessary in using predefined character classes: using '\s' as the second argument will match the letter s; '\\s' is necessary to match whitespace, etc.
string | repeat(string str, int n) | Repeats str n times
string | reverse(string A) | Returns the reversed string
string | rpad(string str, int len, string pad) | Returns str, right-padded with pad to a length of len
string | rtrim(string A) | Returns the string resulting from trimming spaces from the end (right side) of A, e.g. rtrim(' foobar ') results in ' foobar'
array<array<string>> | sentences(string str, string lang, string locale) | Tokenizes a string of natural language text into words and sentences, where each sentence is broken at the appropriate sentence boundary and returned as an array of words. The 'lang' and 'locale' arguments are optional. e.g. sentences('Hello there! How are you?') returns (("Hello", "there"), ("How", "are", "you"))
string | space(int n) | Returns a string of n spaces
array<string> | split(string str, string pat) | Splits str around pat (pat is a regular expression)
map<string,string> | str_to_map(text[, delimiter1, delimiter2]) | Splits text into key-value pairs using two delimiters. delimiter1 separates the text into K-V pairs, and delimiter2 splits each K-V pair. The default delimiters are ',' for delimiter1 and '=' for delimiter2.
string | substr(string|binary A, int start), substring(string|binary A, int start) | Returns the substring or slice of the byte array of A starting from the start position until the end of A, e.g. substr('foobar', 4) results in 'bar'
string | substr(string|binary A, int start, int len), substring(string|binary A, int start, int len) | Returns the substring or slice of the byte array of A starting from the start position with length len, e.g. substr('foobar', 4, 1) results in 'b'
string | translate(string input, string from, string to) | Translates the input string by replacing the characters present in the from string with the corresponding characters in the to string. This is similar to the translate function in PostgreSQL. If any of the parameters to this UDF are NULL, the result is NULL as well (as of Hive 0.10.0)
string | trim(string A) | Returns the string resulting from trimming spaces from both ends of A, e.g. trim(' foobar ') results in 'foobar'
string | upper(string A), ucase(string A) | Returns the string resulting from converting all characters of A to upper case, e.g. upper('fOoBaR') results in 'FOOBAR'
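
A minimal sketch; all column names below are hypothetical:

SELECT concat_ws('-', origin_airport, dest_airport)     AS route,
       lower(trim(passenger_name))                      AS name_normalized,
       regexp_extract(fare_code, '([A-Z]+)([0-9]+)', 1) AS fare_letters,
       lpad(cast(seat_row AS STRING), 3, '0')           AS padded_row
FROM bookings;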

Collection Functions

The following built-in collection functions are supported in Hive:

Return Type | Name(Signature) | Description
int | size(Map) | Returns the number of elements in the map type
int | size(Array) | Returns the number of elements in the array type
array | map_keys(Map) | Returns an unordered array containing the keys of the input map
array | map_values(Map) | Returns an unordered array containing the values of the input map
boolean | array_contains(Array, value) | Returns TRUE if the array contains value
array | sort_array(Array) | Sorts the input array in ascending order according to the natural ordering of the array elements and returns it (as of version 0.9.0)
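
A minimal sketch; stopovers (an array column) and fare_breakdown (a map column) are hypothetical:

SELECT size(stopovers)                  AS num_stops,
       array_contains(stopovers, 'ORD') AS via_chicago,
       sort_array(stopovers)            AS stops_sorted,
       map_keys(fare_breakdown)         AS fare_components
FROM bookings;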

Built-in Aggregate Functions (UDAF)

The following built-in aggregate functions are supported in Hive:

Return Type | Name(Signature) | Description
bigint | count(*), count(expr), count(DISTINCT expr[, expr...]) | count(*) returns the total number of retrieved rows, including rows containing NULL values; count(expr) returns the number of rows for which the supplied expression is non-NULL; count(DISTINCT expr[, expr]) returns the number of rows for which the supplied expression(s) are unique and non-NULL
double | sum(col), sum(DISTINCT col) | Returns the sum of the elements in the group or the sum of the distinct values of the column in the group
double | avg(col), avg(DISTINCT col) | Returns the average of the elements in the group or the average of the distinct values of the column in the group
double | min(col) | Returns the minimum of the column in the group
double | max(col) | Returns the maximum value of the column in the group
double | variance(col), var_pop(col) | Returns the variance of a numeric column in the group
double | var_samp(col) | Returns the unbiased sample variance of a numeric column in the group
double | stddev_pop(col) | Returns the standard deviation of a numeric column in the group
double | stddev_samp(col) | Returns the unbiased sample standard deviation of a numeric column in the group
double | covar_pop(col1, col2) | Returns the population covariance of a pair of numeric columns in the group
double | covar_samp(col1, col2) | Returns the sample covariance of a pair of numeric columns in the group
double | corr(col1, col2) | Returns the Pearson correlation coefficient of a pair of numeric columns in the group
double | percentile(BIGINT col, p) | Returns the exact pth percentile of a column in the group (does not work with floating point types). p must be between 0 and 1. NOTE: a true percentile can only be computed for integer values; use percentile_approx if your input is non-integral.
array | percentile(BIGINT col, array(p1 [, p2]...)) | Returns the exact percentiles p1, p2, ... of a column in the group (does not work with floating point types). Each pi must be between 0 and 1. NOTE: a true percentile can only be computed for integer values; use percentile_approx if your input is non-integral.
double | percentile_approx(DOUBLE col, p [, B]) | Returns an approximate pth percentile of a numeric column (including floating point types) in the group. The B parameter controls approximation accuracy at the cost of memory; higher values yield better approximations, and the default is 10,000. When the number of distinct values in col is smaller than B, this gives an exact percentile value.
array | percentile_approx(DOUBLE col, array(p1 [, p2]...) [, B]) | Same as above, but accepts and returns an array of percentile values instead of a single one
array | histogram_numeric(col, b) | Computes a histogram of a numeric column in the group using b non-uniformly spaced bins. The output is an array of size b of double-valued (x,y) coordinates that represent the bin centers and heights
array | collect_set(col) | Returns a set of objects with duplicate elements eliminated
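
A minimal sketch over the article's airline example; the distance, fare, and dest_state columns are hypothetical:

SELECT origin_state,
       count(*)                     AS trips,
       avg(distance)                AS avg_distance,
       percentile_approx(fare, 0.5) AS median_fare,
       collect_set(dest_state)      AS destination_states
FROM airline_bookings_all
GROUP BY origin_state;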

Built-in Table-Generating Functions (UDTF)

Normal user-defined functions, such as concat(), take in a single input row and output a single output row. In contrast, table-generating functions transform a single input row to multiple output rows.
  • inline(ARRAY<STRUCT[,STRUCT]>) – explodes an array of structs into a table (as of Hive 0.10)
  • explode(array) – takes an array as input and outputs the elements of the array as separate rows. UDTFs can be used in the SELECT expression list and as part of LATERAL VIEW.
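
A typical pattern, as a minimal sketch: LATERAL VIEW joins each row produced by explode() back to its source row. The stopovers array column is hypothetical:

SELECT itinerary_id, stop
FROM airline_bookings_all
LATERAL VIEW explode(stopovers) stops_view AS stop;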

Conditional Functions

Return Type | Name(Signature) | Description
T | if(boolean testCondition, T valueTrue, T valueFalseOrNull) | Returns valueTrue when testCondition is true, valueFalseOrNull otherwise
T | COALESCE(T v1, T v2, ...) | Returns the first v that is not NULL, or NULL if all vs are NULL
T | CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END | When a = b, returns c; when a = d, returns e; else returns f
T | CASE WHEN a THEN b [WHEN c THEN d]* [ELSE e] END | When a is true, returns b; when c is true, returns d; else returns e
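
A minimal sketch combining these; the delay_minutes, gate, and cabin columns are hypothetical:

SELECT if(delay_minutes > 15, 'late', 'on-time') AS status,
       COALESCE(gate, 'TBD')                     AS gate_display,
       CASE cabin
         WHEN 'F' THEN 'First'
         WHEN 'J' THEN 'Business'
         ELSE 'Economy'
       END                                       AS cabin_name
FROM bookings;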

Functions for Text Analytics

Return Type | Name(Signature) | Description
array<struct<string,double>> | ngrams(array<array<string>>, int N, int K, int pf) | Returns the top-k N-grams from a set of tokenized sentences, such as those returned by the sentences() UDAF. See StatisticsAndDataMining for more information.
array<struct<string,double>> | context_ngrams(array<array<string>>, array<string>, int K, int pf) | Returns the top-k contextual N-grams from a set of tokenized sentences, given a string of "context". See StatisticsAndDataMining for more information.

N-grams are subsequences of length N drawn from a longer sequence. The purpose of the ngrams() UDAF is to find the k most frequent n-grams from one or more sequences. It can be used in conjunction with the sentences() UDF to analyze unstructured natural language text, or with the collect() function to analyze more general string data. Contextual n-grams are similar, but allow you to specify a 'context' string around which n-grams are to be estimated. For example, you can specify that you are only interested in finding the most common two-word phrases in text that follow the context "I love". You could achieve the same result by manually stripping sentences of non-contextual content and then passing them to ngrams(), but context_ngrams() makes it much easier.
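
A sketch of both, assuming a hypothetical reviews table with a review_text string column; the trailing argument is the precision factor pf:

SELECT ngrams(sentences(lower(review_text)), 2, 10, 1000) FROM reviews;
-- top 10 bigrams

SELECT context_ngrams(sentences(lower(review_text)),
                      array('i', 'love', null), 10, 1000) FROM reviews;
-- top 10 words that follow the context "i love"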

5 Tips for efficient Hive queries


Hive on Hadoop makes data processing so straightforward and scalable that we can easily forget to optimize our Hive queries. Well-designed tables and queries can greatly improve query speed and reduce processing cost. This article presents five tips that are as valuable for ad-hoc queries (saving time) as for regular ETL (Extract, Transform, Load) workloads (saving money). The three areas in which we can optimize our Hive utilization are:
  • Data Layout (Partitions and Buckets)
  • Data Sampling (Bucket and Block sampling)
  • Data Processing (Bucket Map Join and Parallel execution)

We will discuss these areas in detail in this article. If you like, you can also watch our webinar on the topic given by Ashish Thusoo, co-founder of Apache Hive, and Sadiq Sid Shaik, Director of Product at Qubole.

Example Data Set

We can best illustrate the improvements with an example data set we use at Qubole. The data consists of three tables. The table Airline Bookings All contains 276 million records of complete air travel trips from an origin to a destination, with an itinerary identifier as key. The second table, Airline Bookings Origin Only, contains the data for only the first leg of an itinerary and also has the itinerary identifier as its key. The last table is Census, containing population information for each US state.
[Figure: The example data set to demonstrate Hive optimization]
Tip 1: Partitioning Hive Tables

Hive is a powerful tool for querying large data sets, and it is particularly good at queries that require full table scans. Yet many queries run on Hive have WHERE clauses that limit the data to be retrieved and processed, e.g. SELECT * WHERE state='CA'. Hive users tend to have or develop domain knowledge: they understand the data they work with and the queries that are commonly executed or scheduled. With this knowledge we can identify common data structures that surface in queries, i.e. columns with a (relatively) low cardinality, like geographies or dates, and high relevance to key queries. For example, a common approach to slicing the airline data may be by origin state for reporting purposes. We can organise our data by this information and tell Hive about it, and Hive can then exclude irrelevant data from queries before even reading it.

Hive tables are linked to directories on HDFS or S3, with the files in them interpreted by the metadata stored with Hive. Without partitioning, Hive reads all the data in the directory and applies the query filters to it. This is slow and expensive since all the data has to be read. In our example, common reports and queries might be generated on an origin-state basis, so we define the state column as a partition at table creation time. Consequently, when we write data to the table, it is written into sub-directories named by state (abbreviations). Subsequently, queries filtering by origin state, e.g. SELECT * FROM Airline_Bookings_All WHERE origin_state = 'CA', allow Hive to skip all but the relevant sub-directories and data files. This can lead to a tremendous reduction in the data that must be read and filtered in the initial map stage, which reduces the number of mappers, IO operations, and the time to answer the query.
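
A minimal sketch of the table layout and query described above; the partition DDL is standard Hive, while the non-partition columns and the staged_bookings source table are hypothetical:

CREATE TABLE airline_bookings_all (
  itinerary_id   BIGINT,   -- hypothetical columns
  origin_airport STRING,
  dest_airport   STRING
)
PARTITIONED BY (origin_state STRING);

-- Dynamic partitioning writes each state's rows into its own sub-directory
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE airline_bookings_all PARTITION (origin_state)
SELECT itinerary_id, origin_airport, dest_airport, origin_state
FROM staged_bookings;

-- Hive now skips every sub-directory except origin_state=CA
SELECT * FROM airline_bookings_all WHERE origin_state = 'CA';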
[Figure: Example Hive table partitioning]

It is important to consider the cardinality of a potential partition column and to avoid fragmenting the data too much. Itinerary ID would be a very poor choice for partitioning: queries for single itineraries by ID would be very fast, but any other query would have to parse a huge number of directories and files, incurring serious overheads. Additionally, HDFS uses a very large block size of usually 64 MB or more, which means that each file, even with only a few bytes of data, has to allocate that block size on HDFS. This can potentially fill the file system up with a large number of files carrying barely any actual data.
Tip 2: Bucketing Hive Tables

Itinerary ID is unsuitable for partitioning, as we learned, but it is used frequently for join operations. We can optimize joins by bucketing 'similar' IDs so Hive can minimise the processing steps and reduce the data it needs to parse and compare for join operations. Itinerary IDs, of course, have no real similarity; we only need to ensure that the same itinerary IDs from two tables end up in the same processing bucket. A simple trick to achieve this is to hash the data and store it by hash result, which is what bucketing does.
[Figure: Example Hive table bucketing]

Bucketing requires us to tell Hive at table creation time which column to cluster by and how many buckets to use. We also have to ensure the bucketing flag is set (SET hive.enforce.bucketing=true;) every time before we write data to the bucketed table. Importantly, the corresponding tables we want to join have to be set up in the same manner, with the joining columns bucketed and the bucket counts being multiples of each other. The second part is the optimized query, for which we have to set a flag hinting to Hive that we want to take advantage of the bucketing in the join (SET hive.optimize.bucketmapjoin=true;). The SELECT statement can then include a MAPJOIN hint to ensure that the join is executed at the map stage, combining only the few relevant files in each mapper task in a distributed fashion instead of parsing the full tables.

[Figure: Example Hive MAPJOIN with bucketing]
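
A minimal sketch of the setup and join described above; the 32-bucket count, table names, and column lists are illustrative:

CREATE TABLE bookings_all_bucketed (itinerary_id BIGINT, origin_state STRING)
CLUSTERED BY (itinerary_id) INTO 32 BUCKETS;

CREATE TABLE bookings_origin_bucketed (itinerary_id BIGINT, origin_airport STRING)
CLUSTERED BY (itinerary_id) INTO 32 BUCKETS;

SET hive.enforce.bucketing=true;   -- set before every write to a bucketed table
-- ... INSERT data into both tables here ...

SET hive.optimize.bucketmapjoin=true;
SELECT /*+ MAPJOIN(o) */ a.itinerary_id, a.origin_state, o.origin_airport
FROM bookings_all_bucketed a
JOIN bookings_origin_bucketed o ON a.itinerary_id = o.itinerary_id;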
Tip 3: Bucket Sampling

Once our tables are set up with these buckets, we can address another important use case: querying large table joins for a sample. We may want to try out complex queries or explore the data, and we want to do this iteratively and swiftly, without processing the whole data set. This is particularly difficult with joins, since independent samples from two tables may have little or no overlapping data. Ideally, we want to sample the relevant data on both tables and join it, i.e. ensure that we sample the same itinerary IDs from both tables rather than sets with little or no overlap. Bucketing on the join column enables us to join specific buckets from two tables whose data overlaps on the join column. Effectively, we execute exactly one part of the complete join operation and only incur its cost. The hashing function on the ID has the additional benefit of a (somewhat) random nature, providing a representative sample.
[Figure: Example Hive TABLESAMPLE on bucketed tables]
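
A sketch of sampling one matching bucket from each table; the 32-bucket setup follows the bucketing sketch above:

SELECT a.itinerary_id, o.origin_airport
FROM bookings_all_bucketed
     TABLESAMPLE(BUCKET 1 OUT OF 32 ON itinerary_id) a
JOIN bookings_origin_bucketed
     TABLESAMPLE(BUCKET 1 OUT OF 32 ON itinerary_id) o
  ON a.itinerary_id = o.itinerary_id;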
Tip 4: Block Sampling

Similarly to the previous tip, we often want to sample data from only one table to explore queries and data. In these cases we may not want to go through bucketing the table, or we may need to sample the data more randomly (independently of the hashing of a bucketing column) or at decreasing granularity. Block sampling provides a powerful syntax for defining various ways of sampling the data in a table with the TABLESAMPLE clause: we can sample a certain percentage, number of bytes, or number of rows. We can use sampling to approximate information such as the average distance between origin and destination of our itineraries. A query using 1% of the data via TABLESAMPLE(1 PERCENT) on a large table will give us a near-perfect answer, use as little as a hundredth of the resources, and return the result one to two orders of magnitude faster. In exploratory work or for metrics, this approach can be an extremely efficient and effective alternative to processing all of the data. The beauty of this solution is that we can scale the sample size with our data size. If we were exploring tera- or petabytes of data, we could sample a fraction of a percent and get the same actionable information in minutes or less that would otherwise take hours to receive.
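
A sketch of the metric approximation described above; the distance column is hypothetical:

SELECT avg(distance) AS avg_trip_distance
FROM airline_bookings_all TABLESAMPLE(1 PERCENT);
-- TABLESAMPLE(1000 ROWS) and TABLESAMPLE(100M) sample by row count and by size instead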
Tip 5: Parallel Execution

Hadoop can execute MapReduce jobs in parallel, and several queries executed on Hive automatically make use of this parallelism. However, a single, complex Hive query is commonly translated into a number of MapReduce jobs that are, by default, executed sequentially. Often, some of a query's MapReduce stages are not interdependent and could be executed in parallel. They can then take advantage of spare capacity on the cluster and improve cluster utilization while reducing the overall query execution time. The configuration to change this behaviour is merely a single flag: SET hive.exec.parallel=true;.
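
A sketch of a query whose sub-queries have no interdependency and can run concurrently once the flag is set; table names reuse the article's example data set:

SET hive.exec.parallel=true;
-- The two sub-queries below are independent, so their MapReduce stages
-- can be scheduled at the same time.
SELECT u.origin_state, u.cnt FROM (
  SELECT origin_state, count(*) AS cnt FROM airline_bookings_all GROUP BY origin_state
  UNION ALL
  SELECT origin_state, count(*) AS cnt FROM airline_bookings_origin_only GROUP BY origin_state
) u;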
[Figure: Example of Hive parallel stage execution of a query]

In the example in the image above, the two sub-queries are independent, and when we enable parallel execution they are processed at the same time. In our example this reduced the execution time by 50%!

Conclusion

The five tips presented in this article can easily be applied by anyone using Hive to improve processing and query speed and reduce resource consumption.