Thursday 21 November 2013

Hive Language Manual UDF

Hive Operators and User-Defined Functions (UDFs)

Case-insensitive
All Hive keywords are case-insensitive, including the names of Hive operators and functions.
In the CLI, use the commands below to show the latest documentation:
SHOW FUNCTIONS;
DESCRIBE FUNCTION <function_name>;
DESCRIBE FUNCTION EXTENDED <function_name>;

Built-in Operators

Relational Operators

The following operators compare the passed operands and generate a TRUE or FALSE value depending on whether the comparison between the operands holds.
Operator | Operand types | Description
A = B | All primitive types | TRUE if expression A is equal to expression B, otherwise FALSE
A <=> B | All primitive types | Returns the same result as the EQUAL (=) operator for non-null operands, but returns TRUE if both are NULL and FALSE if one of them is NULL (as of version 0.9.0)
A == B | None! | Fails because of invalid syntax. SQL uses =, not ==
A <> B | All primitive types | NULL if A or B is NULL, TRUE if expression A is NOT equal to expression B, otherwise FALSE
A != B | All primitive types | A synonym for the <> operator
A < B | All primitive types | NULL if A or B is NULL, TRUE if expression A is less than expression B, otherwise FALSE
A <= B | All primitive types | NULL if A or B is NULL, TRUE if expression A is less than or equal to expression B, otherwise FALSE
A > B | All primitive types | NULL if A or B is NULL, TRUE if expression A is greater than expression B, otherwise FALSE
A >= B | All primitive types | NULL if A or B is NULL, TRUE if expression A is greater than or equal to expression B, otherwise FALSE
A [NOT] BETWEEN B AND C | All primitive types | NULL if A, B or C is NULL, TRUE if A is greater than or equal to B AND A is less than or equal to C, otherwise FALSE. This can be inverted by using the NOT keyword. (as of version 0.9.0)
A IS NULL | All types | TRUE if expression A evaluates to NULL, otherwise FALSE
A IS NOT NULL | All types | FALSE if expression A evaluates to NULL, otherwise TRUE
A [NOT] LIKE B | strings | NULL if A or B is NULL, TRUE if string A matches the SQL simple regular expression B, otherwise FALSE. The comparison is done character by character. The _ character in B matches any character in A (similar to . in POSIX regular expressions) while the % character in B matches an arbitrary number of characters in A (similar to .* in POSIX regular expressions). E.g. 'foobar' LIKE 'foo' evaluates to FALSE, whereas 'foobar' LIKE 'foo___' evaluates to TRUE and so does 'foobar' LIKE 'foo%'
A RLIKE B | strings | NULL if A or B is NULL, TRUE if any (possibly empty) substring of A matches the Java regular expression B, otherwise FALSE. E.g. 'foobar' RLIKE 'foo' evaluates to TRUE and so does 'foobar' RLIKE '^f.*r$'
A REGEXP B | strings | Same as RLIKE

Arithmetic Operators

The following operators support various common arithmetic operations on the operands. All return number types; if any of the operands are NULL, then the result is also NULL.
Operator | Operand types | Description
A + B | All number types | Gives the result of adding A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands; e.g. since every integer is a float, float is a containing type of integer, so the + operator on a float and an int will result in a float.
A - B | All number types | Gives the result of subtracting B from A. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands.
A * B | All number types | Gives the result of multiplying A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands. Note that if the multiplication causes overflow, you will have to cast one of the operands to a type higher in the type hierarchy.
A / B | All number types | Gives the result of dividing A by B. The result is a double type.
A % B | All number types | Gives the remainder resulting from dividing A by B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands.
A & B | All number types | Gives the result of bitwise AND of A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands.
A | B | All number types | Gives the result of bitwise OR of A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands.
A ^ B | All number types | Gives the result of bitwise XOR of A and B. The type of the result is the same as the common parent (in the type hierarchy) of the types of the operands.
~A | All number types | Gives the result of bitwise NOT of A. The type of the result is the same as the type of A.

Logical Operators

The following operators provide support for creating logical expressions. All of them return boolean TRUE, FALSE, or NULL depending upon the boolean values of the operands. NULL behaves as an "unknown" flag, so if the result depends on the state of an unknown, the result itself is unknown.
Operator | Operand types | Description
A AND B | boolean | TRUE if both A and B are TRUE, otherwise FALSE. NULL if A or B is NULL
A && B | boolean | Same as A AND B
A OR B | boolean | TRUE if either A or B or both are TRUE; FALSE OR NULL is NULL; otherwise FALSE
A || B | boolean | Same as A OR B
NOT A | boolean | TRUE if A is FALSE, NULL if A is NULL, otherwise FALSE
!A | boolean | Same as NOT A
A IN (val1, val2, ...) | boolean | TRUE if A is equal to any of the values. As of Hive 0.13 subqueries are supported in IN statements.
A NOT IN (val1, val2, ...) | boolean | TRUE if A is not equal to any of the values. As of Hive 0.13 subqueries are supported in NOT IN statements.
[NOT] EXISTS (subquery) | | TRUE if the subquery returns at least one row. Supported as of Hive 0.13.
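
As a quick illustration (the table and column names below are hypothetical), these comparison and logical operators compose naturally in a WHERE clause:

SELECT name, salary
FROM employees
WHERE salary BETWEEN 50000 AND 90000
  AND dept IN ('sales', 'support')
  AND name NOT LIKE 'temp%'
  AND manager_id IS NOT NULL;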

Complex Type Constructors

The following functions construct instances of complex types.
Constructor Function (Operands) | Description
map(key1, value1, key2, value2, ...) | Creates a map with the given key/value pairs
struct(val1, val2, val3, ...) | Creates a struct with the given field values. Struct field names will be col1, col2, ...
named_struct(name1, val1, name2, val2, ...) | Creates a struct with the given field names and values. (as of Hive 0.8.0)
array(val1, val2, ...) | Creates an array with the given elements
create_union(tag, val1, val2, ...) | Creates a union type with the value that is being pointed to by the tag parameter
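
As a rough sketch (the table name below is made up; any existing table with at least one row will do), the constructors can be called directly in a SELECT list:

SELECT map('a', 1, 'b', 2)                     AS m,    -- map<string,int>
       struct(1, 'x')                          AS s,    -- struct<col1:int,col2:string>
       named_struct('name', 'amy', 'age', 32)  AS ns,   -- struct<name:string,age:int>
       array(1, 2, 3)                          AS arr   -- array<int>
FROM some_table LIMIT 1;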

Operators on Complex Types

The following operators provide mechanisms to access elements in Complex Types
Operator | Operand types | Description
A[n] | A is an Array and n is an int | Returns the nth element in the array A. The first element has index 0; e.g. if A is an array comprising ['foo', 'bar'], then A[0] returns 'foo' and A[1] returns 'bar'
M[key] | M is a Map<K, V> and key has type K | Returns the value corresponding to the key in the map; e.g. if M is a map comprising {'f' -> 'foo', 'b' -> 'bar', 'all' -> 'foobar'}, then M['all'] returns 'foobar'
S.x | S is a struct | Returns the x field of S; e.g. for struct foobar {int foo, int bar}, foobar.foo returns the integer stored in the foo field of the struct
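
Continuing the hypothetical example above, element access with these operators looks like this:

SELECT arr[0]    AS first_element,   -- array subscript, 0-based
       m['a']    AS value_for_a,     -- map lookup by key
       ns.name   AS owner_name       -- struct field access
FROM complex_table;                  -- hypothetical table holding the columns above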

Built-in Functions

Mathematical Functions

The following built-in mathematical functions are supported in Hive; most return NULL when the argument(s) are NULL:
Return Type | Name (Signature) | Description
DOUBLE | round(DOUBLE a) | Returns the rounded BIGINT value of a
DOUBLE | round(DOUBLE a, INT d) | Returns a rounded to d decimal places
BIGINT | floor(DOUBLE a) | Returns the maximum BIGINT value that is equal to or less than a
BIGINT | ceil(DOUBLE a), ceiling(DOUBLE a) | Returns the minimum BIGINT value that is equal to or greater than a
DOUBLE | rand(), rand(INT seed) | Returns a random number (that changes from row to row) that is distributed uniformly from 0 to 1. Specifying the seed will make sure the generated random number sequence is deterministic.
DOUBLE | exp(DOUBLE a) | Returns e^a, where e is the base of the natural logarithm
DOUBLE | ln(DOUBLE a) | Returns the natural logarithm of the argument a
DOUBLE | log10(DOUBLE a) | Returns the base-10 logarithm of the argument a
DOUBLE | log2(DOUBLE a) | Returns the base-2 logarithm of the argument a
DOUBLE | log(DOUBLE base, DOUBLE a) | Returns the base-"base" logarithm of the argument a
DOUBLE | pow(DOUBLE a, DOUBLE p), power(DOUBLE a, DOUBLE p) | Returns a^p
DOUBLE | sqrt(DOUBLE a) | Returns the square root of a
STRING | bin(BIGINT a) | Returns the number in binary format (see http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_bin)
STRING | hex(BIGINT a), hex(STRING a), hex(BINARY a) | If the argument is an INT or BINARY, hex returns the number as a STRING in hex format. Otherwise, if the number is a STRING, it converts each character into its hex representation and returns the resulting STRING. (see http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_hex, BINARY version as of Hive 0.12.0)
BINARY | unhex(STRING a) | Inverse of hex. Interprets each pair of characters as a hexadecimal number and converts to the byte representation of the number. (BINARY version as of Hive 0.12.0; used to return a string)
STRING | conv(BIGINT num, INT from_base, INT to_base), conv(STRING num, INT from_base, INT to_base) | Converts a number from a given base to another (see http://dev.mysql.com/doc/refman/5.0/en/mathematical-functions.html#function_conv)
DOUBLE | abs(DOUBLE a) | Returns the absolute value
INT or DOUBLE | pmod(INT a, INT b), pmod(DOUBLE a, DOUBLE b) | Returns the positive value of a mod b
DOUBLE | sin(DOUBLE a) | Returns the sine of a (a is in radians)
DOUBLE | asin(DOUBLE a) | Returns the arc sine of a if -1<=a<=1, or NULL otherwise
DOUBLE | cos(DOUBLE a) | Returns the cosine of a (a is in radians)
DOUBLE | acos(DOUBLE a) | Returns the arc cosine of a if -1<=a<=1, or NULL otherwise
DOUBLE | tan(DOUBLE a) | Returns the tangent of a (a is in radians)
DOUBLE | atan(DOUBLE a) | Returns the arc tangent of a
DOUBLE | degrees(DOUBLE a) | Converts the value of a from radians to degrees
DOUBLE | radians(DOUBLE a) | Converts the value of a from degrees to radians
INT or DOUBLE | positive(INT a), positive(DOUBLE a) | Returns a
INT or DOUBLE | negative(INT a), negative(DOUBLE a) | Returns -a
FLOAT | sign(DOUBLE a) | Returns the sign of a as '1.0' (if a is positive) or '-1.0' (if a is negative), '0.0' otherwise
DOUBLE | e() | Returns the value of e
DOUBLE | pi() | Returns the value of pi
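
A few of these functions in action; this is just a sketch against a hypothetical table, with the expected results shown as comments:

SELECT round(3.14159, 2)    AS rounded,      -- 3.14
       floor(3.7)           AS floored,      -- 3
       ceil(3.2)            AS ceiled,       -- 4
       pow(2, 10)           AS two_to_ten,   -- 1024.0
       conv('ff', 16, 10)   AS ff_decimal    -- '255'
FROM some_table LIMIT 1;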

Mathematical Functions and Operators for Decimal Datatypes

Version
The decimal datatype was introduced in Hive 0.11.0 (HIVE-2693).
All regular arithmetic operators (such as +, -, *, /) and relevant mathematical UDFs (Floor, Ceil, Round, and many more) have been updated to handle decimal types. For a list of supported UDFs, see Mathematical UDFs in Hive Data Types.

Collection Functions

The following built-in collection functions are supported in Hive:
Return Type | Name (Signature) | Description
int | size(Map<K,V>) | Returns the number of elements in the map type
int | size(Array<T>) | Returns the number of elements in the array type
array<K> | map_keys(Map<K,V>) | Returns an unordered array containing the keys of the input map
array<V> | map_values(Map<K,V>) | Returns an unordered array containing the values of the input map
boolean | array_contains(Array<T>, value) | Returns TRUE if the array contains value
array<T> | sort_array(Array<T>) | Sorts the input array in ascending order according to the natural ordering of the array elements and returns it (as of version 0.9.0)
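
For example (hypothetical table, expected results in comments):

SELECT size(map('a', 1, 'b', 2))            AS map_size,   -- 2
       map_keys(map('a', 1, 'b', 2))        AS ks,         -- ['a','b'] (order not guaranteed)
       array_contains(array(1, 2, 3), 2)    AS has_two,    -- true
       sort_array(array(3, 1, 2))           AS sorted      -- [1,2,3]
FROM some_table LIMIT 1;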

Type Conversion Functions

The following type conversion functions are supported in Hive:
Return Type | Name (Signature) | Description
binary | binary(string|binary) | Casts the parameter into a binary
<type> | cast(expr as <type>) | Converts the result of the expression expr to <type>; e.g. cast('1' as BIGINT) will convert the string '1' to its integral representation. NULL is returned if the conversion does not succeed.

Date Functions

The following built-in date functions are supported in Hive:
Return Type | Name (Signature) | Description
string | from_unixtime(bigint unixtime[, string format]) | Converts the number of seconds from the Unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone, in the format "1970-01-01 00:00:00"
bigint | unix_timestamp() | Gets the current timestamp using the default time zone
bigint | unix_timestamp(string date) | Converts a time string in the format yyyy-MM-dd HH:mm:ss to a Unix timestamp, returning 0 on failure: unix_timestamp('2009-03-20 11:30:01') = 1237573801
bigint | unix_timestamp(string date, string pattern) | Converts a time string with the given pattern (see [http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html]) to a Unix timestamp, returning 0 on failure: unix_timestamp('2009-03-20', 'yyyy-MM-dd') = 1237532400
string | to_date(string timestamp) | Returns the date part of a timestamp string: to_date("1970-01-01 00:00:00") = "1970-01-01"
int | year(string date) | Returns the year part of a date or a timestamp string: year("1970-01-01 00:00:00") = 1970, year("1970-01-01") = 1970
int | month(string date) | Returns the month part of a date or a timestamp string: month("1970-11-01 00:00:00") = 11, month("1970-11-01") = 11
int | day(string date), dayofmonth(date) | Returns the day part of a date or a timestamp string: day("1970-11-01 00:00:00") = 1, day("1970-11-01") = 1
int | hour(string date) | Returns the hour of the timestamp: hour('2009-07-30 12:58:59') = 12, hour('12:58:59') = 12
int | minute(string date) | Returns the minute of the timestamp
int | second(string date) | Returns the second of the timestamp
int | weekofyear(string date) | Returns the week number of a timestamp string: weekofyear("1970-11-01 00:00:00") = 44, weekofyear("1970-11-01") = 44
int | datediff(string enddate, string startdate) | Returns the number of days from startdate to enddate: datediff('2009-03-01', '2009-02-27') = 2
string | date_add(string startdate, int days) | Adds a number of days to startdate: date_add('2008-12-31', 1) = '2009-01-01'
string | date_sub(string startdate, int days) | Subtracts a number of days from startdate: date_sub('2008-12-31', 1) = '2008-12-30'
timestamp | from_utc_timestamp(timestamp, string timezone) | Assumes the given timestamp is UTC and converts it to the given timezone (as of Hive 0.8.0)
timestamp | to_utc_timestamp(timestamp, string timezone) | Assumes the given timestamp is in the given timezone and converts it to UTC (as of Hive 0.8.0)
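
The examples from the table combine naturally; a small sketch against a hypothetical table:

SELECT to_date('2009-07-30 12:58:59')          AS just_date,    -- '2009-07-30'
       year('2009-07-30')                      AS yr,           -- 2009
       datediff('2009-03-01', '2009-02-27')    AS days_apart,   -- 2
       date_add('2008-12-31', 1)               AS next_day      -- '2009-01-01'
FROM some_table LIMIT 1;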

Conditional Functions

Return Type | Name (Signature) | Description
T | if(boolean testCondition, T valueTrue, T valueFalseOrNull) | Returns valueTrue when testCondition is true, returns valueFalseOrNull otherwise
T | COALESCE(T v1, T v2, ...) | Returns the first v that is not NULL, or NULL if all v's are NULL
T | CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END | When a = b, returns c; when a = d, returns e; else returns f
T | CASE WHEN a THEN b [WHEN c THEN d]* [ELSE e] END | When a = true, returns b; when c = true, returns d; else returns e
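
A short sketch of the conditional functions (the customers table and its columns are invented for the example):

SELECT if(amount > 100, 'large', 'small')            AS size_label,
       COALESCE(nickname, first_name, 'unknown')     AS display_name,
       CASE status WHEN 'A' THEN 'active'
                   WHEN 'I' THEN 'inactive'
                   ELSE 'other' END                  AS status_text
FROM customers;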

String Functions

The following built-in string functions are supported in Hive:
Return Type | Name (Signature) | Description
int | ascii(string str) | Returns the numeric value of the first character of str
string | base64(binary bin) | Converts the argument from binary to a base 64 string (as of Hive 0.12.0)
string | concat(string|binary A, string|binary B...) | Returns the string or bytes resulting from concatenating the strings or bytes passed in as parameters in order; e.g. concat('foo', 'bar') results in 'foobar'. Note that this function can take any number of input strings.
array<struct<string,double>> | context_ngrams(array<array<string>>, array<string>, int K, int pf) | Returns the top-k contextual N-grams from a set of tokenized sentences, given a string of "context". See StatisticsAndDataMining for more information.
string | concat_ws(string SEP, string A, string B...) | Like concat() above, but with custom separator SEP
string | concat_ws(string SEP, array<string>) | Like concat_ws() above, but taking an array of strings (as of Hive 0.9.0)
string | decode(binary bin, string charset) | Decodes the first argument into a string using the provided character set (one of 'US_ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16'). If either argument is null, the result will also be null. (as of Hive 0.12.0)
binary | encode(string src, string charset) | Encodes the first argument into a BINARY using the provided character set (one of 'US_ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16'). If either argument is null, the result will also be null. (as of Hive 0.12.0)
int | find_in_set(string str, string strList) | Returns the position of the first occurrence of str in strList, where strList is a comma-delimited string. Returns null if either argument is null. Returns 0 if the first argument contains any commas. E.g. find_in_set('ab', 'abc,b,ab,c,def') returns 3
string | format_number(number x, int d) | Formats the number X to a format like '#,###,###.##', rounded to D decimal places, and returns the result as a string. If D is 0, the result has no decimal point or fractional part. (as of Hive 0.10.0)
string | get_json_object(string json_string, string path) | Extracts a json object from a json string based on the json path specified, and returns the json string of the extracted json object. It will return null if the input json string is invalid. NOTE: The json path can only have the characters [0-9a-z_], i.e., no upper-case or special characters. Also, the keys *cannot start with numbers.* This is due to restrictions on Hive column names.
boolean | in_file(string str, string filename) | Returns true if the string str appears as an entire line in filename
int | instr(string str, string substr) | Returns the position of the first occurrence of substr in str
int | length(string A) | Returns the length of the string
int | locate(string substr, string str[, int pos]) | Returns the position of the first occurrence of substr in str after position pos
string | lower(string A), lcase(string A) | Returns the string resulting from converting all characters of A to lower case; e.g. lower('fOoBaR') results in 'foobar'
string | lpad(string str, int len, string pad) | Returns str, left-padded with pad to a length of len
string | ltrim(string A) | Returns the string resulting from trimming spaces from the beginning (left hand side) of A; e.g. ltrim(' foobar ') results in 'foobar '
array<struct<string,double>> | ngrams(array<array<string>>, int N, int K, int pf) | Returns the top-k N-grams from a set of tokenized sentences, such as those returned by the sentences() UDAF. See StatisticsAndDataMining for more information.
string | parse_url(string urlString, string partToExtract [, string keyToExtract]) | Returns the specified part from the URL. Valid values for partToExtract include HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, and USERINFO. E.g. parse_url('http://facebook.com/path1/p.php?k1=v1&k2=v2#Ref1', 'HOST') returns 'facebook.com'. Also, the value of a particular key in QUERY can be extracted by providing the key as the third argument, e.g. parse_url('http://facebook.com/path1/p.php?k1=v1&k2=v2#Ref1', 'QUERY', 'k1') returns 'v1'.
string | printf(String format, Obj... args) | Returns the input formatted according to printf-style format strings (as of Hive 0.9.0)
string | regexp_extract(string subject, string pattern, int index) | Returns the string extracted using the pattern, e.g. regexp_extract('foothebar', 'foo(.*?)(bar)', 2) returns 'bar'. Note that some care is necessary when using predefined character classes: using '\s' as the second argument will match the letter s; '\\s' is necessary to match whitespace, etc. The 'index' parameter is the Java regex Matcher group() method index. See docs/api/java/util/regex/Matcher.html for more information on the 'index' or Java regex group() method.
string | regexp_replace(string INITIAL_STRING, string PATTERN, string REPLACEMENT) | Returns the string resulting from replacing all substrings in INITIAL_STRING that match the Java regular expression syntax defined in PATTERN with instances of REPLACEMENT, e.g. regexp_replace("foobar", "oo|ar", "") returns 'fb'. Note that some care is necessary when using predefined character classes: using '\s' as the second argument will match the letter s; '\\s' is necessary to match whitespace, etc.
string | repeat(string str, int n) | Repeats str n times
string | reverse(string A) | Returns the reversed string
string | rpad(string str, int len, string pad) | Returns str, right-padded with pad to a length of len
string | rtrim(string A) | Returns the string resulting from trimming spaces from the end (right hand side) of A; e.g. rtrim(' foobar ') results in ' foobar'
array<array<string>> | sentences(string str, string lang, string locale) | Tokenizes a string of natural language text into words and sentences, where each sentence is broken at the appropriate sentence boundary and returned as an array of words. The 'lang' and 'locale' arguments are optional. E.g. sentences('Hello there! How are you?') returns ( ("Hello", "there"), ("How", "are", "you") )
string | space(int n) | Returns a string of n spaces
array | split(string str, string pat) | Splits str around pat (pat is a regular expression)
map<string,string> | str_to_map(text[, delimiter1, delimiter2]) | Splits text into key-value pairs using two delimiters. Delimiter1 separates text into K-V pairs, and Delimiter2 splits each K-V pair. Default delimiters are ',' for delimiter1 and '=' for delimiter2.
string | substr(string|binary A, int start), substring(string|binary A, int start) | Returns the substring or slice of the byte array of A starting from the start position till the end of string A; e.g. substr('foobar', 4) results in 'bar' (see [http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substr])
string | substr(string|binary A, int start, int len), substring(string|binary A, int start, int len) | Returns the substring or slice of the byte array of A starting from the start position with length len; e.g. substr('foobar', 4, 1) results in 'b' (see [http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substr])
string | translate(string input, string from, string to) | Translates the input string by replacing the characters present in the from string with the corresponding characters in the to string. This is similar to the translate function in PostgreSQL. If any of the parameters to this UDF are NULL, the result is NULL as well. (available as of Hive 0.10.0)
string | trim(string A) | Returns the string resulting from trimming spaces from both ends of A; e.g. trim(' foobar ') results in 'foobar'
binary | unbase64(string str) | Converts the argument from a base 64 string to BINARY (as of Hive 0.12.0)
string | upper(string A), ucase(string A) | Returns the string resulting from converting all characters of A to upper case; e.g. upper('fOoBaR') results in 'FOOBAR'
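
A handful of the string functions together, as a sketch against a hypothetical table (expected results in comments):

SELECT concat_ws('-', 'a', 'b', 'c')                         AS joined,     -- 'a-b-c'
       split('a,b,c', ',')                                   AS parts,      -- ['a','b','c']
       str_to_map('k1=v1,k2=v2')                             AS kv,         -- {'k1':'v1','k2':'v2'}
       parse_url('http://example.com/p?x=1', 'QUERY', 'x')   AS x_value,    -- '1'
       regexp_replace('foobar', 'oo|ar', '')                 AS stripped    -- 'fb'
FROM some_table LIMIT 1;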

Misc. Functions

Return Type | Name (Signature) | Description
varies | java_method(class, method[, arg1[, arg2..]]) | Synonym for reflect (as of Hive 0.9.0)
varies | reflect(class, method[, arg1[, arg2..]]) | Use this UDF to call Java methods by matching the argument signature (uses reflection). (as of Hive 0.7.0)
int | hash(a1[, a2...]) | Returns a hash value of the arguments (as of Hive 0.4)

xpath

The following functions are described in LanguageManual XPathUDF:
  • xpath, xpath_short, xpath_int, xpath_long, xpath_float, xpath_double, xpath_number, xpath_string

get_json_object

A limited version of JSONPath is supported:
  • $ : Root object
  • . : Child operator
  • [] : Subscript operator for array
  • * : Wildcard for []
Syntax not supported that's worth noticing:
  • '' : Zero-length string as key
  • .. : Recursive descent
  • @ : Current object/element
  • () : Script expression
  • ?() : Filter (script) expression.
  • [,] : Union operator
  • [start:end:step] : array slice operator
Example: src_json table is a single column (json), single row table:
+----+
                               json
+----+
{"store":
  {"fruit":\[{"weight":8,"type":"apple"},{"weight":9,"type":"pear"}],
   "bicycle":{"price":19.95,"color":"red"}
  },
 "email":"amy@only_for_json_udf_test.net",
 "owner":"amy"
}
+----+
The fields of the json object can be extracted using these queries:
hive> SELECT get_json_object(src_json.json, '$.owner') FROM src_json;
amy
hive> SELECT get_json_object(src_json.json, '$.store.fruit\[0]') FROM src_json;
{"weight":8,"type":"apple"}
hive> SELECT get_json_object(src_json.json, '$.non_exist_key') FROM src_json;
NULL

Built-in Aggregate Functions (UDAF)

The following built-in aggregate functions are supported in Hive:
Return Type | Name (Signature) | Description
BIGINT | count(*), count(expr), count(DISTINCT expr[, expr_.]) | count(*) - Returns the total number of retrieved rows, including rows containing NULL values; count(expr) - Returns the number of rows for which the supplied expression is non-NULL; count(DISTINCT expr[, expr]) - Returns the number of rows for which the supplied expression(s) are unique and non-NULL.
DOUBLE | sum(col), sum(DISTINCT col) | Returns the sum of the elements in the group or the sum of the distinct values of the column in the group
DOUBLE | avg(col), avg(DISTINCT col) | Returns the average of the elements in the group or the average of the distinct values of the column in the group
DOUBLE | min(col) | Returns the minimum value of the column in the group
DOUBLE | max(col) | Returns the maximum value of the column in the group
DOUBLE | variance(col), var_pop(col) | Returns the variance of a numeric column in the group
DOUBLE | var_samp(col) | Returns the unbiased sample variance of a numeric column in the group
DOUBLE | stddev_pop(col) | Returns the standard deviation of a numeric column in the group
DOUBLE | stddev_samp(col) | Returns the unbiased sample standard deviation of a numeric column in the group
DOUBLE | covar_pop(col1, col2) | Returns the population covariance of a pair of numeric columns in the group
DOUBLE | covar_samp(col1, col2) | Returns the sample covariance of a pair of numeric columns in the group
DOUBLE | corr(col1, col2) | Returns the Pearson coefficient of correlation of a pair of numeric columns in the group
DOUBLE | percentile(BIGINT col, p) | Returns the exact pth percentile of a column in the group (does not work with floating point types). p must be between 0 and 1. NOTE: A true percentile can only be computed for integer values. Use PERCENTILE_APPROX if your input is non-integral.
array<double> | percentile(BIGINT col, array(p1 [, p2]...)) | Returns the exact percentiles p1, p2, ... of a column in the group (does not work with floating point types). pi must be between 0 and 1. NOTE: A true percentile can only be computed for integer values. Use PERCENTILE_APPROX if your input is non-integral.
DOUBLE | percentile_approx(DOUBLE col, p [, B]) | Returns an approximate pth percentile of a numeric column (including floating point types) in the group. The B parameter controls approximation accuracy at the cost of memory. Higher values yield better approximations, and the default is 10,000. When the number of distinct values in col is smaller than B, this gives an exact percentile value.
array<double> | percentile_approx(DOUBLE col, array(p1 [, p2]...) [, B]) | Same as above, but accepts and returns an array of percentile values instead of a single one
array<struct {'x','y'}> | histogram_numeric(col, b) | Computes a histogram of a numeric column in the group using b non-uniformly spaced bins. The output is an array of size b of double-valued (x,y) coordinates that represent the bin centers and heights
array | collect_set(col) | Returns a set of objects with duplicate elements eliminated
array | collect_list(col) | Returns a list of objects with duplicates (as of Hive 0.13.0)
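
A typical aggregation query combining several of these UDAFs; the employees table and its columns are hypothetical:

SELECT dept,
       count(*)                          AS num_rows,
       count(DISTINCT employee_id)       AS num_employees,
       avg(salary)                       AS avg_salary,
       percentile_approx(salary, 0.95)   AS p95_salary
FROM employees
GROUP BY dept;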

Built-in Table-Generating Functions (UDTF)

Normal user-defined functions, such as concat(), take in a single input row and output a single output row. In contrast, table-generating functions transform a single input row to multiple output rows.
Return Type | Name (Signature) | Description
N rows | explode(ARRAY) | Returns one row for each element from the array
N rows | explode(MAP) | Returns one row for each key-value pair from the input map, with two columns in each row: one for the key and another for the value (as of Hive 0.8.0)
N rows | inline(ARRAY<STRUCT[,STRUCT]>) | Explodes an array of structs into a table (as of Hive 0.10)
Array Type | explode(array<TYPE> a) | For each element in a, explode() generates a row containing that element
tuple | json_tuple(jsonStr, k1, k2, ...) | Takes a set of names (keys) and a JSON string, and returns a tuple of values. This is a more efficient version of the get_json_object UDF because it can get multiple keys with just one call
tuple | parse_url_tuple(url, p1, p2, ...) | This is similar to the parse_url() UDF but can extract multiple parts at once out of a URL. Valid part names are: HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, USERINFO, QUERY:<KEY>.
N rows | posexplode(ARRAY) | Behaves like explode for arrays, but includes the position of items in the original array by returning a tuple of (pos, value) (as of Hive 0.13.0)
N rows | stack(INT n, v_1, v_2, ..., v_k) | Breaks up v_1, ..., v_k into n rows. Each row will have k/n columns. n must be constant.
Using the syntax "SELECT udtf(col) AS colAlias..." has a few limitations:
  • No other expressions are allowed in SELECT
    • SELECT pageid, explode(adid_list) AS myCol... is not supported
  • UDTF's can't be nested
    • SELECT explode(explode(adid_list)) AS myCol... is not supported
  • GROUP BY / CLUSTER BY / DISTRIBUTE BY / SORT BY is not supported
    • SELECT explode(adid_list) AS myCol ... GROUP BY myCol is not supported
Please see LanguageManual LateralView for an alternative syntax that does not have these limitations.
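
For reference, a minimal LATERAL VIEW sketch that works around the limitations above (table and column names are hypothetical); it keeps pageid in the SELECT list next to the exploded values:

SELECT pageid, myCol
FROM pageAds LATERAL VIEW explode(adid_list) adTable AS myCol;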

explode

explode() takes in an array as an input and outputs the elements of the array as separate rows. UDTF's can be used in the SELECT expression list and as a part of LATERAL VIEW.
An example use of explode() in the SELECT expression list is as follows:
Consider a table named myTable that has a single column (myCol) and two rows:
Array<int> myCol
[100,200,300]
[400,500,600]
Then running the query:
SELECT explode(myCol) AS myNewCol FROM myTable;
Will produce:
(int) myNewCol
100
200
300
400
500
600

posexplode

Version
Available as of Hive 0.13.0. See HIVE-4943.
posexplode() is similar to explode but instead of just returning the elements of the array it returns the element as well as its position in the original array.
An example use of posexplode() in the SELECT expression list is as follows:
Consider a table named myTable that has a single column (myCol) and two rows:
Array<int> myCol
[100,200,300]
[400,500,600]
Then running the query:
SELECT posexplode(myCol) AS pos, myNewCol FROM myTable;
Will produce:
(int) pos | (int) myNewCol
1 | 100
2 | 200
3 | 300
1 | 400
2 | 500
3 | 600

json_tuple

A new json_tuple() UDTF is introduced in Hive 0.7. It takes a set of names (keys) and a JSON string, and returns a tuple of values using one function. This is much more efficient than calling GET_JSON_OBJECT to retrieve more than one key from a single JSON string. In any case where a single JSON string would be parsed more than once, your query will be more efficient if you parse it once, which is what JSON_TUPLE is for. As JSON_TUPLE is a UDTF, you will need to use the LATERAL VIEW syntax in order to achieve the same goal.
For example,
select a.timestamp, get_json_object(a.appevents, '$.eventid'), get_json_object(a.appevents, '$.eventname') from log a;
should be changed to
select a.timestamp, b.*
from log a lateral view json_tuple(a.appevents, 'eventid', 'eventname') b as f1, f2;

parse_url_tuple

The parse_url_tuple() UDTF is similar to parse_url(), but can extract multiple parts of a given URL, returning the data in a tuple. Values for a particular key in QUERY can be extracted by appending a colon and the key to the partToExtract argument, e.g. parse_url_tuple('http://facebook.com/path1/p.php?k1=v1&k2=v2#Ref1', 'QUERY:k1', 'QUERY:k2') returns a tuple with values of 'v1','v2'. This is more efficient than calling parse_url() multiple times. All the input parameters and output column types are string.
SELECT b.*
FROM src LATERAL VIEW parse_url_tuple(fullurl, 'HOST', 'PATH', 'QUERY', 'QUERY:id') b as host, path, query, query_id LIMIT 1;

GROUPing and SORTing on f(column)

A typical OLAP pattern is that you have a timestamp column and you want to group by daily or other less granular date windows than by second. So you might want to select concat(year(dt),month(dt)) and then group on that concat(). But if you attempt to GROUP BY or SORT BY a column on which you've applied a function and alias, like this:
select f(col) as fc, count(*) from table_name group by fc;
You will get an error:
FAILED: Error in semantic analysis: line 1:69 Invalid Table Alias or Column Reference fc
Because you are not able to GROUP BY or SORT BY a column alias on which a function has been applied. There are two workarounds. First, you can reformulate this query with subqueries, which is somewhat complicated:
select sq.fc,col1,col2,...,colN,count(*) from
  (select f(col) as fc,col1,col2,...,colN from table_name) sq
 group by sq.fc,col1,col2,...,colN;
Or you can make sure not to use a column alias, which is simpler:
select f(col) as fc, count(*) from table_name group by f(col);
Contact Tim Ellis (tellis) at RiotGames dot com if you would like to discuss this in further detail.

UDF internals

The context of a UDF's evaluate method is one row at a time. A simple invocation of a UDF like
SELECT length(string_col) FROM table_name;
would evaluate the length of each of the string_col's values in the map portion of the job. The side effect of the UDF being evaluated on the map-side is that you can't control the order of rows which get sent to the mapper. It is the same order in which the file split sent to the mapper gets deserialized. Any reduce side operation (e.g. SORT BY, ORDER BY, regular JOIN, etc.) would apply to the UDFs output as if it is just another column of the table. This is fine since the context of the UDF's evaluate method is meant to be one row at a time.
If you would like to control which rows get sent to the same UDF (and possibly in what order), you will have the urge to make the UDF evaluate during the reduce phase. This is achievable by making use of DISTRIBUTE BY, DISTRIBUTE BY + SORT BY, or CLUSTER BY. An example query would be:
SELECT reducer_udf(my_col, distribute_col, sort_col) FROM
(SELECT my_col, distribute_col, sort_col FROM table_name DISTRIBUTE BY distribute_col SORT BY distribute_col, sort_col) t
However, one could argue that the very premise of your requirement to control the set of rows sent to the same UDF is to do aggregation in that UDF. In such a case, using a User Defined Aggregate Function (UDAF) is a better choice. You can read more about writing a UDAF here. Alternatively, you can use a custom reduce script to accomplish the same thing using Hive's Transform functionality. Both of these options would do aggregations on the reduce side.

Wednesday 20 November 2013

How to change the default key-value separator of a MapReduce job

TextOutputFormat:
============================== 
The default MapReduce output format, TextOutputFormat, writes records as lines of text. Its keys and values may be of any type, since TextOutputFormat turns them to strings by calling toString() on them.

Each key-value pair is separated by a tab character. We can change this separator to a character of our choice using the mapred.textoutputformat.separator property.

To do this, add a line like the following to your job setup code:

Configuration conf = new Configuration();

// by default \t
conf.set("mapred.textoutputformat.separator", "\t"); 

// to make ,
conf.set("mapred.textoutputformat.separator", ","); 

// to make :
conf.set("mapred.textoutputformat.separator", ":");  

  

KeyValueTextInputFormat
=================================
 
TextInputFormat’s keys, being simply the offset within the file, are not normally very useful. It is common for each line in a file to be a key-value pair, separated by a delimiter such as a tab character. 

For example, this is the output produced by TextOutputFormat, Hadoop’s default OutputFormat. To interpret such files correctly, KeyValueTextInputFormat is appropriate.

You can specify the separator via the mapreduce.input.keyvaluelinerecordreader.key.value.separator property (in the older MapReduce API this was key.value.separator.in.input.line). It is a tab character by default.


Consider the following input file, where → represents a (horizontal) tab character:

line1→On the top of the Crumpetty Tree
line2→The Quangle Wangle sat,
line3→But his face you could not see,
line4→On account of his Beaver Hat.


Configuration conf = new Configuration();

// by default \t
 conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", "\t");  

// to make →
 conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", "→");   

// to make :
 conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", ":");   

  
NLineInputFormat
=============================
With TextInputFormat and KeyValueTextInputFormat, each mapper receives a variable number of lines of input. The number depends on the size of the split and the length of the lines. If you want your mappers to receive a fixed number of lines of input, then NLineInputFormat is the InputFormat to use. Like TextInputFormat, the keys are the byte offsets within the file and the values are the lines themselves.

N refers to the number of lines of input that each mapper receives. With N set to one (the default), each mapper receives exactly one line of input. The mapreduce.input.lineinputformat.linespermap property (in the older MapReduce API this was mapred.line.input.format.linespermap) controls the value of N.

By way of example, consider these four lines again:

On the top of the Crumpetty Tree
The Quangle Wangle sat,
But his face you could not see,
On account of his Beaver Hat.


If, for example, N is two, then each split contains two lines. One mapper will receive the first two key-value pairs:

(0, On the top of the Crumpetty Tree)
(33, The Quangle Wangle sat,)
And another mapper will receive the second two key-value pairs:

(57, But his face you could not see,)
(89, On account of his Beaver Hat.)
The keys and values are the same as TextInputFormat produces. What is different is the way the splits are constructed.
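
To get this behaviour, the line count can be set either through the property mentioned above or through NLineInputFormat itself (new API); a small sketch:

Job job = Job.getInstance(new Configuration(), "nline example");
job.setInputFormatClass(org.apache.hadoop.mapreduce.lib.input.NLineInputFormat.class);
// each mapper receives two lines per split (the default is 1)
org.apache.hadoop.mapreduce.lib.input.NLineInputFormat.setNumLinesPerSplit(job, 2);
// equivalently: job.getConfiguration().setInt("mapreduce.input.lineinputformat.linespermap", 2);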


Usually, having a map task for a small number of lines of input is inefficient (due to the overhead in task setup), but there are applications that take a small amount of input data and run an extensive (that is, CPU-intensive) computation for it, then emit their output. Simulations are a good example. By creating an input file that specifies input parameters, one per line, you can perform a parameter sweep: run a set of simulations in parallel to find how a model varies as the parameter changes.
 
 

Binary Input
==========================
Hadoop MapReduce is not just restricted to processing textual data—it has support for binary formats, too.

SequenceFileInputFormat

==========================
Hadoop’s sequence file format stores sequences of binary key-value pairs. They are well suited as a format for MapReduce data since they are splittable (they have sync points so that readers can synchronize with record boundaries from an arbitrary point in the file, such as the start of a split), they support compression as a part of the format, and they can store arbitrary types using a variety of serialization frameworks.

To use data from sequence files as the input to MapReduce, you use SequenceFileInputFormat. The keys and values are determined by the sequence file, and you need to make sure that your map input types correspond. 


For example, if your sequence file has IntWritable keys and Text values, then the map signature would be Mapper<IntWritable, Text, K, V>, where K and V are the types of the map’s output keys and values.
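
For example, a mapper matching that signature might look like the sketch below (the output types Text/IntWritable are just an arbitrary choice for the example); the driver would also call job.setInputFormatClass(SequenceFileInputFormat.class):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Reads a sequence file whose records are (IntWritable, Text) pairs
public class SequenceRecordMapper
    extends Mapper<IntWritable, Text, Text, IntWritable> {
  @Override
  protected void map(IntWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // simply swap key and value as an illustration
    context.write(value, key);
  }
}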

Note
Although its name doesn’t give it away, SequenceFileInputFormat can read MapFiles as well as sequence files. If it finds a directory where it was expecting a sequence file, SequenceFileInputFormat assumes that it is reading a MapFile and uses its data file. This is why there is no MapFileInputFormat class.


SequenceFileAsTextInputFormat

=======================================
SequenceFileAsTextInputFormat is a variant of SequenceFileInputFormat that converts the sequence file’s keys and values to Text objects. The conversion is performed by calling toString() on the keys and values. This format makes sequence files suitable input for Streaming.

SequenceFileAsBinaryInputFormat

=========================================
SequenceFileAsBinaryInputFormat is a variant of SequenceFileInputFormat that retrieves the sequence file’s keys and values as opaque binary objects. They are encapsulated as BytesWritable objects, and the application is free to interpret the underlying byte array as it pleases. Combined with SequenceFile.Reader’s appendRaw() method, this provides a way to use any binary data types with MapReduce (packaged as a sequence file), although plugging into Hadoop’s serialization mechanism is normally a cleaner alternative.

Multiple Inputs

===========================
Although the input to a MapReduce job may consist of multiple input files (constructed by a combination of file globs, filters, and plain paths), all of the input is interpreted by a single InputFormat and a single Mapper. What often happens, however, is that over time, the data format evolves, so you have to write your mapper to cope with all of your legacy formats. Or, you have data sources that provide the same type of data but in different formats. This arises in the case of performing joins of different datasets; see Reduce-Side Joins. For instance, one might be tab-separated plain text, the other a binary sequence file. Even if they are in the same format, they may have different representations and, therefore, need to be parsed differently.

These cases are handled elegantly by using the MultipleInputs class, which allows you to specify the InputFormat and Mapper to use on a per-path basis. For example, if we had weather data from the UK Met Office that we wanted to combine with the NCDC data for our maximum temperature analysis, then we might set up the input as follows:

MultipleInputs.addInputPath(conf, ncdcInputPath, TextInputFormat.class, MaxTemperatureMapper.class);
MultipleInputs.addInputPath(conf, metOfficeInputPath, TextInputFormat.class, MetOfficeMaxTemperatureMapper.class);


This code replaces the usual calls to FileInputFormat.addInputPath() and conf.setMapperClass(). Both Met Office and NCDC data is text-based, so we use TextInputFormat for each. But the line format of the two data sources is different, so we use two different mappers. The MaxTemperatureMapper reads NCDC input data and extracts the year and temperature fields. The MetOfficeMaxTemperatureMapper reads Met Office input data and extracts the year and temperature fields. The important thing is that the map outputs have the same types, since the reducers (which are all of the same type) see the aggregated map outputs and are not aware of the different mappers used to produce them.

The MultipleInputs class has an overloaded version of addInputPath() that doesn’t take a mapper:

public static void addInputPath(JobConf conf, Path path, Class<? extends InputFormat> inputFormatClass)

 
This is useful when you only have one mapper but multiple input formats.






 