How count(distinct) works in Hive


How count(distinct id) works
Judging from the execution plan of count(distinct id), there is only one reducer task (even if you set the number of reducers to 100, the setting has no effect). All the ids are shuffled to that single reducer, which first deduplicates them and then aggregates, so this very easily causes data skew.
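For reference, the single-stage plan below can be reproduced with a plain EXPLAIN on the test table used throughout this article:

explain
select count(distinct dept_num)
from emp_ct;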
STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 depends on stages: Stage-1

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: emp_ct
            Statistics: Num rows: 42 Data size: 171 Basic stats: COMPLETE Column stats: NONE
            Select Operator
              expressions: dept_num (type: int)
              outputColumnNames: _col0
              Statistics: Num rows: 42 Data size: 171 Basic stats: COMPLETE Column stats: NONE
              Reduce Output Operator
                key expressions: _col0 (type: int)
                sort order: +
                Statistics: Num rows: 42 Data size: 171 Basic stats: COMPLETE Column stats: NONE
      Reduce Operator Tree:
        Group By Operator
          aggregations: count(DISTINCT KEY._col0:0._col0)
          mode: complete
          outputColumnNames: _col0
          Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
          File Output Operator
            compressed: false
            Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
            table:
                input format: org.apache.hadoop.mapred.TextInputFormat
                output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-0
    Fetch Operator
      limit: -1
      Processor Tree:
        ListSink
Example run (note that the reducer count we set does not take effect):
hive> set mapreduce.job.reduces=5;
hive>
    > select count(distinct dept_num)
    > from emp_ct;
Query ID = mart_fro_20200320233947_4f60c190-4967-4da6-bf3e-97db786fbc6c
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Start submit job !
Start GetSplits
GetSplits finish, it costs : 32 milliseconds
Submit job success : job_1584341089622_358496
Starting Job = job_1584341089622_358496, Tracking URL = BJHTYD-Hope-25-11.hadoop.jd.local:50320/proxy/application_1584341089622_358496/
Kill Command = /data0/hadoop/hadoop_2.100.31_2019090518/bin/hadoop job  -kill job_1584341089622_358496
Hadoop job(job_1584341089622_358496) information for Stage-1: number of mappers: 2; number of reducers: 1
2020-03-20 23:39:58,215 Stage-1(job_1584341089622_358496) map = 0%,  reduce = 0%
2020-03-20 23:40:09,628 Stage-1(job_1584341089622_358496) map = 50%,  reduce = 0%, Cumulative CPU 2.74 sec
2020-03-20 23:40:16,849 Stage-1(job_1584341089622_358496) map = 100%,  reduce = 0%, Cumulative CPU 7.43 sec
2020-03-20 23:40:29,220 Stage-1(job_1584341089622_358496) map = 100%,  reduce = 100%, Cumulative CPU 10.64 sec
MapReduce Total cumulative CPU time: 10 seconds 640 msec
Stage-1  Elapsed : 40533 ms  job_1584341089622_358496
Ended Job = job_1584341089622_358496
MapReduce Jobs Launched:
Stage-1: Map: 2  Reduce: 1  Cumulative CPU: 10.64 sec  HDFS Read: 0.000 GB HDFS Write: 0.000 GB SUCCESS  Elapsed : 40s533ms job_1584341089622_358496
Total MapReduce CPU Time Spent: 10s640ms
Total Map: 2  Total Reduce: 1
Total HDFS Read: 0.000 GB  Written: 0.000 GB
OK
3
Time taken: 43.025 seconds, Fetched: 1 row(s)
A workaround for count(distinct id)
How do we solve this problem? The fix is actually quite clever: we use Hive's support for nested queries to turn the original single MapReduce job into two jobs. The first stage selects all the distinct ids; the second stage counts those already-deduplicated ids. In the first stage we can raise the number of reducers and process the map output in parallel. In the second stage, since the ids are already deduplicated, the COUNT(*) map phase no longer needs to emit the raw id data, only a merged count, so even though Hive still forces a single reduce task for this stage, the tiny amount of map output will not turn that single reduce task into a bottleneck. The improved SQL statement is as follows.
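This is the same query that is executed in the test run further down:

select count(dept_num)
from (
    select distinct dept_num
    from emp_ct
) t1;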
Let's look at its execution plan:
STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-2 depends on stages: Stage-1
  Stage-0 depends on stages: Stage-2

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: emp_ct
            Statistics: Num rows: 42 Data size: 171 Basic stats: COMPLETE Column stats: NONE
            Select Operator
              expressions: dept_num (type: int)
              outputColumnNames: dept_num
              Statistics: Num rows: 42 Data size: 171 Basic stats: COMPLETE Column stats: NONE
              Reduce Output Operator
                key expressions: dept_num (type: int)
                sort order: +
                Map-reduce partition columns: dept_num (type: int)
                Statistics: Num rows: 42 Data size: 171 Basic stats: COMPLETE Column stats: NONE
      Reduce Operator Tree:
        Group By Operator
          keys: KEY._col0 (type: int)
          mode: complete
          outputColumnNames: _col0
          Statistics: Num rows: 21 Data size: 85 Basic stats: COMPLETE Column stats: NONE
          File Output Operator
            compressed: false
            table:
                input format: org.apache.hadoop.mapred.SequenceFileInputFormat
                output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
                serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe

  Stage: Stage-2
    Map Reduce
      Map Operator Tree:
          TableScan
            Reduce Output Operator
              sort order:
              Statistics: Num rows: 21 Data size: 85 Basic stats: COMPLETE Column stats: NONE
              value expressions: _col0 (type: int)
      Reduce Operator Tree:
        Group By Operator
          aggregations: count(VALUE._col0)
          mode: complete
          outputColumnNames: _col0
          Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
          File Output Operator
            compressed: false
            Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
            table:
                input format: org.apache.hadoop.mapred.TextInputFormat
                output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-0
    Fetch Operator
      limit: -1
      Processor Tree:
        ListSink
Now look at the actual run, and pay attention to the reducer counts: the first job runs with 5 reducers, the second with 1. (The setting takes effect for the first job because Stage-1 now partitions its map output by dept_num.)
hive> set mapreduce.job.reduces=5;
hive>
    > select count(dept_num)
    > from (
    >        select distinct dept_num
    >        from emp_ct
    >        ) t1;
Query ID = mart_fro_20200320234453_68ad3780-c3e5-44bc-94df-58a8f2b01f59
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Defaulting to jobconf value of: 5
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Start submit job !
Start GetSplits
GetSplits finish, it costs : 13 milliseconds
Submit job success : job_1584341089622_358684
Starting Job = job_1584341089622_358684, Tracking URL = BJHTYD-Hope-25-11.hadoop.jd.local:50320/proxy/application_1584341089622_358684/
Kill Command = /data0/hadoop/hadoop_2.100.31_2019090518/bin/hadoop job  -kill job_1584341089622_358684
Hadoop job(job_1584341089622_358684) information for Stage-1: number of mappers: 2; number of reducers: 5
2020-03-20 23:45:02,920 Stage-1(job_1584341089622_358684) map = 0%,  reduce = 0%
2020-03-20 23:45:23,533 Stage-1(job_1584341089622_358684) map = 50%,  reduce = 0%, Cumulative CPU 3.48 sec
2020-03-20 23:45:25,596 Stage-1(job_1584341089622_358684) map = 100%,  reduce = 0%, Cumulative CPU 7.08 sec
2020-03-20 23:45:32,804 Stage-1(job_1584341089622_358684) map = 100%,  reduce = 20%, Cumulative CPU 9.43 sec
2020-03-20 23:45:34,861 Stage-1(job_1584341089622_358684) map = 100%,  reduce = 40%, Cumulative CPU 12.39 sec
2020-03-20 23:45:36,923 Stage-1(job_1584341089622_358684) map = 100%,  reduce = 80%, Cumulative CPU 18.47 sec
2020-03-20 23:45:40,011 Stage-1(job_1584341089622_358684) map = 100%,  reduce = 100%, Cumulative CPU 23.23 sec
MapReduce Total cumulative CPU time: 23 seconds 230 msec
Stage-1  Elapsed : 46404 ms  job_1584341089622_358684
Ended Job = job_1584341089622_358684
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Start submit job !
Start GetSplits
GetSplits finish, it costs : 47 milliseconds
Submit job success : job_1584341089622_358729
Starting Job = job_1584341089622_358729, Tracking URL = BJHTYD-Hope-25-11.hadoop.jd.local:50320/proxy/application_1584341089622_358729/
Kill Command = /data0/hadoop/hadoop_2.100.31_2019090518/bin/hadoop job  -kill job_1584341089622_358729
Hadoop job(job_1584341089622_358729) information for Stage-2: number of mappers: 5; number of reducers: 1
2020-03-20 23:45:48,353 Stage-2(job_1584341089622_358729) map = 0%,  reduce = 0%
2020-03-20 23:46:05,846 Stage-2(job_1584341089622_358729) map = 20%,  reduce = 0%, Cumulative CPU 2.62 sec
2020-03-20 23:46:06,873 Stage-2(job_1584341089622_358729) map = 60%,  reduce = 0%, Cumulative CPU 8.49 sec
2020-03-20 23:46:08,931 Stage-2(job_1584341089622_358729) map = 80%,  reduce = 0%, Cumulative CPU 11.53 sec
2020-03-20 23:46:09,960 Stage-2(job_1584341089622_358729) map = 100%,  reduce = 0%, Cumulative CPU 15.23 sec
2020-03-20 23:46:35,639 Stage-2(job_1584341089622_358729) map = 100%,  reduce = 100%, Cumulative CPU 20.37 sec
MapReduce Total cumulative CPU time: 20 seconds 370 msec
Stage-2  Elapsed : 54552 ms  job_1584341089622_358729
Ended Job = job_1584341089622_358729
MapReduce Jobs Launched:
Stage-1: Map: 2  Reduce: 5  Cumulative CPU: 23.23 sec  HDFS Read: 0.000 GB HDFS Write: 0.000 GB SUCCESS  Elapsed : 46s404ms job_1584341089622_358684
Stage-2: Map: 5  Reduce: 1  Cumulative CPU: 20.37 sec  HDFS Read: 0.000 GB HDFS Write: 0.000 GB SUCCESS  Elapsed : 54s552ms job_1584341089622_358729
Total MapReduce CPU Time Spent: 43s600ms
Total Map: 7  Total Reduce: 6
Total HDFS Read: 0.000 GB  Written: 0.000 GB
OK
3
Time taken: 103.692 seconds, Fetched: 1 row(s)
This workaround plays essentially the same role as the hive.groupby.skewindata parameter!
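As a minimal sketch (the exact plan Hive generates for this varies by version, and it applies only to a single DISTINCT column), the parameter-based route would be:

-- Ask Hive to rewrite the skewed aggregation into two MR jobs itself,
-- instead of rewriting the query by hand.
set hive.groupby.skewindata=true;

select count(distinct dept_num)
from emp_ct;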
The actual test queries:
select count(distinct dept_num)
from emp_ct;

select count(*)
from (
    select distinct dept_num
    from emp_ct
) t1;
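A common equivalent, sketched here for comparison, deduplicates with GROUP BY instead of DISTINCT; the heavy deduplication still runs in a first, fully parallel stage:

select count(*)
from (
    select dept_num
    from emp_ct
    group by dept_num  -- dedup step, can use many reducers
) t1;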
