Two Implementations of Aggregate Functions in Greenplum: HashAggregate and GroupAggregate
In PostgreSQL/Greenplum, aggregate functions have two implementations: HashAggregate and GroupAggregate.
We will use a very simple SQL statement to analyze the difference between the two aggregation methods and their respective use cases:
select count(1) from pg_class group by oid;
I. Comparison of the two implementation algorithms:
HashAggregate
For hash aggregation, the database computes a hash value from the group by column values and, for each aggregate function in the query, maintains a corresponding array in memory. If the select list contains two aggregate functions, two such arrays are kept in memory; with n aggregate functions, n arrays are maintained. For the hash table, the array length is always greater than the number of distinct values of the group by column and grows roughly linearly with it: the more unique the group by values, the more memory is used.
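Since the hash table size tracks the number of distinct group keys, the expected memory footprint can be gauged up front by counting those keys. A minimal check for the example query (note that count(distinct ...) itself scans the whole table):
[sql]
-- one hash entry is needed per distinct group key
select count(distinct oid) from pg_class;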
The execution plan for the example query is as follows:
[sql]
aligputf8=# explain select count(1) from pg_class group by oid;
                              QUERY PLAN
----------------------------------------------------------------------
 HashAggregate  (cost=1721.40..2020.28 rows=23910 width=4)
   Group By: oid
   ->  Seq Scan on pg_class  (cost=0.00..1004.10 rows=143460 width=4)
 Settings:  enable_seqscan=on
(4 rows)
GroupAggregate
For an ordinary group aggregate, the data is first sorted by the group by columns so that rows with the same group by value sit next to each other; a single full pass over the sorted data then produces the aggregate result.
The execution plan is as follows:
[sql]
aligputf8=# set enable_hashagg = off;
SET
aligputf8=# explain select count(1) from pg_class group by oid;
                                 QUERY PLAN
----------------------------------------------------------------------------
 GroupAggregate  (cost=13291.66..14666.48 rows=23910 width=4)
   Group By: oid
   ->  Sort  (cost=13291.66..13650.31 rows=143460 width=4)
         Sort Key: oid
         ->  Seq Scan on pg_class  (cost=0.00..1004.10 rows=143460 width=4)
 Settings:  enable_hashagg=off; enable_seqscan=on
(6 rows)
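Note that enable_hashagg stays off for the rest of the session; restore it before running other queries:
[sql]
aligputf8=# set enable_hashagg = on;
SET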
Comparing the costs of the two plans, GroupAggregate is far less efficient because of the sort: its total cost (14666.48) is roughly seven times that of HashAggregate (2020.28). So in GP, this kind of aggregate query normally uses HashAggregate.
II. Memory consumption of the two implementations:
First create a table and insert data into it; by varying how unique each column's data is and how many aggregate functions are used, we can compare the memory consumption and actual execution time of HashAggregate and GroupAggregate.
1. The table structure is as follows:
[html]
create table test_group(
id integer
,col1 numeric
,col2 numeric
,col3 numeric
,col4 numeric
,col5 numeric
,col6 numeric
,col7 numeric
,col8 numeric
,col9 numeric
,col11 varchar(100)
,col12 varchar(100)
,col13 varchar(100)
,col14 varchar(100)
)distributed by(id);
2. Insert data, using the random function so that each column's values have a different degree of uniqueness:
[sql]
aligputf8=# insert into test_group
aligputf8-# select generate_series(1,100000),
aligputf8-# (random()*200)::int,
aligputf8-# (random()*800)::int,
aligputf8-# (random()*1600)::int,
aligputf8-# (random()*3200)::int,
aligputf8-# (random()*6400)::int,
aligputf8-# (random()*12800)::int,
aligputf8-# (random()*40000)::int,
aligputf8-# (random()*100000)::int,
aligputf8-# (random()*1000000)::int,
aligputf8-# 'hello',
aligputf8-# 'welcome',
aligputf8-# 'haha',
aligputf8-# 'chen';
INSERT 0 100000
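The differing uniqueness of the columns can be verified after loading; a quick check (the exact counts vary from run to run because of random()):
[sql]
select count(distinct col1) as col1_distinct,
       count(distinct col5) as col5_distinct,
       count(distinct col9) as col9_distinct,
       count(distinct id)   as id_distinct
from test_group;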
The table size is:
[sql]
aligputf8=# select pg_size_pretty(pg_relation_size('test_group'));
 pg_size_pretty
----------------
 12 MB
(1 row)
3. Use explain analyze to observe the actual memory consumed by the database:
The following plans are taken on a single underlying segment, which avoids the time and memory cost of broadcasting.
HashAggregate
[sql]
aligputf8=# explain analyze select sum(col1),sum(col2),sum(col3),sum(col4),sum(col5),sum(col6),sum(col7),sum(col8),sum(col9) from test_group group by col5;
                                            QUERY PLAN
----------------------------------------------------------------------------------------------------
 HashAggregate  (cost=4186.96..5432.88 rows=38336 width=62)
   Group By: col5
   Rows out:  6401 rows with 289 ms to first row, 295 ms to end, start offset by 0.143 ms.
   Executor memory:  2818K bytes.
   ->  Seq Scan on test_group  (cost=0.00..1480.56 rows=108256 width=62)
         Rows out:  100000 rows with 0.023 ms to first row, 48 ms to end, start offset by 0.218 ms.
 Slice statistics:
   (slice0)    Executor memory: 2996K bytes.
 Settings:  enable_seqscan=off
 Total runtime: 296.283 ms
(10 rows)
GroupAggregate
[sql]
aligputf8=# explain analyze select sum(col1),sum(col2),sum(col3),sum(col4),sum(col5),sum(col6),sum(col7),sum(col8),sum(col9) from test_group group by col5;
                                               QUERY PLAN
----------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=10532.97..14755.93 rows=38336 width=62)
   Group By: col5
   Rows out:  6401 rows with 306 ms to first row, 585 ms to end, start offset by 0.092 ms.
   Executor memory:  8K bytes.
   ->  Sort  (cost=10532.97..10803.61 rows=108256 width=62)
         Sort Key: col5
         Rows out:  100000 rows with 306 ms to first row, 342 ms to end, start offset by 0.093 ms.
         Executor memory:  19449K bytes.
         Work_mem used:  19449K bytes.
         ->  Seq Scan on test_group  (cost=0.00..1480.56 rows=108256 width=62)
               Rows out:  100000 rows with 0.021 ms to first row, 46 ms to end, start offset by 0.116 ms.
 Slice statistics:
   (slice0)    Executor memory: 19623K bytes.  Work_mem: 19449K bytes max.
 Settings:  enable_hashagg=off; enable_seqscan=off
 Total runtime: 586.114 ms
(15 rows)
This method lets us compare the memory consumed and the actual execution time for each choice of group by column.

SQL (9 aggregate functions; the statement below was run once for each group by column, col1 through col9 and id):

explain analyze select sum(col1),sum(col2),sum(col3),sum(col4),sum(col5),sum(col6),sum(col7),sum(col8),sum(col9) from test_group group by id;

group by column                    col1    col2    col3    col4    col5    col6    col7     col8     col9     id
HashAggregate  Executor memory     554K    786K    1074K   1715K   2996K   5469K   13691K   21312K   29428K   29476K
HashAggregate  time (ms)           266     272     275     281     296     323     357      359      352      340
GroupAggregate Executor memory     19623K  19623K  19623K  19623K  19623K  19623K  19623K   19623K   19623K   19615K
GroupAggregate time (ms)           500     533     547     568     589     609     636      652      649      387
SQL (27 aggregate functions; again run once per group by column):

explain analyze select sum(col1),sum(col2),sum(col3),sum(col4),sum(col5),sum(col6),sum(col7),sum(col8),sum(col9),
max(col1),max(col2),max(col3),max(col4),max(col5),max(col6),max(col7),max(col8),max(col9),
avg(col1),avg(col2),avg(col3),avg(col4),avg(col5),avg(col6),avg(col7),avg(col8),avg(col9) from test_group group by id;

group by column                    col1     col2     col3     col4     col5     col6     col7      col8      col9      id
HashAggregate  Executor memory     514K     1299K    2340K    4405K    8504K    19687K   69947K    93859K    106419K   106876K
HashAggregate  time (ms)           504.91   511.03   523.36   559.85   616.94   937.73   1179.05   1395.56   1391.27   1391.14
GroupAggregate Executor memory     19687K   19687K   19687K   19687K   19687K   19687K   19687K    19687K    19687K    19687K
GroupAggregate time (ms)           759.58   782.56   802.4    838.07   880.38   939.52   1104.75   1256.92   1365.61   1142
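Each cell in the two tables above comes from a pair of runs like the following, with the group by column varied each time; a minimal sketch of one measurement round:
[sql]
set enable_hashagg = on;    -- measure HashAggregate
explain analyze select sum(col1),sum(col2),sum(col3),sum(col4),sum(col5),sum(col6),sum(col7),sum(col8),sum(col9) from test_group group by col5;
set enable_hashagg = off;   -- measure GroupAggregate
explain analyze select sum(col1),sum(col2),sum(col3),sum(col4),sum(col5),sum(col6),sum(col7),sum(col8),sum(col9) from test_group group by col5;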
As the tables show, GroupAggregate's memory consumption is essentially constant no matter which column is grouped by. With few aggregate functions it is comparatively slow, but its behavior is stable.
HashAggregate performs very well with a small number of aggregate functions, but with many aggregate functions its performance and memory consumption vary widely. It is especially sensitive to the uniqueness of the group by column: the larger the column's count(distinct) value, the more memory hash aggregation consumes, and performance degrades sharply.
So when a SQL statement contains many aggregate functions and the group by columns are relatively unique, GroupAggregate should be used rather than HashAggregate.
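When in doubt, the uniqueness of a candidate group by column can be measured directly before choosing; a sketch against the test table (a ratio close to 1, combined with many aggregate functions, argues for GroupAggregate):
[sql]
select count(distinct col9)::numeric / count(*) as uniqueness
from test_group;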
III. A SQL error seen in GP 4.1:
In GP 4.1, a SQL statement with many aggregate functions and a relatively unique group by column once failed with the following error:
ERROR: Unexpected internal error: Segment process received signal SIGSEGV
The statement was almost certainly consuming too much memory, and the process was killed by a signal, which produced the error.
Inspecting the execution plan showed that HashAggregate was to blame. In general the database chooses between HashAggregate and GroupAggregate based on statistics, but when the statistics are not detailed enough or the SQL is too complex, it can pick the wrong plan.
There are generally two ways to handle this problem:
1. Split the statement into several SQL statements to reduce the memory HashAggregate needs.
2. Before running the statement, execute set enable_hashagg = off; to turn off hash aggregation and keep the planner from choosing HashAggregate. This is the recommended approach (see the sketch below).
Next time this kind of SQL error shows up, try reworking the script this way.
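A minimal sketch of the recommended workaround (method 2), wrapped around the problem statement in the same session:
[sql]
set enable_hashagg = off;   -- force the planner to use GroupAggregate
-- ... run the failing aggregate query here ...
set enable_hashagg = on;    -- restore hash aggregation afterwards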
Note: when work_mem is not large enough:
[sql]
aligputf8=# explain analyze select sum(col1),sum(col2),sum(col3),sum(col4),sum(col5),sum(col6),sum(col7),sum(col8),sum(col9),
max(col1),max(col2),max(col3),max(col4),max(col5),max(col6),max(col7),max(col8),max(col9),
avg(col1),avg(col2),avg(col3),avg(col4),avg(col5),avg(col6),avg(col7),avg(col8),avg(col9) from test_group group by id;
                                            QUERY PLAN
----------------------------------------------------------------------------------------------------
 HashAggregate  (cost=15225.85..29783.06 rows=108256 width=66)
   Group By: id
   Rows out:  100000 rows with 722 ms to first row, 1367 ms to end, start offset by 0.125 ms.
   Executor memory:  32536K bytes.
   Work_mem used:  32001K bytes.
   Work_mem wanted: 106876K bytes to lessen workfile I/O.
   100000 groups total in 32 batches; 1 overflows; 100000 spill groups.
   Hash chain length 1.8 avg, 20 max, using 74100 of 135168 buckets.
   ->  Seq Scan on test_group  (cost=0.00..1480.56 rows=108256 width=66)
         Rows out:  100000 rows with 0.016 ms to first row, 51 ms to end, start offset by 0.142 ms.
 Slice statistics:
   (slice0)  * Executor memory: 32697K bytes.  Work_mem: 32001K bytes max, 106876K bytes wanted.
 Settings:  enable_groupagg=off; enable_hashagg=on; enable_seqscan=off; work_mem=32000kB
 Total runtime: 1391.138 ms
(14 rows)
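The "Work_mem wanted" line above shows the hash table wanted 106876K bytes; giving the session at least that much work_mem eliminates the workfile spilling. The setting below matches the Settings line of the plan that follows:
[sql]
set work_mem = '320000kB';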
When work_mem is large enough:
[sql]
                                            QUERY PLAN
----------------------------------------------------------------------------------------------------
 HashAggregate  (cost=9058.48..17448.32 rows=108256 width=66)
   Group By: id
   Rows out:  100000 rows with 460 ms to first row, 1014 ms to end, start offset by 0.120 ms.
   Executor memory:  110093K bytes.
   ->  Seq Scan on test_group  (cost=0.00..1480.56 rows=108256 width=66)
         Rows out:  100000 rows with 0.019 ms to first row, 52 ms to end, start offset by 0.554 ms.
 Slice statistics:
   (slice0)    Executor memory: 110271K bytes.
 Settings:  enable_groupagg=off; enable_hashagg=on; enable_seqscan=off; work_mem=320000kB
 Total runtime: 1038.209 ms
(10 rows)