Benchmarking RocksDB performance with db_bench

Updated: 2023-06-04 10:17:20

RocksDB ships with a benchmark tool, db_bench, for evaluating its performance along just about every dimension: sequential and random reads and writes, hot-key reads and writes, deletes, merges, seeks, checksums, and more. It is extremely handy.
1. Building the tool
This guide builds with cmake, mainly so that the paths of third-party libraries (gflags and the like) can be specified under your own user, and so that RocksDB's compression-algorithm support can be switched on explicitly. Running a plain make against the stock Makefile is more troublesome, and some of the libraries db_bench depends on will not be linked in automatically (compression algorithms such as zstd and snappy are not compiled into db_bench by default).
If your db_bench tool is already built and installed, skip this step. If it is not and you do not want the hassle, jump straight to the end of this section, where a convenient all-in-one build script is provided.
The basic flow is as follows:
1. Download the RocksDB source
git clone https://github.com/facebook/rocksdb.git
To build a specific version, run
git checkout xxx after the clone to switch to the corresponding release branch or tag.
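For example, to pin the build to a particular release rather than master, the clone-and-switch step looks like this (the tag name v6.4.6 is only an illustration; pick whichever release you need):

```shell
git clone https://github.com/facebook/rocksdb.git
cd rocksdb
# list the most recent release tags, then switch to one
git tag | tail
git checkout v6.4.6
```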
2. Build and install the third-party libraries
a. gflags
a. git clone https://github.com/gflags/gflags.git
b. cd gflags
c. mkdir build && cd build
# The path after -DCMAKE_INSTALL_PREFIX below is wherever you want gflags
# installed. If you have root privileges and can install into the system
# directories, the prefix option can be dropped. BUILD_SHARED_LIBS enables
# building the gflags shared library, which is otherwise off by default.
d. cmake .. -DCMAKE_INSTALL_PREFIX=/xxx -DBUILD_SHARED_LIBS=1 -DCMAKE_BUILD_TYPE=Release
e. make && make install
# Add the gflags include and lib paths to the system search paths. If no
# prefix was given above, the default install location is under /usr/local.
f. Edit the current user's .bashrc and append:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/xxx/gcc-5.3/lib64:/xxx/gflags/lib
export LIBRARY_PATH=$LIBRARY_PATH:/xxx/gflags/lib
export CPATH=$CPATH:/xxx/gflags/include
b. Install snappy
sudo yum install snappy snappy-devel
c. Install zlib
yum install zlib zlib-devel
d. Install bzip2
yum install bzip2 bzip2-devel
e. Install lz4
yum install lz4-devel
f. Install Zstandard
wget https://github.com/facebook/zstd/archive/v1.1.3.tar.gz
mv v1.1.3.tar.gz zstd-1.1.3.tar.gz
tar zxvf zstd-1.1.3.tar.gz
cd zstd-1.1.3
make && sudo make install
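With the packages above in place, a quick check (assuming a Linux host where ldconfig is available) confirms that each compression library db_bench may link against is visible to the loader:

```shell
# report whether each compression library is registered with the loader
for lib in snappy z bz2 lz4 zstd; do
    if ldconfig -p 2>/dev/null | grep -q "lib${lib}\.so"; then
        echo "lib${lib}: found"
    else
        echo "lib${lib}: MISSING"
    fi
done
```

Anything reported MISSING will either fail the cmake configure step below or make db_bench abort at runtime when that compression type is selected.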
3. Generate the Makefile
cd rocksdb && mkdir build && cd build
# CMAKE_PREFIX_PATH below must point at the prefix gflags was installed to,
# otherwise the build cannot link against the gflags library.
# If your cmake version is too old, use cmake3.
# The -DWITH_xxx options enable the compression algorithms; without them,
# db_bench cannot find the corresponding libraries when RocksDB compresses data.
cmake .. -DCMAKE_PREFIX_PATH=/xxx -DWITH_SNAPPY=1 -DWITH_LZ4=1 -DWITH_ZLIB=1 -DWITH_ZSTD=1 -DCMAKE_BUILD_TYPE=Release
4. Compile
The step above uses the CMakeLists.txt in the parent directory to generate a Makefile in the current one; finish with:
make -j
On success, the db_bench binary is generated in the current directory.
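A quick smoke test verifies that the freshly built binary runs end to end (the --num and --db values here are arbitrary):

```shell
cd build
./db_bench --benchmarks="fillseq" --num=10000 --db=/tmp/dbbench_smoke
```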
The full build script is below; the compression-algorithm libraries still need to be installed beforehand.
#!/bin/bash
set -x

# build gflags
function build_gflags()
{
    local source_dir
    local build_dir
    # if the directory already exists, there is no need to clone again
    if [ -e "gflags" ] && [ -e "gflags/build" ]; then
        return
    fi
    git clone https://github.com/gflags/gflags.git
    if [ $? -ne 0 ]; then
        echo "git clone gflags failed."
        return 1
    fi
    cd gflags
    source_dir=$(pwd)
    build_dir=$source_dir/build
    mkdir -p "$build_dir"/ \
        && cd "$build_dir"/ \
        && cmake3 .. -DCMAKE_INSTALL_PREFIX="$build_dir" -DBUILD_SHARED_LIBS=1 -DBUILD_STATIC_LIBS=1 \
        && make \
        && make install \
        && cd ../../
}

git submodule update --init --recursive
build_gflags
SOURCE_DIR=$(pwd)
BUILD_DIR="$SOURCE_DIR"/build
GFLAGS_DIR="$SOURCE_DIR"/gflags/build
# use all CPU cores for a parallel build
NUM_CPU_CORES=$(grep "processor" -c /proc/cpuinfo)
if [ -z "${NUM_CPU_CORES}" ] || [ "${NUM_CPU_CORES}" = "0" ]; then
    NUM_CPU_CORES=1
fi
mkdir -p "$BUILD_DIR"/ \
    && cd "$BUILD_DIR" \
    && cmake3 "$SOURCE_DIR" -DCMAKE_BUILD_TYPE=Release -DWITH_SNAPPY=ON -DWITH_TESTS=OFF -DCMAKE_PREFIX_PATH="$GFLAGS_DIR" \
    && make -j "$NUM_CPU_CORES"
2. Basic performance benchmarks
db_bench has far too many options to cover exhaustively, so the list below is taken straight from the community documentation.
The core option is --benchmarks: it selects the workloads this run will exercise. The available benchmarks are:
fillseq                 -- write N values in sequential key order in async mode
fillseqdeterministic    -- write N values in the specified key order and keep the shape of the LSM tree
fillrandom              -- write N values in random key order in async mode
filluniquerandomdeterministic -- write N values in a random key order and keep the shape of the LSM tree
overwrite               -- overwrite N values in random key order in async mode
fillsync                -- write N/100 values in random key order in sync mode
fill100K                -- write N/1000 100K values in random order in async mode
deleteseq               -- delete N keys in sequential order
deleterandom            -- delete N keys in random order
readseq                 -- read N times sequentially
readtocache             -- 1 thread reading database sequentially
readreverse             -- read N times in reverse order
readrandom              -- read N times in random order
readmissing             -- read N missing keys in random order
readwhilewriting        -- 1 writer, N threads doing random reads
readwhilemerging        -- 1 merger, N threads doing random reads
readrandomwriterandom   -- N threads doing random-read, random-write
prefixscanrandom        -- prefix scan N times in random order
updaterandom            -- N threads doing read-modify-write for random keys
appendrandom            -- N threads doing read-modify-write with growing values
mergerandom             -- same as updaterandom/appendrandom using merge operator. Must be used with merge_operator
readrandommergerandom   -- perform N random read-or-merge operations. Must be used with merge_operator
newiterator             -- repeated iterator creation
seekrandom              -- N random seeks, call Next seek_nexts times per seek
seekrandomwhilewriting  -- seekrandom and 1 thread doing overwrite
seekrandomwhilemerging  -- seekrandom and 1 thread doing merge
crc32c                  -- repeated crc32c of 4K of data
xxhash                  -- repeated xxHash of 4K of data
acquireload             -- load N*1000 times
fillseekseq             -- write N values in sequential key order, then read them by seeking to each key
randomtransaction       -- execute N random transactions and verify correctness
randomreplacekeys       -- randomly replaces N keys by deleting the old version and putting the new version
timeseries              -- 1 writer generates time series data and multiple readers doing random reads on id
Create a DB and write some data: ./db_bench --benchmarks="fillseq"
Run this way, however, it does not print much useful meta information:
DB path: [/tmp/rocksdbtest-1001/dbbench]
fillseq      :       2.354 micros/op 424867 ops/sec;   47.0 MB/s
Create a DB and also print some meta information: ./db_bench --benchmarks="fillseq,stats"
--benchmarks sets the sequence of tests, and operations can be stacked; this run does a sequential write and then prints the DB's status.
That produces the DB's stats, including both the DB stat section and the compaction stat section:
DB path: [/tmp/rocksdbtest-1001/dbbench]
# performance of the sequential write
fillseq      :       2.311 micros/op 432751 ops/sec;   47.9 MB/s
** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0   28.88 MB   0.2     0.0     0.0      0.0      0.0      0.0      0.0    1.0     0.0    60.6      0.48             0.31         1    0.477     0      0
 Sum      1/0   28.88 MB   0.0     0.0     0.0      0.0      0.0      0.0      0.0    1.0     0.0    60.6      0.48             0.31         1    0.477     0      0
 Int      0/0    0.00 KB   0.0     0.0     0.0      0.0      0.0      0.0      0.0    1.0     0.0    60.6      0.48             0.31         1    0.477     0      0

** Compaction Stats [default] **
Priority Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
High      0/0    0.00 KB   0.0     0.0     0.0      0.0      0.0      0.0      0.0    0.0     0.0    60.6      0.48             0.31         1    0.477     0      0
Uptime(secs): 2.3 total, 2.3 interval
Flush(GB): cumulative 0.028, interval 0.028
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.03 GB write, 12.34 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.5 seconds
Interval compaction: 0.03 GB write, 12.50 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **

** DB Stats **
Uptime(secs): 2.3 total, 2.3 interval
Cumulative writes: 1000K writes, 1000K keys, 1000K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 53.39 MB/s
Cumulative WAL: 1000K writes, 0 syncs, 1000000.00 writes per sync, written: 0.12 GB, 53.39 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1000K writes, 1000K keys, 1000K commit groups, 1.0 writes per commit group, ingest: 124.93 MB, 54.06 MB/s
Interval WAL: 1000K writes, 0 syncs, 1000000.00 writes per sync, written: 0.12 MB, 54.06 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
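The throughput figures in the fillseq line are mutually consistent, which is a handy sanity check when reading db_bench output. Assuming the db_bench defaults of 16-byte keys and 100-byte values, and that MB here means 2^20 bytes (both assumptions on my part):

```shell
# Reconstruct ops/sec and MB/s from the reported micros/op:
#   ops/sec = 10^6 / micros_per_op
#   MB/s    = ops/sec * (key_size + value_size) / 2^20
awk 'BEGIN {
    ops = 1e6 / 2.311                   # 2.311 micros/op  -> ~432,713 ops/sec
    mbs = ops * (16 + 100) / 1048576    # 116 bytes per op -> ~47.9 MB/s
    printf "%.0f ops/sec, %.1f MB/s\n", ops, mbs
}'
```

This lands on roughly 432713 ops/sec and 47.9 MB/s, lining up with the reported 432751 ops/sec and 47.9 MB/s, so the numbers really are three views of the same measurement.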
More meta operations are available:
compact      -- compact the entire database
stats        -- print the DB's status information
resetstats   -- reset the DB's status information
levelstats   -- print the number of files and total size of each level
sstables     -- print information about the SST files
The corresponding sstables and levelstats output looks like this:
--- level 0 --- version# 2 ---
7:30286882[1 .. 448148]['00000000000000003030303030303030' seq:1, type:1 .. '000000000006D6933030303030303030' seq:448148, type:1](0)
--- level 1 --- version# 2 ---
--- level 2 --- version# 2 ---
--- level 3 --- version# 2 ---
--- level 4 --- version# 2 ---
--- level 5 --- version# 2 ---
--- level 6 --- version# 2 ---
Level Files Size(MB)
--------------------
  0      1       29
  1      0        0
  2      0        0
  3      0        0
  4      0        0
  5      0        0
  6      0        0
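The meta operations are not workloads of their own; they are chained into --benchmarks after a real workload to inspect the DB it produced. A typical combination (the workload choice here is illustrative) is:

```shell
./db_bench --benchmarks="fillrandom,compact,resetstats,readrandom,stats,levelstats,sstables"
```

Here compact forces a full compaction after the load, resetstats clears the counters accumulated so far, and the three trailing meta operations dump the state left behind by readrandom.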
A standalone random-write test
The parameters can be tuned to taste; only a subset is listed here.
Run ./db_bench --help to see every available option, but make sure you understand what a parameter does before configuring it.
./db_bench \
    --benchmarks="fillrandom,stats,levelstats" \
    --enable_write_thread_adaptive_yield=false \
    --disable_auto_compactions=false \
    --max_background_compactions=32 \
    --max_background_flushes=4 \
    --write_buffer_size=536870912 \
    --min_write_buffer_number_to_merge=2 \
    --max_write_buffer_number=6
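A database loaded this way can then be read back in a separate run: --use_existing_db=true keeps db_bench from wiping the database first (the thread and key counts below are illustrative):

```shell
./db_bench \
    --benchmarks="readrandom,stats" \
    --use_existing_db=true \
    --threads=16 \
    --num=1000000
```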
