Apache Druid Cluster Deployment

Updated: 2023-06-14 03:46:43

Follow me on Toutiao, WeChat, and Zhihu: 仰望夜空一万次
Casual notes on working and living in Shanghai after moving here from a small city.
If I don't write things down, it's as if they never happened.
This post walks through a simple cluster deployment of Apache Druid.
The cluster looks like this:
1 server runs the Coordinator and Overlord processes (the Master role)
2 scalable, fault-tolerant servers run the Historical and MiddleManager processes (the Data role)
1 server runs the Broker and Router processes (the Query role)
In production you should deploy multiple Master and Query servers for fault tolerance, but you can get a cluster up quickly with a single Master and a single Query server, then add more later.
Machine roles
Assume we have four machines, assigned component roles in the simplest layout recommended by the official docs. Each role can be scaled out horizontally later.

Server name   Role    Deployed components
serverMaster  Master  Coordinator, Overlord
serverQuery   Query   Router, Broker
serverData1   Data    MiddleManager, Historical
serverData2   Data    MiddleManager, Historical

Server role diagram (image not reproduced here).
Dependencies
apache-druid-0.16.
Java
Version requirement: Java 8 (8u92+)
MySQL
Used to store Druid's metadata
Installation steps
Install MySQL
Install MySQL with Docker
# Pull the MySQL Docker image
docker pull mysql:5.7.22
# Start it, mapping host port 3309 to port 3306 inside the container
docker run --name mysql-docker -v /data/apps/mysqldata:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=xxx -p 3309:3306 -d mysql:5.7.22
Copy the MySQL driver into Druid's extensions directory
cp mysql-connector-java-5.1.38.jar /opt/druid/extensions/mysql-metadata-storage
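Copying the jar alone is not enough; the extension also has to be listed in druid.extensions.loadList in common.runtime.properties, alongside the HDFS extension used for deep storage later in this post. A minimal sketch:

```
druid.extensions.loadList=["mysql-metadata-storage", "druid-hdfs-storage"]
```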
Run the following to initialize the database
# Enter the MySQL container's shell
docker exec -it mysql-docker bash
# Log in as root
mysql -uroot -p
# Create the database and grant privileges
CREATE DATABASE druid DEFAULT CHARACTER SET utf8mb4;
CREATE USER 'druid'@'%' IDENTIFIED BY 'druid';
GRANT ALL PRIVILEGES ON druid.* TO 'druid'@'%';
Initialize the Druid directories on HDFS
To use HDFS as deep storage, HDFS must be prepared first.
# Create the Hadoop directories (run as the hdfs user)
hadoop fs -mkdir -p /druid/segments
hadoop fs -mkdir -p /druid/indexing-logs
hadoop fs -mkdir -p /tmp/druid-indexing
hadoop fs -chmod -R 777 /druid
hadoop fs -chmod -R 777 /tmp/druid-indexing
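These directories line up with the deep-storage and task-log settings in common.runtime.properties. Assuming HDFS is used for both, the relevant properties look like this (property names as in the Druid configuration reference):

```
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexing-logs
```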
Create soft links
Symlink Hadoop's configuration files into druidHome/conf/druid/cluster/_common/.
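A sketch of the linking step; HADOOP_CONF_DIR and DRUID_HOME are assumptions here, so adjust them to your layout:

```shell
# Link the Hadoop client configs into Druid's _common directory.
# HADOOP_CONF_DIR and DRUID_HOME below are assumed paths.
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/etc/hadoop/conf}
DRUID_HOME=${DRUID_HOME:-/opt/druid}
mkdir -p "$DRUID_HOME/conf/druid/cluster/_common"
for f in core-site.xml hdfs-site.xml yarn-site.xml mapred-site.xml; do
  ln -sfn "$HADOOP_CONF_DIR/$f" "$DRUID_HOME/conf/druid/cluster/_common/$f"
done
```

Using symlinks (rather than copies) means later changes to the Hadoop configs are picked up by Druid without another copy step.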
Start the services
Master server: start the Coordinator and Overlord
nohup /opt/druid/bin/start-cluster-master-no-zk-server > /data/logs/druid/master.log 2>&1 &
Query server: start the Router and Broker
nohup /opt/druid/bin/start-cluster-query-server > /data/logs/druid/query.log 2>&1 &
Data servers: start the MiddleManager and Historical
nohup /opt/druid/bin/start-cluster-data-server > /data/logs/druid/data.log 2>&1 &
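Once the scripts are up, each service answers on its stock port (Coordinator 8081, Overlord 8090, Broker 8082, Router 8888, Historical 8083, MiddleManager 8091). A quick sanity check, run on each host for whichever services it carries:

```shell
# Probe each default Druid service port's /status endpoint; prints up/down per port.
for port in 8081 8090 8082 8888 8083 8091; do
  if curl -sf "http://localhost:${port}/status" >/dev/null; then
    echo "port ${port}: up"
  else
    echo "port ${port}: down"
  fi
done
```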
Notes
In common.runtime.properties, comment out druid.host=localhost; each Druid service will then resolve its own hostname at runtime.
Stop the services
ps -ef | grep druid
# Kill the child processes first, then the parent
kill <child PIDs> ... <parent PID>
Default resource usage

Role           Default JVM settings
Coordinator    -Xms15g -Xmx15g
Overlord       -Xms15g -Xmx15g
MiddleManager  -Xms128m -Xmx128m
Historical     -Xms8g -Xmx8g -XX:MaxDirectMemorySize=13g
Broker         -Xms12g -Xmx12g -XX:MaxDirectMemorySize=6g
Router         -Xms1g -Xmx1g -XX:MaxDirectMemorySize=128m
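These defaults come from the jvm.config files under conf/druid/cluster/; to trim a role for a test box, edit the matching file. For example, conf/druid/cluster/data/historical/jvm.config in 0.16 looks roughly like the following (reproduced from memory, so verify against your copy):

```
-server
-Xms8g
-Xmx8g
-XX:MaxDirectMemorySize=13g
-XX:+ExitOnOutOfMemoryError
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
```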
Configuration reference
This cluster is for verifying Druid's functionality; the parameters are not tuned.
Key points:
1. In common.runtime.properties, comment out druid.host=localhost
#druid.host=localhost
2. Configure MySQL as the metadata store; configure ZooKeeper (a separately deployed ensemble); configure druid.extensions.loadList
3. Configure HDFS directories for deep storage and task logs
References
MySQL Metadata Store
Clustered deployment
Full configuration file
common.runtime.properties
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Extensions specified in the load list will be loaded by Druid
# We are using local fs for deep storage - not recommended for production - use S3, HDFS, or NFS instead
# We are using local derby for the metadata store - not recommended for production - use MySQL or Postgres instead
# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["mysql-metadata-storage", "druid-hdfs-storage"]
# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.
#druid.extensions.hadoopDependenciesDir=/my/dir/hadoop-dependencies
#
# Hostname
#
#druid.host=localhost
#
# Logging
#
# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true
#
# Zookeeper
#
druid.zk.service.host=server-1:2181,server-2:2181,server-3:2181,server-4:2181,server-5:2181
druid.zk.paths.base=/druid
#
# Metadata storage
#
# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=localhost
#druid.metadata.storage.connector.port=1527
# For MySQL (make sure to include the MySQL JDBC driver on the classpath):
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://<mysql-host>:3309/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=druid
# For PostgreSQL:
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...
#
# Deep storage
#
# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments
# For HDFS:
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments
# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...
#
