1. Copy the downloaded Hive tarball into the /opt/software/ directory.
Package version: apache-hive-3.1.2-bin.tar.gz
2. Extract the package into the /opt/module/ directory:
cd /opt/software/
tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /opt/module/
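A note on the flags: lowercase -c means "create an archive", while uppercase -C tells tar to change into the given directory before extracting. A minimal self-contained demonstration with a throwaway archive (not the Hive tarball):

```shell
# Demonstrate tar's -C flag: pack from one directory, extract into another.
work=$(mktemp -d)
mkdir -p "$work/src/demo" "$work/dest"
echo hello > "$work/src/demo/file.txt"
tar -czf "$work/demo.tar.gz" -C "$work/src" demo   # create archive
tar -zxf "$work/demo.tar.gz" -C "$work/dest"       # extract into dest/
cat "$work/dest/demo/file.txt"                     # prints: hello
rm -rf "$work"
```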
3. Edit the system environment variables:
vi /etc/profile
Add the following lines in the editor:
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export PATH=$PATH:$HADOOP_HOME/sbin:$HIVE_HOME/bin
4. Reload the environment configuration:
source /etc/profile
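After sourcing, it is worth confirming the variables took effect (in a real session just run `echo $HIVE_HOME`). The sketch below reproduces the same mechanism with a temporary profile file so it is self-contained; the paths match the install locations used in this guide:

```shell
# Simulate the /etc/profile additions with a temp file, then source and verify.
profile=$(mktemp)
cat > "$profile" <<'EOF'
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export PATH=$PATH:$HIVE_HOME/bin
EOF
. "$profile"
echo "$HIVE_HOME"                  # prints the Hive install root
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "PATH contains \$HIVE_HOME/bin" ;;
esac
rm -f "$profile"
```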
5. Configure Hive's own environment variables:
cd /opt/module/apache-hive-3.1.2-bin/bin/
① Edit the hive-config.sh file:
vi hive-config.sh
Add the following lines in the editor:
export JAVA_HOME=/opt/module/jdk1.8.0_212
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export HADOOP_HOME=/opt/module/hadoop-3.2.0
export HIVE_CONF_DIR=/opt/module/apache-hive-3.1.2-bin/conf
6. Create the Hive configuration file from the bundled template:
cd /opt/module/apache-hive-3.1.2-bin/conf/
cp hive-default.xml.template hive-site.xml
7. Edit the Hive configuration file, locating each of the properties below and changing it as shown:
vi hive-site.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value> <!-- set your own password -->
    <description>password to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.1.100:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>
  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
    <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once. To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in metastore is compatible with one from Hive jars. Also disable automatic schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
  </property>
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${user.name}</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
  <property>
    <name>system:java.io.tmpdir</name>
    <value>/opt/module/apache-hive-3.1.2-bin/iotmp</value>
    <description/>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>
  <property>
    <name>hive.metastore.db.type</name>
    <value>mysql</value>
    <description>
      Expects one of [derby, oracle, mysql, mssql, postgres].
      Type of database used by the metastore. Information schema &amp; JDBCStorageHandler depend on it.
    </description>
  </property>
  <property>
    <name>hive.cli.print.current.db</name>
    <value>true</value>
    <description>Whether to include the current database in the Hive prompt.</description>
  </property>
  <property>
    <name>hive.cli.print.header</name>
    <value>true</value>
    <description>Whether to print the names of the columns in query output.</description>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/opt/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>
</configuration>
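The configuration above points Hive's scratch and log locations at tmp/ and iotmp/ under the install directory; these directories are not created by the extract step, so it can save a startup error to create them up front. A small sketch, assuming the install root used throughout this guide (override HIVE_HOME if yours differs):

```shell
# Create the local scratch directories referenced in hive-site.xml.
HIVE_HOME=${HIVE_HOME:-/opt/module/apache-hive-3.1.2-bin}
mkdir -p "$HIVE_HOME/tmp" "$HIVE_HOME/iotmp"
```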
8. Upload the MySQL driver package into the /opt/module/apache-hive-3.1.2-bin/lib/ directory.
Driver package: mysql-connector-java-8.0.15.zip; unzip it and take the JAR file from inside.
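Assuming the zip was downloaded to /opt/software/ like the Hive tarball, the unpack-and-copy looks like the following (in the standard Connector/J zip the extracted folder and the JAR inside it carry the same version name as the zip; verify the actual names after unzipping):

```shell
# Unpack the connector zip and drop the JAR into Hive's lib/ directory.
cd /opt/software/
unzip mysql-connector-java-8.0.15.zip
cp mysql-connector-java-8.0.15/mysql-connector-java-8.0.15.jar \
   /opt/module/apache-hive-3.1.2-bin/lib/
```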
9. Log in to MySQL and create a database named hive, making sure the MySQL instance contains a database with exactly that name:
mysql> create database hive;
10. Initialize the metastore schema:
schematool -dbType mysql -initSchema
11. Start the cluster services:
start-all.sh (on hadoop100)
start-yarn.sh (on hadoop101)
12. Start Hive:
hive
13. Check that the startup succeeded:
show databases;
If the list of databases appears, Hive started successfully.
This concludes this walkthrough of configuring the Hive environment on Hadoop.
This article was published on 2023-04-04 02:08:43.