Hadoop (hive installation)

2022-01-26 21:58:42 cyjku

1、 Download the installation package: apache-hive-3.1.2-bin.tar.gz

Upload it to the Linux system under /opt/software/.
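
For example, assuming the target machine is the 192.168.1.100 host used later in the JDBC URL (adjust the user and host to your environment), the package can be copied with scp:

scp apache-hive-3.1.2-bin.tar.gz root@192.168.1.100:/opt/software/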

2、 Extract the archive:

cd /opt/software/

tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /opt/module/

3、 Modify the system environment variables

vi /etc/profile

Add the following:

export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin

export PATH=$PATH:$HADOOP_HOME/sbin:$HIVE_HOME/bin

Reload the environment configuration:

source /etc/profile
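
To confirm the variables took effect, a quick optional check:

echo $HIVE_HOME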

4、 Modify the Hive environment variables

cd /opt/module/apache-hive-3.1.2-bin/bin/

Edit the hive-config.sh file:

vi hive-config.sh

Add the following:

export JAVA_HOME=/opt/module/jdk1.8.0_212

export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin

export HADOOP_HOME=/opt/module/hadoop-3.2.0

export HIVE_CONF_DIR=/opt/module/apache-hive-3.1.2-bin/conf

5、 Create hive-site.xml from the default template:

cd /opt/module/apache-hive-3.1.2-bin/conf/

cp hive-default.xml.template hive-site.xml

6、 Modify the Hive configuration file hive-site.xml; find each of the following properties and change it as shown:
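
Open the file for editing:

vi hive-site.xml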

  <property>

    <name>javax.jdo.option.ConnectionDriverName</name>

    <value>com.mysql.cj.jdbc.Driver</value>

    <description>Driver class name for a JDBC metastore</description>

  </property>

  <property>

    <name>javax.jdo.option.ConnectionUserName</name>

    <value>root</value>

    <description>Username to use against metastore database</description>

  </property>

  <property>

    <name>javax.jdo.option.ConnectionPassword</name>

    <value>123456</value>

    <description>password to use against metastore database</description>

  </property>

  <property>

    <name>javax.jdo.option.ConnectionURL</name>

    <value>jdbc:mysql://192.168.1.100:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>

    <description>

      JDBC connect string for a JDBC metastore.

      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.

      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.

    </description>

  </property>

  <property>

    <name>datanucleus.schema.autoCreateAll</name>

    <value>true</value>

    <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false after creating it once. To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases; run the schematool command instead.</description>

  </property>

  <property>

    <name>hive.metastore.schema.verification</name>

    <value>false</value>

    <description>

      Enforce metastore schema version consistency.

      True: Verify that version information stored in is compatible with one from Hive jars.  Also disable automatic

            schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures

            proper metastore schema migration. (Default)

      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.

    </description>

  </property>

  <property>

    <name>hive.exec.local.scratchdir</name>

    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${user.name}</value>

    <description>Local scratch space for Hive jobs</description>

  </property>

  <property>

    <name>system:java.io.tmpdir</name>

    <value>/opt/module/apache-hive-3.1.2-bin/iotmp</value>

    <description/>

  </property>



  <property>

    <name>hive.downloaded.resources.dir</name>

    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${hive.session.id}_resources</value>

    <description>Temporary local directory for added resources in the remote file system.</description>

  </property>

  <property>

    <name>hive.querylog.location</name>

    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}</value>

    <description>Location of Hive run time structured log file</description>

  </property>

  <property>

    <name>hive.server2.logging.operation.log.location</name>

    <value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}/operation_logs</value>

    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>

  </property>

  <property>

    <name>hive.metastore.db.type</name>

    <value>mysql</value>

    <description>

      Expects one of [derby, oracle, mysql, mssql, postgres].

      Type of database used by the metastore. Information schema &amp; JDBCStorageHandler depend on it.

    </description>

  </property>

  <property>

    <name>hive.cli.print.current.db</name>

    <value>true</value>

    <description>Whether to include the current database in the Hive prompt.</description>

  </property>

  <property>

    <name>hive.cli.print.header</name>

    <value>true</value>

    <description>Whether to print the names of the columns in query output.</description>

  </property>

  <property>

    <name>hive.metastore.warehouse.dir</name>

    <value>/opt/hive/warehouse</value>

    <description>location of default database for the warehouse</description>

  </property>
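
The scratch, resource, and log directories referenced above are not always created automatically, so a cautious extra step (assuming the same paths as in the configuration) is to create them up front, and to create the warehouse directory in HDFS once Hadoop is running (step 10):

mkdir -p /opt/module/apache-hive-3.1.2-bin/tmp /opt/module/apache-hive-3.1.2-bin/iotmp

hdfs dfs -mkdir -p /opt/hive/warehouse

hdfs dfs -chmod g+w /opt/hive/warehouse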

7、 Upload the MySQL JDBC driver to the /opt/module/apache-hive-3.1.2-bin/lib/ directory

Driver package: mysql-connector-java-8.0.15.zip; extract the JAR file from the zip and copy it into the lib directory.
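
For example (the layout inside the zip is an assumption; adjust the path to match the actual archive):

cd /opt/software/

unzip mysql-connector-java-8.0.15.zip

cp mysql-connector-java-8.0.15/mysql-connector-java-8.0.15.jar /opt/module/apache-hive-3.1.2-bin/lib/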

8、 Make sure a database named hive exists in MySQL
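
If it does not exist yet, it can be created from the MySQL client (assuming the root account used in the configuration above):

mysql -uroot -p

CREATE DATABASE hive;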

9、 Initialize the metastore database:

 schematool -dbType mysql -initSchema
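
If the initialization succeeds, schematool ends with a completion message (roughly "Initialization script completed" followed by "schemaTool completed"; the exact wording can vary by version) and the hive database in MySQL is populated with the metastore tables.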

10、 Make sure Hadoop is running
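
If it is not running yet, start HDFS and YARN (assuming $HADOOP_HOME/sbin is on the PATH, as configured in step 3) and check the daemons with jps:

start-dfs.sh

start-yarn.sh

jps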

11、 Start Hive:

hive

12、 Check whether the startup succeeded by running a simple statement in the Hive CLI:

show databases;
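
If everything is configured correctly, the statement returns at least the built-in default database; with hive.cli.print.header enabled the column name is printed as well, so the output looks roughly like this (exact formatting varies):

OK

database_name

default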

Copyright notice
Author: cyjku. Please include the original link when reprinting. Thank you.
https://en.cdmana.com/2022/01/202201262158402504.html
