• Installing Hive automatically with a shell script, testing via JDBC, and a CDH5 netdisk mirror


    0. References

    1) Writing a script on Linux to install Hive automatically

    https://blog.csdn.net/weixin_44911081/article/details/121227024?ops_request_misc=%257B%2522request%255Fid%2522%253A%2522163695916016780269859534%2522%252C%2522scm%2522%253A%252220140713.130102334.pc%255Fblog.%2522%257D&request_id=163695916016780269859534&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~blog~first_rank_v2~rank_v29-3-121227024.pc_v2_rank_blog_default&utm_term=hive&spm=1018.2226.3001.4450

    2) How to run a .sh script file

    https://blog.csdn.net/weixin_55821558/article/details/125830542

    3) Hive tutorial: start HiveServer2 and access Hive via JDBC ☆

    https://blog.csdn.net/a12355556/article/details/124565395

    4) Installing Hadoop with CDH, and a version comparison

    https://www.freesion.com/article/8763708397/

    1. Writing the Code

    1) Download Hive

    Apache original: http://archive.apache.org/dist/hive/hive-1.1.0/

    CDH version (link no longer works): https://archive.cloudera.com/p/cdh5/cdh/5  Note: the login name is an email address, and the password must contain upper- and lower-case letters, digits, and a symbol.

    Download by command (no longer works): wget https://archive.cloudera.com/cdh5/cdh/5/hive-1.1.0-cdh5.14.2.tar.gz

    CDH5 netdisk backup: https://pan.baidu.com/s/1XUGRMpjTbrJWDy9QCT9vTw?pwd=gmyf

    Comparison: the CDH build has better cross-component compatibility than the Apache original (CDH packages are version-matched against each other), but either download works for this guide.

    2) Write the script

    vi hive_install.sh

    echo "---------- Installing Hive ----------"
    # -C specifies the target directory for extraction
    tar -zxf /usr/local/hive-1.1.0-cdh5.14.2.tar.gz -C /usr/local/
    # Rename the extracted directory
    mv /usr/local/hive-1.1.0-cdh5.14.2 /usr/local/hive110
    # Configure environment variables
    echo '#hive' >> /etc/profile
    echo 'export HIVE_HOME=/usr/local/hive110' >> /etc/profile
    echo 'export PATH=$PATH:$HIVE_HOME/bin' >> /etc/profile
    # Create the configuration file hive-site.xml
    touch /usr/local/hive110/conf/hive-site.xml
    path="/usr/local/hive110/conf/hive-site.xml"
    # Write the configuration
    echo '<?xml version="1.0" encoding="UTF-8" standalone="no"?>' >> $path
    echo '<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>' >> $path
    echo '<configuration>' >> $path
    # Same values as a JDBC connection; replace the IP address, user name and password with your own
    echo '<property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://192.168.91.137:3306/hive137?createDatabaseIfNotExist=true</value></property>' >> $path
    echo '<property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.jdbc.Driver</value></property>' >> $path
    echo '<property><name>javax.jdo.option.ConnectionUserName</name><value>root</value></property>' >> $path
    echo '<property><name>javax.jdo.option.ConnectionPassword</name><value>123123</value></property>' >> $path
    echo '<property><name>hive.server2.thrift.client.user</name><value>root</value></property>' >> $path
    echo '<property><name>hive.server2.thrift.client.password</name><value>123123</value></property>' >> $path
    echo '</configuration>' >> $path
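
    After the script has run, you can quickly confirm that the generated file is well-formed XML. This is just one way to check; xmllint ships with the libxml2 package and may need to be installed first:

    cat /usr/local/hive110/conf/hive-site.xml
    xmllint --noout /usr/local/hive110/conf/hive-site.xml && echo "hive-site.xml is well-formed"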

    3) Run the script

    Add execute permission: chmod u+x hive_install.sh

    Run the .sh file: ./hive_install.sh  or  sh hive_install.sh

    4) Apply the environment variables

    source /etc/profile
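
    A quick sanity check that the variables took effect (the expected paths follow from the script above):

    echo $HIVE_HOME   # should print /usr/local/hive110
    which hive        # should resolve to /usr/local/hive110/bin/hive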

    2. Steps After Running the Script

    1) Download the MySQL JDBC driver jar

    Download: https://mvnrepository.com/artifact/mysql/mysql-connector-java/5.1.38

    Other jars listed there (not needed here): mysql-binlog-connector-java, eventuate-local-java-cdc-connector-mysql-binlog, ...

    Note: the artifact has since been moved to a new location.

    2) Put the jar into the hive110/lib directory
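
    For example, assuming the jar was downloaded into the current directory (the file name below matches version 5.1.38 from the link above; adjust it to whatever you downloaded):

    cp mysql-connector-java-5.1.38.jar /usr/local/hive110/lib/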

    3) Run the metastore schema initialization

    schematool -dbType mysql -initSchema
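
    If the initialization succeeds, the metastore tables are created in the MySQL database named in hive-site.xml (hive137 in the example). One way to confirm, using the same example credentials:

    mysql -uroot -p123123 -e 'USE hive137; SHOW TABLES;'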

    4) Start HiveServer2

    Foreground: hive --service hiveserver2

    Background: nohup hive --service hiveserver2 2>&1 &

    Combined pattern: nohup [command] > file 2>&1 &, which writes the command's output to file (file descriptor 2 is standard error; 0 is standard input and 1 is standard output).
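
    Applying that pattern to HiveServer2, with the log written to a file (the path below is only an example):

    nohup hive --service hiveserver2 > /tmp/hiveserver2.log 2>&1 &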

    3. Configuration and Verification

    1) Connect to Hive with the Beeline client

    Connect: beeline -u jdbc:hive2://localhost:10000 -n root

    Run a statement: show databases;
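
    If the connection is refused, HiveServer2 is probably still starting up; it listens on port 10000 by default. A quick check that the port is open (assuming net-tools is installed):

    netstat -nltp | grep 10000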

    2) Verify with Java

    (1) Add the dependency

    <dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-jdbc</artifactId>
        <version>1.1.0</version>
        <exclusions>
            <exclusion>
                <groupId>org.eclipse.jetty.aggregate</groupId>
                <artifactId>jetty-all</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.apache.hive</groupId>
                <artifactId>hive-shims</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    (2) Verify with code

    import java.sql.SQLException;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.sql.DriverManager;

    public class HiveAPITest {
        private static String driverName = "org.apache.hive.jdbc.HiveDriver";

        public static void main(String[] args) throws SQLException {
            try {
                Class.forName(driverName);
            } catch (ClassNotFoundException e) {
                e.printStackTrace();
                System.exit(1);
            }
            // replace "hive" here with the name of the user the queries should run as
            Connection con = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "hive", "");
            Statement stmt = con.createStatement();
            String tableName = "testHiveDriverTable";
            stmt.execute("drop table if exists " + tableName);
            stmt.execute("create table " + tableName + " (key int, value string) row format delimited fields terminated by '\t'");
            // show tables
            String sql = "show tables '" + tableName + "'";
            System.out.println("Running: " + sql);
            ResultSet res = stmt.executeQuery(sql);
            if (res.next()) {
                System.out.println(res.getString(1));
            }
            // describe table
            sql = "describe " + tableName;
            System.out.println("Running: " + sql);
            res = stmt.executeQuery(sql);
            while (res.next()) {
                System.out.println(res.getString(1) + "\t" + res.getString(2));
            }
            // load data into table
            // NOTE: filepath has to be local to the hive server
            // NOTE: /opt/tmp/a.txt is a \t separated file with two fields per line
            String filepath = "/opt/tmp/a.txt";
            sql = "load data local inpath '" + filepath + "' into table " + tableName;
            System.out.println("Running: " + sql);
            stmt.execute(sql);
            // select * query
            sql = "select * from " + tableName;
            System.out.println("Running: " + sql);
            res = stmt.executeQuery(sql);
            while (res.next()) {
                System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
            }
            // regular hive query
            sql = "select count(1) from " + tableName;
            System.out.println("Running: " + sql);
            res = stmt.executeQuery(sql);
            while (res.next()) {
                System.out.println(res.getString(1));
            }
        }
    }
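
    The load step expects /opt/tmp/a.txt to exist on the HiveServer2 host as a tab-separated file with two fields per line (see the NOTE comments in the code). A minimal way to create some test data before running the class, purely as an example:

    mkdir -p /opt/tmp
    printf '1\tone\n2\ttwo\n3\tthree\n' > /opt/tmp/a.txt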

    3) Verify with Zeppelin

    (1) Configure the interpreter

    (2) Verify in a note
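
    The original post demonstrated this step with screenshots. As a rough, unverified sketch: Zeppelin's JDBC interpreter is normally pointed at Hive with settings along the following lines, mirroring the connection used above (these property names come from Zeppelin's standard JDBC interpreter, not from the original post), plus the org.apache.hive:hive-jdbc artifact added as an interpreter dependency:

    default.driver = org.apache.hive.jdbc.HiveDriver
    default.url    = jdbc:hive2://localhost:10000
    default.user   = root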

  • Original post: https://blog.csdn.net/USTSD/article/details/128179913