
Installing Hive on Linux (3/4)

Source: compiled from the web     Date: 2016-01-17

This series covers installing Hive on Linux, integrating it with Hadoop, and storing Hive's metadata in MySQL. In this part we look at how to work with Hive over JDBC from Eclipse.

package com.test;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/**
 * Querying Hive over JDBC from a Windows 7 client
 * @author qindongliang
 * 
 * Big data tech discussion group: 376932160
 * **/
public class HiveJDBClient {

	/** Hive JDBC driver class name */
	private static String driver="org.apache.hive.jdbc.HiveDriver";

	public static void main(String[] args) throws Exception{
		//load the Hive JDBC driver
		Class.forName(driver);
		//open a HiveServer2 JDBC connection; note the default database is "default"
		Connection conn=DriverManager.getConnection("jdbc:hive2://192.168.46.32/default", "search", "dongliang");
	    Statement st=conn.createStatement();
	    String tableName="mytt";//table name
	    ResultSet rs=st.executeQuery("select  avg(count) from "+tableName+" ");//average: compiled into a MapReduce job
	    //ResultSet rs=st.executeQuery("select  * from "+tableName+" ");//select all: runs as a direct fetch, no MapReduce
	    while(rs.next()){
	    	System.out.println(rs.getString(1)+"   ");
	    }
	    System.out.println("成功!");
	    st.close();
	    conn.close();

	}

}
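For completeness, the mytt table queried above can be created and populated over the same JDBC connection. The following is only a sketch: the column names (name, count) and the local data file path are assumptions, since the article does not show the table's DDL; the connection settings are reused from the class above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveCreateTable {

	public static void main(String[] args) throws Exception{
		Class.forName("org.apache.hive.jdbc.HiveDriver");
		try(Connection conn=DriverManager.getConnection("jdbc:hive2://192.168.46.32/default", "search", "dongliang");
		    Statement st=conn.createStatement()){
			//hypothetical schema: the article only shows a numeric "count" column being averaged
			st.execute("create table if not exists mytt (name string, count int) "
			          +"row format delimited fields terminated by ','");
			//hypothetical path to a local CSV file on the HiveServer2 host
			st.execute("load data local inpath '/home/search/data/mytt.csv' overwrite into table mytt");
		}
	}

}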


The result:

48.6
Success!


Log output from the HiveServer2 side:

[search@h1 bin]$ ./hiveserver2 
Starting HiveServer2
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.committer.job.setup.cleanup.needed is deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed
14/08/05 04:00:02 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
OK
OK
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapreduce.job.reduces=
Starting Job = job_1407179651448_0001, Tracking URL = http://h1:8088/proxy/application_1407179651448_0001/
Kill Command = /home/search/hadoop/bin/hadoop job  -kill job_1407179651448_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2014-08-05 04:03:49,951 Stage-1 map = 0%,  reduce = 0%
2014-08-05 04:04:19,118 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.74 sec
2014-08-05 04:04:30,860 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.7 sec
MapReduce Total cumulative CPU time: 3 seconds 700 msec
Ended Job = job_1407179651448_0001
MapReduce Jobs Launched: 
Job 0: Map: 1  Reduce: 1   Cumulative CPU: 3.7 sec   HDFS Read: 253 HDFS Write: 5 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 700 msec
OK
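The reducer hints printed above (hive.exec.reducers.bytes.per.reducer, hive.exec.reducers.max, mapreduce.job.reduces) can also be set per session over the same JDBC connection before running a query. A minimal sketch, reusing the connection settings from the class above; forcing a single reducer here is just an example:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveSetReducers {

	public static void main(String[] args) throws Exception{
		Class.forName("org.apache.hive.jdbc.HiveDriver");
		try(Connection conn=DriverManager.getConnection("jdbc:hive2://192.168.46.32/default", "search", "dongliang");
		    Statement st=conn.createStatement()){
			//session-level setting: force one reducer for the following query
			st.execute("set mapreduce.job.reduces=1");
			try(ResultSet rs=st.executeQuery("select avg(count) from mytt")){
				while(rs.next()){
					System.out.println(rs.getString(1));
				}
			}
		}
	}

}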


Screenshot of the Hadoop web UI on port 8088:
[Screenshot: Hadoop 8088 ResourceManager UI]
The following SQL statement is not compiled into a MapReduce job: select * from mytt limit 3;
The result:

中国
美国
中国
Success!
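To reproduce this over JDBC, issue the same statement through the connection used above; a minimal self-contained sketch (the rows printed are simply whatever is in mytt):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveLimitQuery {

	public static void main(String[] args) throws Exception{
		Class.forName("org.apache.hive.jdbc.HiveDriver");
		try(Connection conn=DriverManager.getConnection("jdbc:hive2://192.168.46.32/default", "search", "dongliang");
		    Statement st=conn.createStatement();
		    //a limit query like this is served by a direct fetch, no MapReduce job is launched
		    ResultSet rs=st.executeQuery("select * from mytt limit 3")){
			while(rs.next()){
				System.out.println(rs.getString(1));
			}
		}
	}

}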
