This article mainly covers "Installing Hive on Linux" and related topics; readers interested in enterprise development may find it a useful reference.
Having already covered installing Hive on Linux, integrating it with Hadoop, and storing Hive's metadata in MySQL, today let's take a look at how to operate Hive from Eclipse via JDBC.
package com.test;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/**
 * Operating Hive via JDBC from Windows 7
 * @author qindongliang
 *
 * Big data technology discussion group: 376932160
 * **/
public class HiveJDBClient {

    /** The Hive JDBC driver class name */
    private static String driver = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws Exception {
        // Load the Hive driver
        Class.forName(driver);
        // Get a hive2 JDBC connection; note that the default database is "default"
        Connection conn = DriverManager.getConnection("jdbc:hive2://192.168.46.32/default", "search", "dongliang");
        Statement st = conn.createStatement();
        String tableName = "mytt"; // table name
        // Compute the average; this is converted into a MapReduce job
        ResultSet rs = st.executeQuery("select avg(count) from " + tableName);
        // ResultSet rs = st.executeQuery("select * from " + tableName); // select all rows; runs directly without MapReduce
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        System.out.println("Success!");
        rs.close();
        st.close();
        conn.close();
    }
}
The output is as follows:
48.6
Success!
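For context, the averaging query above assumes a table named mytt with a numeric count column already exists in the default database. The schema, column names, and data-file path below are illustrative assumptions rather than details from the original setup; a minimal sketch of creating and loading such a table over the same hive2 JDBC connection might look like this:

package com.test;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

/**
 * Hypothetical sketch: create and load the mytt table that the example above queries.
 * The column names, types, and data-file path are assumptions, not taken from the article.
 * **/
public class CreateMyttTable {

    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection("jdbc:hive2://192.168.46.32/default", "search", "dongliang");
        Statement st = conn.createStatement();
        // Assumed schema: a text column plus the numeric count column used by avg(count)
        st.execute("create table if not exists mytt(name string, count int) row format delimited fields terminated by '\\t'");
        // Load a tab-separated data file from the HiveServer2 host (path is hypothetical)
        st.execute("load data local inpath '/home/search/mytt.txt' overwrite into table mytt");
        st.close();
        conn.close();
    }
}

Note that when going through HiveServer2, "local" in LOAD DATA LOCAL INPATH refers to the filesystem of the machine running hiveserver2, not the Windows client.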
Log output on the Hive hiveserver2 side:
[search@h1 bin]$ ./hiveserver2
Starting HiveServer2
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/08/05 04:00:02 INFO Configuration.deprecation: mapred.committer.job.setup.cleanup.needed is deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed
14/08/05 04:00:02 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
OK
OK
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=
In order to set a constant number of reducers:
set mapreduce.job.reduces=
Starting Job = job_1407179651448_0001, Tracking URL = http://h1:8088/proxy/application_1407179651448_0001/
Kill Command = /home/search/hadoop/bin/hadoop job -kill job_1407179651448_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2014-08-05 04:03:49,951 Stage-1 map = 0%, reduce = 0%
2014-08-05 04:04:19,118 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.74 sec
2014-08-05 04:04:30,860 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.7 sec
MapReduce Total cumulative CPU time: 3 seconds 700 msec
Ended Job = job_1407179651448_0001
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 3.7 sec HDFS Read: 253 HDFS Write: 5 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 700 msec
OK
Screenshot of Hadoop's 8088 web UI:

The following SQL statement is not converted into a MapReduce job: select * from mytt limit 3;
The output is as follows:
中国
美国
中国
Success!
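To round this out, here is a minimal sketch of issuing that fetch-only query over JDBC (same assumed host, user, and password as above); because it involves no aggregation, HiveServer2 answers it with a direct fetch instead of launching a MapReduce job:

package com.test;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/**
 * Hypothetical sketch: run the fetch-only query over the hive2 JDBC connection.
 * Connection details are the same assumptions as in the example above.
 * **/
public class HiveFetchQuery {

    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection("jdbc:hive2://192.168.46.32/default", "search", "dongliang");
        Statement st = conn.createStatement();
        ResultSet rs = st.executeQuery("select * from mytt limit 3");
        while (rs.next()) {
            System.out.println(rs.getString(1)); // print the first column of each of the three rows
        }
        rs.close();
        st.close();
        conn.close();
    }
}

Whether Hive serves such a statement as a plain fetch or compiles it into a job is governed by the hive.fetch.task.conversion setting; in the Hive releases of this period its default value, minimal, already covers simple select-star-with-limit queries.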