Guiding questions
1. How do you create Hive partitioned tables with a script?
2. How is periodic scheduling done on Linux?
3. How do you periodically load data from HDFS into Hive partitioned tables?
Previous post:
Big Data in Practice: App Management Platform Log Analysis (Part 2)
http://www.aboutyun.com/forum.php?mod=viewthread&tid=27036
I. Create the Hive partitioned tables
----------------------------------------------------
1. Create the database
hive> create database applogsdb;
2. Create the partitioned tables
Write the DDL as a script.
[applogs_create_table.sql]
use applogsdb;
--startup
CREATE external TABLE ext_startup_logs(createdAtMs bigint,appId string,tenantId string,deviceId string,appVersion string,appChannel string,appPlatform string,osType string,deviceStyle string,country string,province string,ipAddress string,network string,carrier string,brand string,screenSize string)PARTITIONED BY (ym string, day string,hm string) ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe' STORED AS TEXTFILE;
--error
CREATE external TABLE ext_error_logs(createdAtMs bigint,appId string,tenantId string,deviceId string,appVersion string,appChannel string,appPlatform string,osType string,deviceStyle string,errorBrief string,errorDetail string)PARTITIONED BY (ym string, day string,hm string) ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe' STORED AS TEXTFILE;
--event
CREATE external TABLE ext_event_logs(createdAtMs bigint,appId string,tenantId string,deviceId string,appVersion string,appChannel string,appPlatform string,osType string,deviceStyle string,eventId string,eventDurationSecs bigint,paramKeyValueMap Map<string,string>)PARTITIONED BY (ym string, day string,hm string) ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe' STORED AS TEXTFILE;
--page
CREATE external TABLE ext_page_logs(createdAtMs bigint,appId string,tenantId string,deviceId string,appVersion string,appChannel string,appPlatform string,osType string,deviceStyle string,pageViewCntInSession int,pageId string,visitIndex int,nextPage string,stayDurationSecs bigint)PARTITIONED BY (ym string, day string,hm string) ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe' STORED AS TEXTFILE;
--usage
CREATE external TABLE ext_usage_logs(createdAtMs bigint,appId string,tenantId string,deviceId string,appVersion string,appChannel string,appPlatform string,osType string,deviceStyle string,singleUseDurationSecs bigint,singleUploadTraffic bigint,singleDownloadTraffic bigint)PARTITIONED BY (ym string, day string,hm string) ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe' STORED AS TEXTFILE;
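Note: the JsonSerDe referenced above is the third-party openx json-serde and does not ship with Hive, so its jar must be on Hive's classpath before this DDL will run. A minimal sketch is shown below; the jar file name is an assumption and depends on the version actually downloaded.
// copy the serde jar into Hive's lib directory (file name is hypothetical)
$> cp ~/Downloads/json-serde-1.3.8-jar-with-dependencies.jar /soft/hive/lib/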
3. Run the applogs_create_table.sql script
$> hive -f /share/umeng/applogs_create_table.sql
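A quick way to confirm that the script ran is to list the tables it created (a sketch; output depends on your environment):
$> hive -e 'use applogsdb; show tables;'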
II. Use Linux cron to periodically load HDFS data into the Hive partitioned tables
----------------------------------------------------------------
1. What scheduling means
Scheduling simply means running a specified task on a periodic basis.
2. Install cron on Ubuntu
$> sudo apt-get install cron
3. Service commands [Ubuntu]
$> /usr/sbin/service cron start
$> /usr/sbin/service cron status
$> /usr/sbin/service cron restart
$> /usr/sbin/service cron stop
4. Service commands [CentOS]
// check status
$> service crond status
// stop
$> service crond stop
// start
$> service crond start
5. Configure a scheduled task
a. In /etc/crontab, the first five fields are, in order:
minute (0-59)   hour (0-23)   day of month (1-31)   month (1-12)   day of week (0-6)
followed by the user to run as and the command:
* * * * * ubuntu source /etc/profile;echo `date` >> ~/1.log
// five *'s are wildcards: the command after the user field (source /etc/profile;echo `date` >> ~/1.log) runs every minute
// 1 minute is the smallest interval cron supports
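As a further illustration of the field layout (a sketch reusing the echo example above), two common schedules look like this:
// every 5 minutes
*/5 * * * * ubuntu source /etc/profile;echo `date` >> ~/1.log
// once a day at 02:00 (the interval mentioned for production use in step 10 below)
0 2 * * * ubuntu source /etc/profile;echo `date` >> ~/1.log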
6. The date command
date -d "-3 minute" +%Y%m-%d-%H%M   // the time 3 minutes ago
date -d "3 minute" +%Y%m-%d-%H%M    // the time 3 minutes from now
date -d "3 hour" +%Y%m-%d-%H%M      // the time 3 hours from now
7. Editing files with sed
// delete the first line
$> sed '1d' 1.log
// delete the last line
$> sed '$d' 1.log
// delete a range of lines
$> sed '1,3d' 1.log
// delete all lines
$> sed '1,$d' 1.log
// p: print -- each line is copied to the output in addition to the default print, so every line appears twice
$> sed '1,$p' 1.log
// -n: quiet mode, only show the processed lines -- each line is printed exactly once
$> sed -n '1,$p' 1.log
// -i: modify the source file in place [1,$p]
$> sed -i '1,$p' 1.log
// print only the lines containing "hello" [/.../p]
$> sed -n '/hello/p' 1.log
// append a new line "hello" after line 1 [1a]
$> sed -i '1ahello' 1.log
// append a new line, using a backslash to keep the leading whitespace [1a\]
$> sed -i '1a\ hello' 1.log
// append a new line "hello" after each of lines 1-3 [1,3a] --- append
$> sed -i '1,3ahello' 1.log
// change (replace) whole lines 1-2 with "kkk" [1,2c] -- change
$> sed -i '1,2ckkk' 1.log
// substitute a string: replace "hello" with "how" [s/../../g]
$> sed -i 's/hello/how/g' 1.log
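The last form, s/../../g, is exactly how the load template in step 8 is filled in: the literal placeholder ${ym} in the file is swapped for a concrete value (201811 below is hypothetical). The single quotes keep the shell from expanding ${ym} before sed sees it.
$> sed -i 's/${ym}/201811/g' exportData.sql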
8. Write the load script that periodically imports HDFS files into the Hive partitioned tables
[~/Downloads/.exportData.sql]
load data inpath '/data/applogs/startup/${ym}/${day}/${hm}' into table applogsdb.ext_startup_logs partition(ym='${ym}',day='${day}',hm='${hm}');
load data inpath '/data/applogs/error/${ym}/${day}/${hm}' into table applogsdb.ext_error_logs partition(ym='${ym}',day='${day}',hm='${hm}');
load data inpath '/data/applogs/event/${ym}/${day}/${hm}' into table applogsdb.ext_event_logs partition(ym='${ym}',day='${day}',hm='${hm}');
load data inpath '/data/applogs/page/${ym}/${day}/${hm}' into table applogsdb.ext_page_logs partition(ym='${ym}',day='${day}',hm='${hm}');
load data inpath '/data/applogs/usage/${ym}/${day}/${hm}' into table applogsdb.ext_usage_logs partition(ym='${ym}',day='${day}',hm='${hm}');
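After the placeholders are substituted (see step 9), every line becomes a concrete load statement. For a hypothetical one-minute slice at 2018-11-11 09:27, the first line would read:
load data inpath '/data/applogs/startup/201811/11/0927' into table applogsdb.ext_startup_logs partition(ym='201811',day='11',hm='0927');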
9. Write the driver script -- each run loads only one minute's slice of data, namely the minute that ended 3 minutes ago.
[~/Downloads/exec.sh]
#!/bin/bash
systime=`date -d "-3 minute" +%Y%m-%d-%H%M`
ym=`echo ${systime} | awk -F '-' '{print $1}'`
day=`echo ${systime} | awk -F '-' '{print $2}'`
hm=`echo ${systime} | awk -F '-' '{print $3}'`
cp ~/Downloads/.exportData.sql ~/Downloads/exportData.sql
sed -i 's/${ym}/'${ym}'/g' ~/Downloads/exportData.sql
sed -i 's/${day}/'${day}'/g' ~/Downloads/exportData.sql
sed -i 's/${hm}/'${hm}'/g' ~/Downloads/exportData.sql
# run hive; the hive command must be given with its full path here, otherwise cron will not find it
/soft/hive/bin/hive -f ~/Downloads/exportData.sql
rm ~/Downloads/exportData.sql
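Before handing the script to cron, it helps to make it executable and run it once by hand (a sketch; paths as above):
$> chmod +x ~/Downloads/exec.sh
// run once manually, then check that the matching partition shows up in Hive
$> ~/Downloads/exec.sh
$> /soft/hive/bin/hive -e "show partitions applogsdb.ext_startup_logs;"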
10. Schedule exec.sh to run automatically every minute [in production this would typically run once a day, at 2:00 AM]
$> sudo nano /etc/crontab
* * * * * ubuntu source /etc/profile;~/Downloads/exec.sh
// manage the cron service
$> /usr/sbin/service cron start
$> /usr/sbin/service cron status
$> /usr/sbin/service cron stop
III. Export the web project as a war package and deploy it to Tomcat on the server
---------------------------------------------------------------------
1. Install Tomcat
a. Download
apache-tomcat-7.0.72.tar.gz
b. Unpack
$> tar -xzvf ~/Downloads/apache-tomcat-7.0.72.tar.gz -C /soft
c. Create a symlink
$> ln -s /soft/apache-tomcat-7.0.72 /soft/tomcat
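Optionally confirm the installation with the version script that ships with the Tomcat binary distribution:
$> /soft/tomcat/bin/version.sh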
2. Export the web project as a war package
a. In the web project's pom.xml, add the build plugins and the dependency on the common module
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.test</groupId>
    <artifactId>app-logs-collect-web</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.12.4</version>
                <configuration>
                    <skipTests>true</skipTests>
                </configuration>
            </plugin>
            <plugin>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.6</version>
                <configuration>
                    <warSourceDirectory>web</warSourceDirectory>
                    <failOnMissingWebXml>false</failOnMissingWebXml>
                    <excludes>css/*,images/*,js/*,png/*,phone/*</excludes>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
            <version>2.8.8</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <version>2.8.3</version>
        </dependency>
        <dependency>
            <groupId>com.maxmind.db</groupId>
            <artifactId>maxmind-db</artifactId>
            <version>1.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>4.3.5.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>javax.servlet</groupId>
            <artifactId>servlet-api</artifactId>
            <version>2.5</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.24</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.11</artifactId>
            <version>0.10.0.1</version>
        </dependency>
        <dependency>
            <groupId>com.test</groupId>
            <artifactId>app-analyze-common</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>
    </dependencies>
</project>
b. Because the web project depends on the shared common module, install the common module first so that its rebuilt artifact lands in the local .m2 repository before the war is exported.
maven --> install common module ...
c. Then package the web application into a war file named app-web.war
3. Copy the war file to ${tomcat}/webapps on the server
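For example, with the symlink created in step 1 (the source path below is an assumption of where Maven wrote app-web.war; adjust it to your build output):
$> cp ~/app-logs-collect-web/target/app-web.war /soft/tomcat/webapps/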
4. Start Tomcat
$> /soft/tomcat/bin/startup.sh
5. Verify
$> netstat -anop | grep 8080
6. Start Flume
$> flume-ng agent -f applog.conf -n a1
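applog.conf is the agent configuration built in the earlier parts of this series. For reference only, a minimal sketch of such a configuration for the startup log flow is shown below; the broker address and topic name are assumptions, and the other log types would be configured analogously.
# Kafka source -> memory channel -> HDFS sink (startup logs only)
a1.sources  = r1
a1.channels = c1
a1.sinks    = k1
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.kafka.bootstrap.servers = s100:9092
a1.sources.r1.kafka.topics = topic-app-startup
a1.channels.c1.type = memory
a1.sinks.k1.type = hdfs
# directory layout matches the partitions loaded in step 8: /data/applogs/startup/${ym}/${day}/${hm}
a1.sinks.k1.hdfs.path = /data/applogs/startup/%Y%m/%d/%H%M
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1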
7. Change the server address that the mobile app connects to.
UploadUtil.java, line 21:
URL url = new URL("http://s100:8080/app-web/coll/index");
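Once Tomcat is up, the collection endpoint can be smoke-tested from the command line (a sketch; only reachability is checked, since the response body depends on the controller):
// -i prints the HTTP status line; any HTTP response shows the servlet is deployed
$> curl -i http://s100:8080/app-web/coll/index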
8. At this point the log data is collected, uploaded, and loaded into Hive.
IV. Hive queries
-----------------------------------------------------------
1. Query the number of users of a given app through Hive [deduplicated by device]
hive> select count(distinct deviceid) from ext_startup_logs where appid = 'sdk34734';
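The partition columns can be used to narrow the same count to a single day (the appid, ym and day values below are placeholders):
hive> select count(distinct deviceid) from ext_startup_logs where appid = 'sdk34734' and ym = '201811' and day = '11';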
---------------------
Author: 葛红富
Source: CSDN
Original: https://blog.csdn.net/xcvbxv01/article/details/84296229