HiveQL Basic Operations Reference

1. Create Operations

1.1 Create a table

CREATE TABLE pokes (foo INT, bar STRING);

1.2 Create a new table based on an existing table's schema

create table new_table like records;

1.3 Create a view:

CREATE VIEW valid_records AS SELECT * FROM records2 WHERE temperature !=9999;

1.4 Create an external table:

CREATE EXTERNAL TABLE page_view(viewTime INT, userid BIGINT,
page_url STRING, referrer_url STRING,
ip STRING COMMENT 'IP Address of the User',
country STRING COMMENT 'country of origination')
COMMENT 'This is the staging page view table'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054'
STORED AS TEXTFILE
LOCATION '<hdfs_location>';

1.5 Create a partitioned table:

CREATE TABLE par_table(viewTime INT, userid BIGINT,
page_url STRING, referrer_url STRING,
ip STRING COMMENT 'IP Address of the User')
COMMENT 'This is the page view table'
PARTITIONED BY(date STRING, pos STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
STORED AS SEQUENCEFILE;

2. Loading Data

Hive does not support inserting rows one at a time with INSERT statements, nor does it support UPDATE. Data is loaded into previously created tables with LOAD, and once imported it cannot be modified.
Supported path forms:

  • A relative path, e.g. project/data1
  • An absolute path, e.g. /user/hive/project/data1
  • A full URI with scheme, e.g. hdfs://namenode:9000/user/hive/project/data1

2.1 Load data from a local file:

LOAD DATA LOCAL INPATH '/home/hadoop/input/ncdc/micro-tab/sample.txt' OVERWRITE INTO TABLE records;

2.2 Load data into a partitioned table:

LOAD DATA LOCAL INPATH '/home/hadoop/input/hive/partitions/file1' INTO TABLE logs PARTITION (dt='2001-01-01', country='GB');

3. Inspecting Table Structure

3.1 Show all tables:

SHOW TABLES;
SHOW TABLES '.*s'; -- show only tables whose names match the regular expression

3.2 Show a table's structure

DESCRIBE invites;

3.3 Show a table's partitions:

show partitions logs;

3.4 Show all functions:

show functions;

3.5 Show a function's usage:

describe function substr;

3.6 Access arrays, maps, and structs

select col1[0],col2['b'],col3.c from complex;
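The three lookups correspond to array indexing, map lookup, and struct field access. A rough Python analogue, with a made-up row standing in for the `complex` table:

```python
# Hypothetical row mirroring Hive's complex types (data invented for illustration).
row = {
    "col1": [10, 20],          # ARRAY<INT>        -> col1[0]
    "col2": {"a": 1, "b": 2},  # MAP<STRING,INT>   -> col2['b']
    "col3": {"c": 3.0},        # STRUCT<c:DOUBLE>  -> col3.c
}
result = (row["col1"][0], row["col2"]["b"], row["col3"]["c"])
print(result)  # (10, 2, 3.0)
```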

3.7 See how many MapReduce jobs Hive will use for a query

EXPLAIN SELECT sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);

3.8 Show detailed information about a view:

DESCRIBE EXTENDED valid_records;

4. Alter Operations

4.1 Rename a table:

ALTER TABLE source RENAME TO target;

4.2 Add or replace columns

ALTER TABLE invites ADD|REPLACE COLUMNS (new_col2 INT COMMENT 'a comment');

4.3 Add or drop partitions

ALTER TABLE table_name ADD [IF NOT EXISTS] partition_spec [LOCATION 'location1'] partition_spec [LOCATION 'location2'] ...

partition_spec:
  PARTITION (partition_col = partition_col_value, partition_col = partition_col_value, ...)

ALTER TABLE table_name DROP partition_spec, partition_spec, ...

4.4 Add table metadata

ALTER TABLE table_name SET TBLPROPERTIES table_properties

table_properties:
  (property_name = property_value, ...)

4.5 Change a table's file format and organization

ALTER TABLE table_name SET FILEFORMAT file_format
ALTER TABLE table_name CLUSTERED BY(userid) SORTED BY(viewTime) INTO num_buckets BUCKETS

5. Delete Operations

5.1 Drop a table:

DROP TABLE records;

5.2 Delete a table's data while keeping its structure

dfs -rmr /user/hive/warehouse/records;

5.3 Drop a view

DROP VIEW view_name;

6. Join Operations

6.1 Inner join:

SELECT sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);

6.2 Outer joins:

SELECT sales.*, things.* FROM sales LEFT OUTER JOIN things ON (sales.id = things.id);
SELECT sales.*, things.* FROM sales RIGHT OUTER JOIN things ON (sales.id = things.id);
SELECT sales.*, things.* FROM sales FULL OUTER JOIN things ON (sales.id = things.id);

6.3 IN subqueries: not supported by Hive, but LEFT SEMI JOIN achieves the same effect

SELECT * FROM things LEFT SEMI JOIN sales ON (sales.id = things.id);
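The semantics of the query above (keep each `things` row at most once when a matching `sales.id` exists) can be sketched in Python, with toy rows standing in for the two tables:

```python
# Toy stand-ins for the two tables; ids and values are invented.
things = [{"id": 1}, {"id": 2}, {"id": 3}]
sales = [{"id": 1, "amount": 10}, {"id": 1, "amount": 20}, {"id": 3, "amount": 5}]

# LEFT SEMI JOIN: keep a left-table row if ANY right-table row matches,
# and never emit it more than once (unlike a plain inner join).
sale_ids = {s["id"] for s in sales}
semi = [t for t in things if t["id"] in sale_ids]
print([t["id"] for t in semi])  # [1, 3]
```

Note that id 1 appears once in the result even though it matches two `sales` rows; that is what distinguishes a semi join from an inner join.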

6.4 Map join: Hive can load the smaller table into each mapper's memory to perform the join

SELECT /*+ MAPJOIN(things) */ sales.*, things.* FROM sales JOIN things ON (sales.id = things.id);
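The idea behind a map join is a broadcast hash join: build an in-memory hash table from the small table, then stream the large table past it with no reduce phase. A minimal Python sketch with invented rows:

```python
# things is the small table, sales the large one; rows are invented.
things = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
sales = [{"id": 1, "amount": 10}, {"id": 2, "amount": 7}, {"id": 9, "amount": 1}]

# Build phase: hash the small table by join key (held in each mapper's memory).
by_id = {t["id"]: t for t in things}

# Probe phase: stream the large table, emitting a joined row on each key hit.
joined = [{**s, **by_id[s["id"]]} for s in sales if s["id"] in by_id]
print(joined)
```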

INSERT OVERWRITE TABLE ... SELECT: the target table must already exist

FROM records2
INSERT OVERWRITE TABLE stations_by_year SELECT year, COUNT(DISTINCT station) GROUP BY year
INSERT OVERWRITE TABLE records_by_year SELECT year, COUNT(1) GROUP BY year
INSERT OVERWRITE TABLE good_records_by_year SELECT year, COUNT(1) WHERE temperature != 9999 AND (quality = 0 OR quality = 1 OR quality = 4 OR quality = 5 OR quality = 9) GROUP BY year;
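The point of the multi-insert form is that `records2` is scanned once while feeding several aggregations. That single-pass idea can be sketched in Python (the sample rows and field values are invented):

```python
from collections import defaultdict

# Invented sample rows standing in for records2.
records2 = [
    {"year": 1950, "station": "s1", "temperature": 22, "quality": 1},
    {"year": 1950, "station": "s2", "temperature": 9999, "quality": 1},
    {"year": 1951, "station": "s1", "temperature": 18, "quality": 9},
]

stations_by_year = defaultdict(set)      # year -> distinct stations
records_by_year = defaultdict(int)       # year -> row count
good_records_by_year = defaultdict(int)  # year -> valid-row count

# One pass over the source feeds all three outputs,
# mirroring FROM records2 INSERT ... INSERT ... INSERT ...
for r in records2:
    stations_by_year[r["year"]].add(r["station"])
    records_by_year[r["year"]] += 1
    if r["temperature"] != 9999 and r["quality"] in (0, 1, 4, 5, 9):
        good_records_by_year[r["year"]] += 1

print(records_by_year[1950], good_records_by_year[1950])  # 2 1
```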

CREATE TABLE ... AS SELECT: the target table must not already exist

CREATE TABLE target AS SELECT col1,col2 FROM source;

7. Inserting Data

7.1 Basic form

INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1 FROM from_statement

7.2 Multi-insert form

FROM from_statement
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1
[INSERT OVERWRITE TABLE tablename2 [PARTITION ...] select_statement2] ...

7.3 Dynamic-partition form

INSERT OVERWRITE TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...) select_statement FROM from_statement

8. Exporting Data to HDFS

When data is written out to the filesystem it is serialized as text, with columns separated by ^A (Ctrl-A) and rows by \n.

INSERT OVERWRITE [LOCAL] DIRECTORY directory1 SELECT ... FROM ...
FROM from_statement
INSERT OVERWRITE [LOCAL] DIRECTORY directory1 select_statement1
[INSERT OVERWRITE [LOCAL] DIRECTORY directory2 select_statement2]
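A file exported this way can be split back into columns on the ^A byte (`\x01`). A minimal Python sketch, with a made-up export line:

```python
# One line of a hypothetical Hive text export: columns joined by ^A (\x01), rows by \n.
line = "1950\x01s1\x0122\n"
fields = line.rstrip("\n").split("\x01")
print(fields)  # ['1950', 's1', '22']
```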

Common JavaWeb Operations

### Common operations

  • Get a resource from the classpath:

URL path = Thread.currentThread().getContextClassLoader().getResource("server.xml");
  • Read a properties file:

InputStream in = new BufferedInputStream(new FileInputStream(name));  // name: path to the .properties file
    Properties p = new Properties();
    p.load(in);

### Common software

#### Maven

  • Package command:
    $mvn package -DskipTests -Pproduct

#### Nginx

  • Reload command:
/usr/nginx/sbin/nginx -s reload    # graceful restart

#### Memcached

  • Start command:
    $/usr/local/memcached/bin/memcached -d -c 10240 -m 1024 -u root

-d  run as a daemon
-m  memory allocated to Memcached in MB (1024 MB here; default is 64 MB)
-u  user to run Memcached as (root here)
-l  server IP address to listen on (defaults to the local host)
-p  port to listen on (default 11211; preferably above 1024)
-c  maximum number of concurrent connections (default 1024; set to 10240 here, tune to your server's load)
-P  location of the Memcached pid file
-h  print help
-v  print warnings and errors
-vv also print client requests and responses

Shell Commands Every Linux Beginner Should Know

### Basics

  • Search running processes by keyword:

    $ps -ef|grep tomcat
  • Extract a tar.gz archive:

    $tar -xzvf abc.tar.gz
  • Modify environment variables:

$vi /etc/profile    # run as root
Append a new line at the end: export PATH="/home/abc/bin:$PATH", then save and quit
$source /etc/profile    # apply the change immediately
  • Find a file:

    $find / -name abc.sh
  • Run a program in the background (keeps running after the user logs out):

    $nohup abc.sh &

### Firewall

#### CentOS firewall

  • Open a port:
    $/sbin/iptables -I INPUT -p tcp --dport 22 -j ACCEPT  
    $/etc/rc.d/init.d/iptables save

After the machine is rebooted, the firewall keeps port 22 open by default.

  • Temporarily disable the firewall completely, without rebooting:

$/etc/init.d/iptables status     # check firewall status
$/etc/init.d/iptables stop       # stop the firewall until next boot
$/etc/init.d/iptables restart    # restart the firewall
  • Permanently disable the firewall:

$chkconfig --level 35 iptables off    # note the two hyphens before "level"; reboot for it to take effect

Setting a Site Alias in Tomcat

  • Open the conf/server.xml file under the Tomcat directory and add a line <Alias>your-domain</Alias>:
<Host name="lnho.github.io" appBase="webapp" unpackWARs="true"
      autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
  <Alias>www.lnho.com</Alias>
  <Context path="" docBase="D:/webapps" crossContext="true" debug="0"
           reloadable="true"/>
</Host>

Oracle Tablespace Operations

### Querying tablespaces

  • Run select name from v$datafile; to find the locations of the data files in a tablespace.
  • Run select * from DBA_DATA_FILES; for data file locations plus file names, tablespace names, and more (a fuller view than the previous query).

### Creating a user

-- create a temporary tablespace
create temporary tablespace score_temp
tempfile '/home/usr/oracle/oradata/orcl/score/score_temp.dbf'
size 50m
autoextend on
next 50m maxsize 20480m
extent management local;

-- create a data tablespace
create tablespace score
logging
datafile '/home/usr/oracle/oradata/orcl/score/score.dbf'
size 100m
autoextend on
next 50m maxsize 20480m
extent management local;

-- create a user and assign the tablespaces as defaults
create user score identified by score
default tablespace score
temporary tablespace score_temp;

-- grant privileges to the user
grant connect,resource to score;