Hive errors

Solving two Hive startup failures (2011-02-15 10:21:15)
Tags: hive, hadoop, bash, conditional binary operator expected, memory, IT    Category: Original
I recently set up a Hadoop + Hive test environment on an old Linux server.
Hadoop version: 0.20.2
Hive version: 0.6.0
Problem 1: Wrong Bash version
Hadoop was started in pseudo-distributed mode and came up without any trouble.
Hive, however, kept throwing the following errors:
#hive
/opt/hive/bin/hive: /opt/hive/bin/ext/hiveserver.sh: line 19: conditional binary operator expected
/opt/hive/bin/hive: /opt/hive/bin/ext/hiveserver.sh: line 19: syntax error near `=~'
/opt/hive/bin/hive: /opt/hive/bin/ext/hiveserver.sh: line 19: ` if [[ "$version" =~ $version_re ]]; then'
/opt/hive/bin/hive: /opt/hive/bin/ext/hwi.sh: line 32: conditional binary operator expected
/opt/hive/bin/hive: /opt/hive/bin/ext/hwi.sh: line 32: syntax error near `=~'
/opt/hive/bin/hive: /opt/hive/bin/ext/hwi.sh: line 32: ` if [[ "$version" =~ $version_re ]]; then'
/opt/hive/bin/hive: /opt/hive/bin/ext/jar.sh: line 36: conditional binary operator expected
/opt/hive/bin/hive: /opt/hive/bin/ext/jar.sh: line 36: syntax error near `=~'
/opt/hive/bin/hive: /opt/hive/bin/ext/jar.sh: line 36: ` if [[ "$version" =~ $version_re ]]; then'
/opt/hive/bin/hive: /opt/hive/bin/ext/metastore.sh: line 18: conditional binary operator expected
/opt/hive/bin/hive: /opt/hive/bin/ext/metastore.sh: line 18: syntax error near `=~'
/opt/hive/bin/hive: /opt/hive/bin/ext/metastore.sh: line 18: ` if [[ "$version" =~ $version_re ]]; then'
/opt/hive/bin/hive: /opt/hive/bin/ext/util/execHiveCmd.sh: line 21: conditional binary operator expected
/opt/hive/bin/hive: /opt/hive/bin/ext/util/execHiveCmd.sh: line 21: syntax error near `=~'
/opt/hive/bin/hive: /opt/hive/bin/ext/util/execHiveCmd.sh: line 21: ` if [[ "$version" =~ $version_re ]]; then'
/opt/hive/bin/hive: line 6: execHiveCmd: command not found
Neither Google nor Baidu turned up the cause, so there was nothing for it but to read the code myself:
# cat $HIVE_HOME/bin/ext/hiveserver.sh
Line fourteen carries this comment:
# Save the regex to a var to workaround quoting incompatabilities
# between Bash 3.1 and 3.2
So Hive's scripts are written to run on Bash 3.1 or 3.2.
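For reference, here is a minimal sketch (not the actual Hive script) of the construct that trips up older shells: the =~ regex operator inside [[ ]] only exists in Bash 3.0 and later, and keeping the pattern in a variable works around quoting differences between 3.1 and 3.2. The version string below is a hypothetical stand-in; Hive derives it from the installed Hadoop.

#!/bin/bash
# Minimal sketch of the failing construct, not the real hiveserver.sh
version="0.20.2"                  # hypothetical value; Hive reads this from `hadoop version`
version_re='^([0-9]+)\.([0-9]+)'  # pattern kept in a variable (Bash 3.1/3.2 quoting workaround)
if [[ "$version" =~ $version_re ]]; then
    echo "major=${BASH_REMATCH[1]}, minor=${BASH_REMATCH[2]}"
fi

On Bash 2.x this is exactly the "conditional binary operator expected" / "syntax error near `=~'" seen above.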
Check the Bash version on this machine:
#bash --version
GNU bash, version 2.05b(1)-release (i386-redhat-linux-gnu)
No wonder it fails; that is the root cause.
The fix is simple: install Bash 3.1.
Download bash-3.1 from http://ftp.gnu.org/gnu/bash/
Then install it:
#mv bash-3.1.tar.gz /usr/local/src
#tar zxvf bash-3.1.tar.gz
#cd bash-3.1
#./configure
#make
#make install
Log in to the Linux server again, and bash now reports version 3.1.
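One thing worth double-checking (an assumption about the default build: ./configure with no prefix installs the new binary under /usr/local/bin, while the old one usually remains at /bin/bash) is that the shell you actually get after logging in is the new one:

#which bash
#bash --version
#echo $BASH_VERSION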
Problem 2: Not enough memory
With Bash 3.1 in place, starting Hive threw a new error.
#hive
Invalid maximum heap size: -Xmx4096m
The specified size exceeds the maximum representable size.
Could not create the Java virtual machine.
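Before touching any configuration it helps to see how much physical memory the server actually has; free -m is one quick way (the numbers will of course vary per machine):

#free -m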
Check how much memory the Java processes are using:
#ps -ef | grep java
root 15542 1 0 10:58 pts/1 00:00:02 /opt/jdk1.6.0_23/bin/java -Xmx1000m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote -Dhadoop.log.dir=/opt/hadoop-0.20.2/bin/../logs -Dhadoop.log.file=hadoop-root-namenode-lmschina-web1.log -Dhadoop.home.dir=/opt/hadoop-0.20.2/bin/.. -Dhadoop.id.str=root -Dha
root 15646 1 0 10:58 ? 00:00:02 /opt/jdk1.6.0_23/bin/java -Xmx1000m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote -Dhadoop.log.dir=/opt/hadoop-0.20.2/bin/../logs -Dhadoop.log.file=hadoop-root-datanode-lmschina-web1.log -Dhadoop.home.dir=/opt/hadoop-0.20.2/bin/.. -Dhadoop.id.str=root -Dha
root 15744 1 0 10:58 ? 00:00:02 /opt/jdk1.6.0_23/bin/java -Xmx1000m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote -Dhadoop.log.dir=/opt/hadoop-0.20.2/bin/../logs -Dhadoop.log.file=hadoop-root-secondarynamenode-lmschina-web1.log -Dhadoop.home.dir=/opt/hadoop-0.20.2/bin/.. -Dhadoop.id.str=
root 15819 1 0 10:58 pts/1 00:00:02 /opt/jdk1.6.0_23/bin/java -Xmx1000m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote -Dhadoop.log.dir=/opt/hadoop-0.20.2/bin/../logs -Dhadoop.log.file=hadoop-root-jobtracker-lmschina-web1.log -Dhadoop.home.dir=/opt/hadoop-0.20.2/bin/.. -Dhadoop.id.str=root -D
root 15923 1 0 10:58 ? 00:00:02 /opt/jdk1.6.0_23/bin/java -Xmx1000m -Dhadoop.log.dir=/opt/hadoop-0.20.2/bin/../logs -Dhadoop.log.file=hadoop-root-tasktracker-lmschina-web1.log -Dhadoop.home.dir=/opt/hadoop-0.20.2/bin/.. -Dhadoop.root.logger=INFO,DRFA -Djava.library.path=/opt/hadoop- -Dhadoop.id.str=root
There it is: five Hadoop daemons, each with a 1000 MB heap. No wonder memory runs out.
To be fair, this is not really the fault of Hadoop's defaults. In a normal distributed deployment a single server would use at most around 2000 MB; here, in pseudo-distributed mode, every daemon is packed onto one machine, so the totals pile up.
Enough talk. Change the configuration by editing conf/hadoop-env.sh:
# The maximum amount of heap to use, in MB. Default is 1000.
export HADOOP_HEAPSIZE=256
This caps each daemon at 256 MB.
Restart Hadoop:
#start-all.sh
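As a quick sanity check (the grep pattern below is just one way to pull out the heap flag), every Hadoop daemon should now report -Xmx256m instead of -Xmx1000m:

#ps -ef | grep java | grep -o -- '-Xmx[0-9]*m'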
Start Hive:
# hive
Hive history file=/opt/hive/querylog/hive_job_log_root_201102151118_693842029.txt
hive>
Great, it works fine. :)
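As a final smoke test (the table name here is hypothetical), a couple of trivial statements can also be run non-interactively with hive -e:

#hive -e 'SHOW TABLES;'
#hive -e 'CREATE TABLE smoke_test (id INT); SHOW TABLES; DROP TABLE smoke_test;'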