Running the hive --service metastore command reports an error. How can I fix it?


2 answers

"Starting Hive Metastore Server"
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/D:/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/09/13 13:46:29 INFO conf.HiveConf: Found configuration file file:/D:/apache-hive-2.1.1-bin/conf/hive-site.xml
2018-09-13 13:46:37,889 main WARN Unable to instantiate org.fusesource.jansi.WindowsAnsiOutputStream
2018-09-13 13:46:37,897 main WARN Unable to instantiate org.fusesource.jansi.WindowsAnsiOutputStream
18/09/13 13:46:38 INFO metastore.HiveMetaStore: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting HiveMetaStore
STARTUP_MSG: host = boc-PC/10.223.2.30
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.1.1
STARTUP_MSG: classpath = D:\hadoop-2.7.2\etc\hadoop;D:\hadoop-2.7.2\share\hadoop\common\lib\activation-1.1.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\apacheds-i18n-2.0.0-M15.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\apacheds-kerberos-codec-2.0.0-M15.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\api-asn1-api-1.0.0-M20.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\api-util-1.0.0-M20.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\asm-3.2.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\avro-1.7.4.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\commons-beanutils-1.7.0.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\commons-beanutils-core-1.8.0.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\commons-cli-1.2.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\commons-codec-1.4.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\commons-collections-3.2.2.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\commons-compress-1.4.1.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\commons-configuration-1.6.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\commons-digester-1.8.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\commons-httpclient-3.1.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\commons-io-2.4.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\commons-lang-2.6.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\commons-logging-1.1.3.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\commons-math3-3.1.1.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\commons-net-3.1.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\curator-client-2.7.1.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\curator-framework-2.7.1.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\curator-recipes-2.7.1.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\gson-2.2.4.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\guava-11.0.2.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\hadoop-annotations-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\hadoop-auth-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\hamcrest-core-1.3.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\htrace-core-3.1.0-incubating.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\httpclient-4.2.5.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\httpcore-4.2.5.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jackson-core-asl-1.9.13.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jackson-jaxrs-1.9.13.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jackson-mapper-asl-1.9.13.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jackson-xc-1.9.13.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\java-xmlbuilder-0.4.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jaxb-api-2.2.2.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jaxb-impl-2.2.3-1.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jersey-core-1.9.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jersey-json-1.9.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jersey-server-1.9.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jets3t-0.9.0.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jettison-1.1.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jetty-6.1.26.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jetty-util-6.1.26.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jsch-0.1.42.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jsp-api-2.1.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\jsr305-3.0.0.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\junit-4.11.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\log4j-1.2.17.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\mockito-all-1.8.5.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\netty-3.6.2.Final.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\paranamer-2.3.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\protobuf-java-2.5.0.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\servlet-api-2.5.jar;
D:\hadoop-2.7.2\share\hadoop\common\lib\slf4j-api-1.7.10.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\slf4j-log4j12-1.7.10.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\snappy-java-1.0.4.1.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\stax-api-1.0-2.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\xmlenc-0.52.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\xz-1.0.jar;D:\hadoop-2.7.2\share\hadoop\common\lib\zookeeper-3.4.6.jar;D:\hadoop-2.7.2\share\hadoop\common\hadoop-common-2.7.2-tests.jar;D:\hadoop-2.7.2\share\hadoop\common\hadoop-common-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\common\hadoop-nfs-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\hdfs;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\asm-3.2.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\commons-cli-1.2.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\commons-codec-1.4.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\commons-daemon-1.0.13.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\commons-io-2.4.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\commons-lang-2.6.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\commons-logging-1.1.3.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\guava-11.0.2.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\htrace-core-3.1.0-incubating.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\jackson-core-asl-1.9.13.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\jackson-mapper-asl-1.9.13.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\jersey-core-1.9.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\jersey-server-1.9.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\jetty-6.1.26.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\jetty-util-6.1.26.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\jsr305-3.0.0.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\leveldbjni-all-1.8.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\log4j-1.2.17.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\netty-3.6.2.Final.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\netty-all-4.0.23.Final.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\protobuf-java-2.5.0.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\servlet-api-2.5.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\xercesImpl-2.9.1.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\xml-apis-1.3.04.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\lib\xmlenc-0.52.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\hadoop-hdfs-2.7.2-tests.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\hadoop-hdfs-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\hdfs\hadoop-hdfs-nfs-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\activation-1.1.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\aopalliance-1.0.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\asm-3.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\commons-cli-1.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\commons-codec-1.4.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\commons-collections-3.2.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\commons-compress-1.4.1.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\commons-io-2.4.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\commons-lang-2.6.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\commons-logging-1.1.3.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\guava-11.0.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\guice-3.0.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\guice-servlet-3.0.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jackson-core-asl-1.9.13.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jackson-jaxrs-1.9.13.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jackson-mapper-asl-1.9.13.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jackson-xc-1.9.13.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\javax.inject-1.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jaxb-api-2.2.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jaxb-impl-2.2.3-1.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jersey-client-1.9.ja
r;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jersey-core-1.9.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jersey-guice-1.9.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jersey-json-1.9.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jersey-server-1.9.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jettison-1.1.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jetty-6.1.26.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jetty-util-6.1.26.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\jsr305-3.0.0.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\leveldbjni-all-1.8.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\log4j-1.2.17.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\netty-3.6.2.Final.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\protobuf-java-2.5.0.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\servlet-api-2.5.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\stax-api-1.0-2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\xz-1.0.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\zookeeper-3.4.6-tests.jar;D:\hadoop-2.7.2\share\hadoop\yarn\lib\zookeeper-3.4.6.jar;D:\hadoop-2.7.2\share\hadoop\yarn\hadoop-yarn-api-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\hadoop-yarn-applications-distributedshell-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\hadoop-yarn-applications-unmanaged-am-launcher-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\hadoop-yarn-client-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\hadoop-yarn-common-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\hadoop-yarn-registry-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\hadoop-yarn-server-applicationhistoryservice-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\hadoop-yarn-server-common-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\hadoop-yarn-server-nodemanager-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\hadoop-yarn-server-resourcemanager-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\hadoop-yarn-server-sharedcachemanager-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\hadoop-yarn-server-tests-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\yarn\hadoop-yarn-server-web-proxy-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\aopalliance-1.0.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\asm-3.2.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\avro-1.7.4.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\commons-compress-1.4.1.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\commons-io-2.4.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\guice-3.0.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\guice-servlet-3.0.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\hadoop-annotations-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\hamcrest-core-1.3.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\jackson-core-asl-1.9.13.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\jackson-mapper-asl-1.9.13.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\javax.inject-1.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\jersey-core-1.9.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\jersey-guice-1.9.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\jersey-server-1.9.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\junit-4.11.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\leveldbjni-all-1.8.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\log4j-1.2.17.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\netty-3.6.2.Final.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\paranamer-2.3.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\protobuf-java-2.5.0.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\snappy-java-1.0.4.1.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\lib\xz-1.0.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\hadoop-mapreduce-client-app-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\hadoop-mapreduce-client-c
ommon-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\hadoop-mapreduce-client-core-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\hadoop-mapreduce-client-hs-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\hadoop-mapreduce-client-hs-plugins-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\hadoop-mapreduce-client-jobclient-2.7.2-tests.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\hadoop-mapreduce-client-jobclient-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\hadoop-mapreduce-client-shuffle-2.7.2.jar;D:\hadoop-2.7.2\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.7.2.jar;;D:\apache-hive-2.1.1-bin\conf;D:\apache-hive-2.1.1-bin\lib\accumulo-core-1.6.0.jar;D:\apache-hive-2.1.1-bin\lib\accumulo-fate-1.6.0.jar;D:\apache-hive-2.1.1-bin\lib\accumulo-start-1.6.0.jar;D:\apache-hive-2.1.1-bin\lib\accumulo-trace-1.6.0.jar;D:\apache-hive-2.1.1-bin\lib\activation-1.1.jar;D:\apache-hive-2.1.1-bin\lib\ant-1.6.5.jar;D:\apache-hive-2.1.1-bin\lib\ant-1.9.1.jar;D:\apache-hive-2.1.1-bin\lib\ant-launcher-1.9.1.jar;D:\apache-hive-2.1.1-bin\lib\antlr-2.7.7.jar;D:\apache-hive-2.1.1-bin\lib\antlr-runtime-3.4.jar;D:\apache-hive-2.1.1-bin\lib\antlr4-runtime-4.5.jar;D:\apache-hive-2.1.1-bin\lib\aopalliance-1.0.jar;D:\apache-hive-2.1.1-bin\lib\asm-3.1.jar;D:\apache-hive-2.1.1-bin\lib\asm-commons-3.1.jar;D:\apache-hive-2.1.1-bin\lib\asm-tree-3.1.jar;D:\apache-hive-2.1.1-bin\lib\avro-1.7.7.jar;D:\apache-hive-2.1.1-bin\lib\bonecp-0.8.0.RELEASE.jar;D:\apache-hive-2.1.1-bin\lib\calcite-avatica-1.6.0.jar;D:\apache-hive-2.1.1-bin\lib\calcite-core-1.6.0.jar;D:\apache-hive-2.1.1-bin\lib\calcite-linq4j-1.6.0.jar;D:\apache-hive-2.1.1-bin\lib\commons-cli-1.2.jar;D:\apache-hive-2.1.1-bin\lib\commons-codec-1.4.jar;D:\apache-hive-2.1.1-bin\lib\commons-collections-3.2.2.jar;D:\apache-hive-2.1.1-bin\lib\commons-compiler-2.7.6.jar;D:\apache-hive-2.1.1-bin\lib\commons-compress-1.9.jar;D:\apache-hive-2.1.1-bin\lib\commons-dbcp-1.4.jar;D:\apache-hive-2.1.1-bin\lib\commons-el-1.0.jar;D:\apache-hive-2.1.1-bin\lib\commons-httpclient-3.0.1.jar;D:\apache-hive-2.1.1-bin\lib\commons-io-2.4.jar;D:\apache-hive-2.1.1-bin\lib\commons-lang-2.6.jar;D:\apache-hive-2.1.1-bin\lib\commons-lang3-3.1.jar;D:\apache-hive-2.1.1-bin\lib\commons-logging-1.2.jar;D:\apache-hive-2.1.1-bin\lib\commons-math-2.2.jar;D:\apache-hive-2.1.1-bin\lib\commons-pool-1.5.4.jar;D:\apache-hive-2.1.1-bin\lib\commons-vfs2-2.0.jar;D:\apache-hive-2.1.1-bin\lib\curator-client-2.6.0.jar;D:\apache-hive-2.1.1-bin\lib\curator-framework-2.6.0.jar;D:\apache-hive-2.1.1-bin\lib\curator-recipes-2.6.0.jar;D:\apache-hive-2.1.1-bin\lib\datanucleus-api-jdo-4.2.1.jar;D:\apache-hive-2.1.1-bin\lib\datanucleus-core-4.1.6.jar;D:\apache-hive-2.1.1-bin\lib\datanucleus-rdbms-4.1.7.jar;D:\apache-hive-2.1.1-bin\lib\derby-10.10.2.0.jar;D:\apache-hive-2.1.1-bin\lib\disruptor-3.3.0.jar;D:\apache-hive-2.1.1-bin\lib\dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar;D:\apache-hive-2.1.1-bin\lib\eigenbase-properties-1.1.5.jar;D:\apache-hive-2.1.1-bin\lib\fastutil-6.5.6.jar;D:\apache-hive-2.1.1-bin\lib\findbugs-annotations-1.3.9-1.jar;D:\apache-hive-2.1.1-bin\lib\geronimo-annotation_1.0_spec-1.1.1.jar;D:\apache-hive-2.1.1-bin\lib\geronimo-jaspic_1.0_spec-1.0.jar;D:\apache-hive-2.1.1-bin\lib\geronimo-jta_1.1_spec-1.1.1.jar;D:\apache-hive-2.1.1-bin\lib\groovy-all-2.4.4.jar;D:\apache-hive-2.1.1-bin\lib\gson-2.2.4.jar;D:\apache-hive-2.1.1-bin\lib\guava-14.0.1.jar;D:\apache-hive-2.1.1-bin\lib\guice-3.0.jar;D:\apache-hive-2.1.1-bin\lib\guice-assistedinject-3.0.jar;D:\apache-hive-2.1.1-bin\lib\guice-servlet-3.0.jar
;D:\apache-hive-2.1.1-bin\lib\hamcrest-core-1.3.jar;D:\apache-hive-2.1.1-bin\lib\hbase-annotations-1.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hbase-client-1.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hbase-common-1.1.1-tests.jar;D:\apache-hive-2.1.1-bin\lib\hbase-common-1.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hbase-hadoop-compat-1.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hbase-hadoop2-compat-1.1.1-tests.jar;D:\apache-hive-2.1.1-bin\lib\hbase-hadoop2-compat-1.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hbase-prefix-tree-1.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hbase-procedure-1.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hbase-protocol-1.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hbase-server-1.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-accumulo-handler-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-ant-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-beeline-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-cli-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-common-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-contrib-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-exec-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-hbase-handler-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-hplsql-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-hwi-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-jdbc-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-llap-client-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-llap-common-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-llap-ext-client-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-llap-server-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-llap-tez-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-metastore-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-orc-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-serde-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-service-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-service-rpc-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-shims-0.23-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-shims-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-shims-common-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-shims-scheduler-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-storage-api-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\hive-testutils-2.1.1.jar;D:\apache-hive-2.1.1-bin\lib\htrace-core-3.1.0-incubating.jar;D:\apache-hive-2.1.1-bin\lib\httpclient-4.4.jar;D:\apache-hive-2.1.1-bin\lib\httpcore-4.4.jar;D:\apache-hive-2.1.1-bin\lib\ivy-2.4.0.jar;D:\apache-hive-2.1.1-bin\lib\jackson-annotations-2.4.0.jar;D:\apache-hive-2.1.1-bin\lib\jackson-core-2.4.2.jar;D:\apache-hive-2.1.1-bin\lib\jackson-databind-2.4.2.jar;D:\apache-hive-2.1.1-bin\lib\jackson-jaxrs-1.9.2.jar;D:\apache-hive-2.1.1-bin\lib\jackson-xc-1.9.2.jar;D:\apache-hive-2.1.1-bin\lib\jamon-runtime-2.3.1.jar;D:\apache-hive-2.1.1-bin\lib\janino-2.7.6.jar;D:\apache-hive-2.1.1-bin\lib\jasper-compiler-5.5.23.jar;D:\apache-hive-2.1.1-bin\lib\jasper-runtime-5.5.23.jar;D:\apache-hive-2.1.1-bin\lib\javax.inject-1.jar;D:\apache-hive-2.1.1-bin\lib\javax.jdo-3.2.0-m3.jar;D:\apache-hive-2.1.1-bin\lib\javax.servlet-3.0.0.v201112011016.jar;D:\apache-hive-2.1.1-bin\lib\javolution-5.5.1.jar;D:\apache-hive-2.1.1-bin\lib\jcodings-1.0.8.jar;D:\apache-hive-2.1.1-bin\lib\jcommander-1.32.jar;D:\apache-hive-2.1.1-bin\lib\jdo-api-3.0.1.jar;D:\apache-hive-2.1.1-bin\lib\jersey-client-1.9.jar;D:\apache-hive-2.1.1-bin\lib\jersey-server-1.14.jar;D:\apache-hive-2.1.1-bin\lib\jetty-6.1.26.jar;D:\apache-hive-2.1.1-bin\lib\jetty-all-7.6.0.v20120127.jar;D:\apache-hive-2.1.1-bin\lib\jetty-all-server-7.6.0.v20120127.jar;D:\apache-hive-2.1.1-bin\lib\jetty-sslengine-6.1.26.jar;D:\apache-hive-2.1.1-bin\lib\jetty-util-6.1.26.jar;D:\apache-hive-2.1.1
-bin\lib\jline-2.12.jar;D:\apache-hive-2.1.1-bin\lib\joda-time-2.5.jar;D:\apache-hive-2.1.1-bin\lib\joni-2.1.2.jar;D:\apache-hive-2.1.1-bin\lib\jpam-1.1.jar;D:\apache-hive-2.1.1-bin\lib\json-20090211.jar;D:\apache-hive-2.1.1-bin\lib\jsp-2.1-6.1.14.jar;D:\apache-hive-2.1.1-bin\lib\jsp-api-2.0.jar;D:\apache-hive-2.1.1-bin\lib\jsp-api-2.1-6.1.14.jar;D:\apache-hive-2.1.1-bin\lib\jsp-api-2.1.jar;D:\apache-hive-2.1.1-bin\lib\jsr305-3.0.0.jar;D:\apache-hive-2.1.1-bin\lib\jta-1.1.jar;D:\apache-hive-2.1.1-bin\lib\junit-4.11.jar;D:\apache-hive-2.1.1-bin\lib\libfb303-0.9.3.jar;D:\apache-hive-2.1.1-bin\lib\libthrift-0.9.3.jar;D:\apache-hive-2.1.1-bin\lib\log4j-1.2-api-2.4.1.jar;D:\apache-hive-2.1.1-bin\lib\log4j-api-2.4.1.jar;D:\apache-hive-2.1.1-bin\lib\log4j-core-2.4.1.jar;D:\apache-hive-2.1.1-bin\lib\log4j-slf4j-impl-2.4.1.jar;D:\apache-hive-2.1.1-bin\lib\log4j-web-2.4.1.jar;D:\apache-hive-2.1.1-bin\lib\mail-1.4.1.jar;D:\apache-hive-2.1.1-bin\lib\maven-scm-api-1.4.jar;D:\apache-hive-2.1.1-bin\lib\maven-scm-provider-svn-commons-1.4.jar;D:\apache-hive-2.1.1-bin\lib\maven-scm-provider-svnexe-1.4.jar;D:\apache-hive-2.1.1-bin\lib\metrics-core-2.2.0.jar;D:\apache-hive-2.1.1-bin\lib\metrics-core-3.1.0.jar;D:\apache-hive-2.1.1-bin\lib\metrics-json-3.1.0.jar;D:\apache-hive-2.1.1-bin\lib\metrics-jvm-3.1.0.jar;D:\apache-hive-2.1.1-bin\lib\mysql-connector-java-5.1.25-bin.jar;D:\apache-hive-2.1.1-bin\lib\netty-3.7.0.Final.jar;D:\apache-hive-2.1.1-bin\lib\netty-all-4.0.23.Final.jar;D:\apache-hive-2.1.1-bin\lib\opencsv-2.3.jar;D:\apache-hive-2.1.1-bin\lib\org.abego.treelayout.core-1.0.1.jar;D:\apache-hive-2.1.1-bin\lib\paranamer-2.3.jar;D:\apache-hive-2.1.1-bin\lib\parquet-hadoop-bundle-1.8.1.jar;D:\apache-hive-2.1.1-bin\lib\pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar;D:\apache-hive-2.1.1-bin\lib\plexus-utils-1.5.6.jar;D:\apache-hive-2.1.1-bin\lib\protobuf-java-2.5.0.jar;D:\apache-hive-2.1.1-bin\lib\regexp-1.3.jar;D:\apache-hive-2.1.1-bin\lib\servlet-api-2.4.jar;D:\apache-hive-2.1.1-bin\lib\servlet-api-2.5-6.1.14.jar;D:\apache-hive-2.1.1-bin\lib\slider-core-0.90.2-incubating.jar;D:\apache-hive-2.1.1-bin\lib\snappy-0.2.jar;D:\apache-hive-2.1.1-bin\lib\snappy-java-1.0.5.jar;D:\apache-hive-2.1.1-bin\lib\ST4-4.0.4.jar;D:\apache-hive-2.1.1-bin\lib\stax-api-1.0.1.jar;D:\apache-hive-2.1.1-bin\lib\stringtemplate-3.2.1.jar;D:\apache-hive-2.1.1-bin\lib\super-csv-2.2.0.jar;D:\apache-hive-2.1.1-bin\lib\tempus-fugit-1.1.jar;D:\apache-hive-2.1.1-bin\lib\tephra-api-0.6.0.jar;D:\apache-hive-2.1.1-bin\lib\tephra-core-0.6.0.jar;D:\apache-hive-2.1.1-bin\lib\tephra-hbase-compat-1.0-0.6.0.jar;D:\apache-hive-2.1.1-bin\lib\transaction-api-1.1.jar;D:\apache-hive-2.1.1-bin\lib\twill-api-0.6.0-incubating.jar;D:\apache-hive-2.1.1-bin\lib\twill-common-0.6.0-incubating.jar;D:\apache-hive-2.1.1-bin\lib\twill-core-0.6.0-incubating.jar;D:\apache-hive-2.1.1-bin\lib\twill-discovery-api-0.6.0-incubating.jar;D:\apache-hive-2.1.1-bin\lib\twill-discovery-core-0.6.0-incubating.jar;D:\apache-hive-2.1.1-bin\lib\twill-zookeeper-0.6.0-incubating.jar;D:\apache-hive-2.1.1-bin\lib\velocity-1.5.jar;D:\apache-hive-2.1.1-bin\lib\zookeeper-3.4.6.jar;D:\apache-hive-2.1.1-bin\hcatalog\share\hcatalog\hive-hcatalog-core-2.1.1.jar;D:\apache-hive-2.1.1-bin\hcatalog\share\hcatalog\hive-hcatalog-pig-adapter-2.1.1.jar;D:\apache-hive-2.1.1-bin\hcatalog\share\hcatalog\hive-hcatalog-server-extensions-2.1.1.jar;D:\apache-hive-2.1.1-bin\hcatalog\share\hcatalog\hive-hcatalog-streaming-2.1.1.jar;;;
STARTUP_MSG: build = git://jcamachorodriguez-rMBP.local/Users/jcamachorodriguez/src/workspaces/hive/HIVE-release2/hive -r 1af77bbf8356e86cabbed92cfa8cc2e1470a1d5c; compiled by 'jcamachorodriguez' on Tue Nov 29 19:46:12 GMT 2016
************************************************************/
18/09/13 13:46:39 INFO metastore.HiveMetaStore: Starting hive metastore on port 9083
18/09/13 13:46:39 INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
18/09/13 13:46:40 INFO metastore.ObjectStore: ObjectStore, initialize called
18/09/13 13:46:40 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
18/09/13 13:46:40 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
18/09/13 13:46:43 ERROR Datastore.Schema: Failed initialising database.
Unable to open a test connection to the given database. JDBC url = jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true, username = hadoop. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.Util.getInstance(Util.java:386)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1015)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:989)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:975)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:920)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2565)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2301)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:834)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:416)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:346)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:483)
at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:296)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:606)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
at org.datanucleus.NucleusContextHelper.createStoreManagerForProperties(NucleusContextHelper.java:133)
at org.datanucleus.PersistenceNucleusContextImpl.initialise(PersistenceNucleusContextImpl.java:420)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:821)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:338)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:217)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:515)
at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:544)
at org.apache.hadoop.hive.metastore.ObjectStore.initializeHelper(ObjectStore.java:399)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:336)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:297)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:599)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:564)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:626)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:416)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6484)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6479)
at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6737)
at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6664)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.NullPointerException
at com.mysql.jdbc.ConnectionImpl.getServerCharacterEncoding(ConnectionImpl.java:3276)
at com.mysql.jdbc.MysqlIO.sendConnectionAttributes(MysqlIO.java:1940)
at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1866)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1252)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2483)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2516)
... 63 more


Is the database service behind localhost:3306 actually running? The metastore cannot even open a test connection:

... JDBC url = jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true, username = hadoop. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server.
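
A sensible first step is to confirm that MySQL is actually listening on localhost:3306 and that the hadoop account can log in. A minimal check from a Windows command prompt might look like this (the Windows service name "MySQL" is an assumption; substitute whatever name your installation registered):

REM Check whether the MySQL Windows service is running (service name "MySQL" is an assumption)
sc query MySQL
REM Start it if it is stopped (run this from an elevated prompt)
net start MySQL
REM Try logging in with the same account the metastore uses; enter the password when prompted
mysql -h localhost -P 3306 -u hadoop -p

If the login succeeds, running SHOW DATABASES; inside the mysql client will tell you whether the hive database already exists or still needs to be created.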
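
If the server is up but the connection still fails, re-check the metastore connection settings in hive-site.xml. As a reference sketch, these are the four standard JDBC properties; the URL and user below simply mirror the ones in the error message, and the password value is a placeholder for whatever you set in MySQL:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hadoop</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>your_mysql_password</value>
</property>

Note also that the root cause in the trace is a NullPointerException inside the Connector/J handshake (ConnectionImpl.getServerCharacterEncoding), which has been reported as an incompatibility between older 5.1.x drivers and newer MySQL servers. If the service and credentials check out, swapping the mysql-connector-java-5.1.25-bin.jar shown in the classpath (under D:\apache-hive-2.1.1-bin\lib) for a newer release is worth trying.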
其他相关推荐
hive,启动metastore时,报错
报错信息:rnjavax.jdo.JDODataStoreException: Exception thrown obtaining schema column information from datastorern at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)rn at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:720)rn at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:740)rn at org.apache.hadoop.hive.metastore.ObjectStore.setMetaStoreSchemaVersion(ObjectStore.java:7763)rn at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7657)rn at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7632)rn at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)rn at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)rn at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)rn at java.lang.reflect.Method.invoke(Method.java:498)rn at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)rn at com.sun.proxy.$Proxy18.verifySchema(Unknown Source)rn at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:547)rn at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:612)rn at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:398)rn at org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:78)rn at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)rn at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6390)rn at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6385)rn at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6643)rn at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6570)rn at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)rn at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)rn at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)rn at java.lang.reflect.Method.invoke(Method.java:498)rn at org.apache.hadoop.util.RunJar.run(RunJar.java:221)rn at org.apache.hadoop.util.RunJar.main(RunJar.java:136)rnNestedThrowablesStackTrace:rncom.mysql.jdbc.exceptions.MySQLSyntaxErrorException: Table 'hive.version' doesn't existrn at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:936)rn at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)rn at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1631)rn at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1723)rn at com.mysql.jdbc.Connection.execSQL(Connection.java:3277)rn at com.mysql.jdbc.Connection.execSQL(Connection.java:3206)rn at com.mysql.jdbc.Statement.executeQuery(Statement.java:1232)rn at com.mysql.jdbc.DatabaseMetaData$2.forEach(DatabaseMetaData.java:2390)rn at com.mysql.jdbc.DatabaseMetaData$IterateBlock.doForAll(DatabaseMetaData.java:76)rn at com.mysql.jdbc.DatabaseMetaData.getColumns(DatabaseMetaData.java:2264)rn at org.datanucleus.store.rdbms.adapter.BaseDatastoreAdapter.getColumns(BaseDatastoreAdapter.java:1575)rn at org.datanucleus.store.rdbms.schema.RDBMSSchemaHandler.refreshTableData(RDBMSSchemaHandler.java:1103)rn at org.datanucleus.store.rdbms.schema.RDBMSSchemaHandler.getRDBMSTableInfoForTable(RDBMSSchemaHandler.java:1015)rn at 
org.datanucleus.store.rdbms.schema.RDBMSSchemaHandler.getRDBMSTableInfoForTable(RDBMSSchemaHandler.java:965)rn at org.datanucleus.store.rdbms.schema.RDBMSSchemaHandler.getSchemaData(RDBMSSchemaHandler.java:338)rn at org.datanucleus.store.rdbms.RDBMSStoreManager.getColumnInfoForTable(RDBMSStoreManager.java:2392)rn at org.datanucleus.store.rdbms.table.TableImpl.initializeColumnInfoFromDatastore(TableImpl.java:324)rn at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3401)rn at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2877)rn at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:119)rn at org.datanucleus.store.rdbms.RDBMSStoreManager.manageClasses(RDBMSStoreManager.java:1608)rn at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:671)rn at org.datanucleus.store.rdbms.RDBMSStoreManager.getPropertiesForGenerator(RDBMSStoreManager.java:2069)rn at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1271)rn at org.datanucleus.ExecutionContextImpl.newObjectId(ExecutionContextImpl.java:3759)rn at org.datanucleus.state.StateManagerImpl.setIdentity(StateManagerImpl.java:2267)rn at org.datanucleus.state.StateManagerImpl.initialiseForPersistentNew(StateManagerImpl.java:484)rn at org.datanucleus.state.StateManagerImpl.initialiseForPersistentNew(StateManagerImpl.java:120)rn at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:218)rn at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2078)rn at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:1922)rn at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1777)rn at org.datanucleus.ExecutionContextThreadedImpl.persistObject(ExecutionContextThreadedImpl.java:217)rn at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:715)rn at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:740)rn at org.apache.hadoop.hive.metastore.ObjectStore.setMetaStoreSchemaVersion(ObjectStore.java:7763)rn at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7657)rn at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7632)rn at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)rn at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)rn at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)rn at java.lang.reflect.Method.invoke(Method.java:498)rn at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)rn at com.sun.proxy.$Proxy18.verifySchema(Unknown Source)rn at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:547)rn at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:612)rn at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:398)rn at org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:78)rn at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)rn at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6390)rn at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6385)rn at 
org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6643)rn at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6570)rn at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)rn at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)rn at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)rn at java.lang.reflect.Method.invoke(Method.java:498)rn at org.apache.hadoop.util.RunJar.run(RunJar.java:221)rn at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
hive启动MetaStore报错解决方案
今天在自己的虚拟机上安装apache-hive-3.1.1时启动hive时出现了很多错误,经过不断的资料查询及测试最终可以正常运行了,特记录下,加深自己的印象分享给大家,也以便以后出现同样的错误时可以查看笔记解决。 第一条错误: MetaException(message:Error creating transactional connection factory) at org.apache...
hive metastore 启动出错解决
运行 ./hive --service metastore  报错如下: Starting Hive Metastore Server org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:9083.         at org...
运行hive,报错,解决经历
今天运行hive,因为hadoop原来为分布式,然后改成伪分布式后,运行hive报错 初步判断是由于HA节点中处于standby状态造成的异常 Operation category READ is not supported in state standby 关闭后stop-all.sh 在重启start-all.sh 还是报错,然后重启了一下服务器 从新打开hadoop  star...
Hive Metastore 创建数据库失败
  HMSHandler Fatal error: javax.jdo.JDODataStoreException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '...
Hive Metastore原理及配置
一、Hive存储概念 1、Hive用户接口: 命令行接口(CLI):以命令行的形式输入SQL语句进行数据数据操作 Web界面:通过Web方式进行访问。      Hive的远程服务方式:通过JDBC等方式进行访问。   2、元数据存储  将元数据存储在关系数据库中(MySql、Derby),元数据包括表的属性、表的名称、表的列、分区及其属性以及表数据所在的目录等。 3、解...
Hive安装配置MetaStore到MySQL
<p>rn <br />rn</p>rn<p>rn <p>rn 20周年限定一卡通!<span style="color:#337FE5;">可学Java全部课程</span>,仅售799元(原价7016元),<span style="color:#E53333;">还送漫威正版授权机械键盘+CSDN 20周年限量版T恤+智能编程助手!</span>rn </p>rn <p>rn 点此链接购买:rn </p>rn <table>rn <tbody>rn <tr>rn <td>rn <span style="color:#337FE5;"><a href="https://edu.csdn.net/topic/teachercard?utm_source=jsk20xqy" target="_blank">https://edu.csdn.net/topic/teachercard?utm_source=jsk20xqy</a><br />rn</span>rn </td>rn </tr>rn </tbody>rn </table>rn</p>rn<span>&nbsp;</span> rn<p>rn <br />rn</p>rn<p>rn 本阶段详细介绍了大数据所涉及到的Linux、shell、Hadoop、zookeeper、HadoopHA、Hive、Flume、Kafka、Hbase、Sqoop、Oozie等技术的概念、安装配置、架构原理、数据类型定义、数据操作、存储集群等重点知识点。rn</p>
Hive执行hive --service metastore后一直卡着不动了,没有报错
如图,执行hive --service metastore后一直卡着不动了rn[img=https://img-bbs.csdn.net/upload/201605/10/1462883608_69303.png][/img]rn[img=https://img-bbs.csdn.net/upload/201605/10/1462883649_493821.png][/img]
hive中metastore三种存储方式
1、hive中metastore存储方式:       嵌套方式: 使用内置derby数据库,同一时间仅限一个hive cli环境登录       本地mysql存储方式: 采取外部mysql数据库服务器,支持多用户连接模式,通过设置hive.metastore.local 为true实现。
Hive metastore三种配置方式
Hive metastore三种配置方式
Hive MetaStore服务增大内存
找到hive的安装目录,进入/hive/bin/ext/,编辑 metastore.sh文件,增加以下内容: export HIVE_METASTORE_HADOOP_OPTS=&quot;-Xms4096m -Xmx4096m&quot; 添加后文件内容如下: THISSERVICE=metastore export SERVICE_LIST=&quot;${SERVICE_LIST}${THISSERVICE} &quot;...
Hive Metastore canary创建数据库失败
今天上班时打开CM管理界面,看到 Hive Metastore Server 运行状况 不良 :查看日志 Retrying creating default database after error: Unable to open a test connection to the given database. JDBC url = jdbc:mysql://loca
【一】hive安装(远程metastore)
前期:请先安装jdk和hadoop和mysql jdk安装 hadoop分布式安装 mysql安装 环境ubuntu16.04 下载 http://mirrors.tuna.tsinghua.edu.cn/apache/hive/ rz上传安装包到服务器 解压 tar -zxvf apache-hive-2.3.3-bin.tar.gz 修改名字文件名字 mv apa...
Hive之——metastore三种配置方式
Hive的meta数据支持以下三种存储方式,其中两种属于本地存储,一种为远端存储。远端存储比较适合生产环境。Hive官方wiki详细介绍了这三种方式,链接为:Hive Metastore。 一、本地derby 这种方式是最简单的存储方式,只需要在hive-site.xml做如下配置便可。 javax.jdo.option.ConnectionURL jdbc:der
hive metastore 基础表简绍
hive metastore主要涉及的基础表为:   表的关系为        
Hive报Error communicating with the metastore
Hadoop集群运行大约1到2周会出现Error communicating with the metastore的情况,重启metastore后恢复正常。rn看日志似乎是因为心跳超时中止了事务,不知道为啥会心跳超时?求助rn[code=text]2018-02-27T00:16:24,877 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0] txn.TxnHandler: 'HouseKeeper' locked by 'cplcdn3'rn2018-02-27T00:16:24,905 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0] txn.TxnHandler: Deleted 818 ext locks from HIVE_LOCKS due to timeout (vs. 4 found. List: [612320, 612324, 612330, 612344]) maxHeartbeatTime=1519661483775rn2018-02-27T00:16:24,930 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0] txn.TxnHandler: Aborted the following transactions due to timeout: [52959, 52960, 52967, 52968, 52969, 52970, 52971, 52972, 52973, 52974]rn2018-02-27T00:16:24,930 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0] txn.TxnHandler: Aborted 10 transactions due to timeoutrn2018-02-27T00:16:24,933 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0] txn.AcidHouseKeeperService: timeout reaper ran for 0seconds. isAliveCounter=-2147482203rn2018-02-27T00:16:24,949 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0] txn.TxnHandler: 'HouseKeeper' unlocked by 'cplcdn3'[/code]rn[code=text]2018-02-27T00:20:19,110 ERROR [pool-4-thread-130] metastore.RetryingHMSHandler: TxnAbortedException(message:Transaction txnid:52968 already aborted)rn at org.apache.hadoop.hive.metastore.txn.TxnHandler.ensureValidTxn(TxnHandler.java:2705)rn at org.apache.hadoop.hive.metastore.txn.TxnHandler.enqueueLockWithRetry(TxnHandler.java:855)rn at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:789)rn at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:5972)rn at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)rn at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)rn at java.lang.reflect.Method.invoke(Method.java:606)rn at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)rn at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)rn at com.sun.proxy.$Proxy21.lock(Unknown Source)rn at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$lock.getResult(ThriftHiveMetastore.java:13828)rn at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$lock.getResult(ThriftHiveMetastore.java:13812)rn at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)rn at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)rn at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)rn at java.security.AccessController.doPrivileged(Native Method)rn at javax.security.auth.Subject.doAs(Subject.java:415)rn at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)rn at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)rn at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)rn at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)rn at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)rn at java.lang.Thread.run(Thread.java:745)[/code]rnrn
hive docker 运行命令
docker ps -a 查看所有容器 docker start containerID 启动容器 docker rm containerID docker stop containerID docker rmi imageID docker images docker rmi 31ca583bc130 docker rm af8496cf032e gzip ...
详细调研hive的metastore的管理机制
Hive 是建立在 Hadoop 上的数据仓库基础构架。它提供了一系列的工具,可以用来进行数据提取转化加载(ETL),这是一种可以存储、查询和分析存储在 Hadoop 中的大规模数据的机制。Hive 定义了简单的类 SQL 查询语言,称为 QL,它允许熟悉 SQL 的用户查询数据。同时,这个语言也允许熟悉 MapReduce 开发者的开发自定义的 mapper 和 reducer 来处理内建的 mapper 和 reducer 无法完成的复杂的分析工作。
Hive安装_配置MetaStore到MySQL
<span style="color:#404040;">Hive是基于Hadoop的一个数据仓库工具,将繁琐的MapReduce程序变成了简单方便的SQL语句实现,深受广大软件开发工程师喜爱。Hive同时也是进入互联网行业的大数据开发工程师必备技术之一。在本课程中,你将学习到,Hive架构原理、安装配置、hiveserver2、数据类型、数据定义、数据操作、查询、自定义UDF函数、窗口函数、压缩和存储、企业级调优、以及结合谷粒影音项目需求,把整个Hive的核心知识点贯穿起来。</span>
hive 用mysql做metastore 分区查询报错
select * from part_user where datetime='2015-09'; FAILED: SemanticException MetaException(message:You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version
在root用户执行hive命令报错
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user/root":hdfs:hdfs:drwxr-xr-x         a
hive运行job的时候报错
hive> select count(*) from techbbs;rnTotal jobs = 1rnLaunching Job 1 out of 1rnNumber of reduce tasks determined at compile time: 1rnIn order to change the average load for a reducer (in bytes):rn set hive.exec.reducers.bytes.per.reducer=rnIn order to limit the maximum number of reducers:rn set hive.exec.reducers.max=rnIn order to set a constant number of reducers:rn set mapreduce.job.reduces=rnStarting Job = job_1436192701429_0004, Tracking URL = http://hiter:8088/proxy/application_1436192701429_0004/rnKill Command = /usr/hadoop-2.5.1/bin/hadoop job -kill job_1436192701429_0004rnHadoop job information for Stage-1: number of mappers: 1; number of reducers: 1rn2015-07-07 00:11:21,693 Stage-1 map = 0%, reduce = 0%rn2015-07-07 00:11:37,009 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 6.33 secrn2015-07-07 00:11:53,332 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 9.2 secrn2015-07-07 00:11:58,933 Stage-1 map = 0%, reduce = 0%rnMapReduce Total cumulative CPU time: 9 seconds 200 msecrnEnded Job = job_1436192701429_0004 with errorsrn[color=#FF0000]Error during job, obtaining debugging information...rn[color=#FF0000]FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask[/color][/color]rnMapReduce Jobs Launched: rnJob 0: Map: 1 Reduce: 1 Cumulative CPU: 9.2 sec HDFS Read: 0 HDFS Write: 0 FAILrnTotal MapReduce CPU Time Spent: 9 seconds 200 msecrnrn日志信息:rnrn2015-07-07 00:12:03,378 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1436192701429_0004_000002rn2015-07-07 00:12:07,243 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.rn2015-07-07 00:12:07,260 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.rn2015-07-07 00:12:07,426 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicablern2015-07-07 00:12:07,460 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:rn2015-07-07 00:12:07,461 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@c791b9)rn2015-07-07 00:12:07,568 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max attempts: 2 for application: 4. 
Attempt num: 2 is last retry: truern2015-07-07 00:12:07,899 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.rn2015-07-07 00:12:07,906 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.rn2015-07-07 00:12:09,499 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Attempt num: 2 is last retry: true because a commit was started.rn2015-07-07 00:12:09,509 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$NoopEventHandlerrn2015-07-07 00:12:09,519 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandlerrn2015-07-07 00:12:09,523 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouterrn2015-07-07 00:12:09,630 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Will not try to recover. recoveryEnabled: true recoverySupportedByCommitter: false numReduceTasks: 1 shuffleKeyValidForRecovery: true ApplicationAttemptID: 2rn2015-07-07 00:12:09,671 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Previous history file is at hdfs://hiter:9000/tmp/hadoop-yarn/staging/root/.staging/job_1436192701429_0004/job_1436192701429_0004_1.jhistrn2015-07-07 00:12:11,154 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandlerrn2015-07-07 00:12:11,379 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.propertiesrn2015-07-07 00:12:11,684 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).rn2015-07-07 00:12:11,684 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system startedrn2015-07-07 00:12:11,741 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:truern2015-07-07 00:12:11,741 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3rn2015-07-07 00:12:11,742 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33rn2015-07-07 00:12:12,040 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.rn2015-07-07 00:12:12,056 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.rn2015-07-07 00:12:12,079 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at hiter/192.168.1.204:8030rn2015-07-07 00:12:12,302 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: 8192rn2015-07-07 00:12:12,303 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: defaultrn2015-07-07 00:12:12,322 INFO [main] 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryCopyService: History file is at hdfs://hiter:9000/tmp/hadoop-yarn/staging/root/.staging/job_1436192701429_0004/job_1436192701429_0004_1.jhistrn2015-07-07 00:12:12,734 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1436192701429_0004, File: hdfs://hiter:9000/tmp/hadoop-yarn/staging/root/.staging/job_1436192701429_0004/job_1436192701429_0004_2.jhistrn2015-07-07 00:12:13,016 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMasterrn[color=#FF0000]java.io.IOException: Was asked to shut down.rn at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(MRAppMaster.java:1488)rn at java.security.AccessController.doPrivileged(Native Method)rn at javax.security.auth.Subject.doAs(Subject.java:396)rn at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)rn at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1482)rn at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1415)rn2015-07-07 00:12:13,027 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting with status 1[/color]rnrnstderr:rnrn[color=#FF0000]log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).[/color]rnlog4j:WARN Please initialize the log4j system properly.rnlog4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.rnrnrnrnrn
安装并使用mysql5.7作为hive的metastore
前言hive的metastore默认是使用derby来作为metastore,但是derby有一个缺点是不能支持多用户链接,虽然你可以通过切换目录来支持,但是不同目录的metastore会不一致,所以这里使用mysql来作为hive的metastore。在linux上安装mysql数据库1、下载最新的mysql数据库,这里使用的版本是5.7.13,这里是使用二进制rpm进行安装mysql-commu
Hive的Metastore三种配置方式分析
        Hive是基于Hadoop的一个数据仓库工具,可以将结构化的数据文件映射为一张数据库表,并提供类SQL查询功能。而metastore是Hive元数据的集中存放地。metastore元数据存储主要体现在两个方面:服务和后台数据的存储。      关于Metastore的三种配置:内嵌配置,本地配置,远程配置。      1. 默认情况下,metastore服务和Hive的服务运行在同...
hive: metastore 无法启动(本地模式 Mysql)
hive: metastore 无法启动
Hive安装前扫盲之Derby和Metastore
大数据总是有很多英文单词,你不了解一下根本就没法推进。 比如Hive要涉及到的:derby metastore hiveServer2 后面内容都是转载的,大致内容简单来说就是:   Derby是一个数据库,非常轻量,而Hive只会把元数据存放在关系型数据库中。这是因为这样可以易于共享这些元数据。 Hive 将元数据存储在 RDBMS 中,一般常用 MySQL 和 Derby。默认情况下...
Hive metastore 无法解析分区字段 is not null问题排查
文章目录一、问题描述二、解决方案 一、问题描述 周中发现一个问题,metastore根据条件获取分区时发生异常,导致扫描所有分区,最终导致gc异常。 hive编译时会进行逻辑优化,在执行分区裁剪时,会根据相关的分区过滤条件去metastore查询要扫描的分区目录。metastore会根据hiveserver传过来的条件表达式进行解析,然后过滤不需要的分区。 目前的问题是hiveserver传了一个...
hive service
hive service jar hive service jar hive service jar hive service jar hive service jar
Hive教程之metastore的三种模式
Hive中metastore(元数据存储)的三种方式: 内嵌Derby方式 Local方式 Remote方式 详见:http://www.micmiu.com/opensource/hadoop/hive-metastore-config/  
HIVE的metastore存储到ORACLE上报JDBC错
有个问题请教下rn 我有1个NN,3个DN,HIVE放在NN节点上,然后hive.stats.dbconnectionstring配置了jdbcracle:thin:@192.168.1.10:1621/hivern在NN上启动hive shell,执行插入后检查日志,发现DN节点报错rn2014-02-25 10:32:20,876 ERROR org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher: Error during instantiating JDBC driver oracle.jdbc.driver.OracleDriver.rn java.lang.ClassNotFoundException: oracle.jdbc.driver.OracleDriverrn我每个节点的HADOOP_CLASSPATH为export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/home/hadoop/apache/hive-0.10.0/lib,并且JDBC库都在下面,为什么还会在存储中间结果的时候报这个错呢?另外,我整个任务是可以正确完成,感觉很奇怪
Hive Metastore安装方式及部署架构
基于Hadoop CDH5和Spark新版本2.3.2详细讲述了大数据各种技术,包括HDFS、YARN、MapReduce、Hive、HBase、Flume、Kafka、Hue、Spark Streaming,Spark SQL、Spark Structured Streaming。主要内容包括MapReduce项目离线处理、Hive与HBase大数据分析与挖掘、Hue大数据项目可视化、Spark SQL大数据项目离线分析、Spark Streaming 大数据项目实时分析,Spark Structured Streaming大数据项目实时分析,Web项目可视化。rn
hive 操作(二)——使用 mysql 作为 hive 的metastore
Hive 操作(一) hive 默认使用 derby 作为映射表(SQL 操作映射为MapReduce Job,将SQL中创建的表映射为 hdfs 的文件/文件夹,字段映射为其中的行),但 derby 的一大缺陷在于它不允许多个客户端同时执行sql操作(可能新版本的hive会有所升级)。我们又知hive的metastore,除了derby,还可存放于 mysql 中;CentOS mysql 的安装
Hive Metastore 启动成功又失败
hive 在正常使用中 metastore 忽然停掉,查看日志,报一下错误:2017-06-19 12:11:15,134 ERROR [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(6080)) - org.apache.thrift.transport.TTransportException: Could no
SparkSQL整合Hive实现metastore元数据共享
一、需求 在兼容Hive技术的前提下,推进SparkSQL技术的使用,那么就会衍生出一个问题:如何让Hive和SparkSQL数据共享?,比如在Hive中操作,然后在SparkSQL中能够看到变化,反之亦然。 注意:记住一个前提,先使用Hive在先,后引入SparkSQL,笔者在操作过程中发现了一个问题,之前SparkSQL中的数据会看不到,只能看到Hive中的,这个问题有待进一步研究。 H...
ambari 安装集群 Hive Metastore Stopped
Metastore on master.hadoop failed (Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/alerts/alert_hive_metastore.py", line 190, in execute
    timeout=int(check_command_timeout) )
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
Fail: Execution of 'export HIVE_CONF_DIR='/usr/hdp/current/hive-metastore/conf/conf.server' ; hive --hiveconf hive.metastore.uris=thrift://master.hadoop:9083 --hiveconf hive.metastore.client.connect.retry.delay=1 --hiveconf hive.metastore.failure.retries=1 --hiveconf hive.metastore.connect.retries=1 --hiveconf hive.metastore.client.socket.timeout=14 --hiveconf hive.execution.engine=mr -e 'show databases;'' returned 1. WARNING: Use "yarn jar" to launch YARN applications.

Logging initialized using configuration in file:/etc/hive/2.4.2.0-258/0/conf.server/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:680)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1533)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3000)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3019)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:484)
	... 8 more
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)
	... 14 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
	at org.apache.thrift.transport.TSocket.open(TSocket.java:187)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:426)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3000)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3019)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:484)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:680)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
	... 22 more
)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:472)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
	... 19 more
)
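The final "Connection refused" means nothing was listening on thrift://master.hadoop:9083 when the alert ran, i.e. the metastore process itself was down. A minimal way to confirm and recover, assuming shell access to master.hadoop (the log path below is just an example chosen for this sketch):

# Is anything listening on the metastore port?
netstat -tlnp | grep 9083

# If not, start the metastore in the background and watch its log
# for the real startup error:
nohup hive --service metastore > /tmp/metastore.log 2>&1 &
tail -f /tmp/metastore.log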
Hive architecture, the Hive metastore, and a comparison with traditional databases (1)
The course covers:
1. ZooKeeper - distributed coordination component
2. Hadoop 3 - big-data foundation component
3. Tez - the compute engine underneath YARN
4. Hive 3 - big-data warehouse
5. Spark 2 - real-time big-data processing
6. Oozie 5 - big-data workflow engine
Course features:
1. Latest APIs: Hadoop3/Spark2/Hive3/Oozie5
2. Hand-built cluster environment: compile + set up
3. Bundled resources: stage-by-stage images + slides + installation resources, including sample source code and scripts
4. Case-driven: per-module cases + a Tianchi data-analysis competition
5. Failure-scenario teaching
6. A complete hands-on project: Tianchi data analysis
Hive architecture, the Hive metastore, and a comparison with traditional databases (2)
A roundup of common errors when running Hive jobs, and how to fix them
Sometimes a Hive job fails and aborts partway through. Below is a roundup of errors I run into regularly at work and the fixes for them. Since these were jotted down quickly as they came up, I don't have the original error screenshots, but the key error messages are included. All of the errors below were taken from the job logs on the YARN monitoring page (http://hostname:8088/cluster); looking only at the command line or other run logs, you may see nothing more than return code 1 or ret...
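That point is worth spelling out: the CLI frequently reports only a bare "return code 1", while the actual exception sits in the aggregated YARN container logs. Assuming log aggregation is enabled, the same logs can also be pulled from the shell (the application id below is a made-up example; use the one shown on the :8088 page):

yarn application -list -appStates ALL                      # find the application id
yarn logs -applicationId application_1536800000000_0001    # dump its container logs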
Hadoop's Hive, Chapter 1: how Hive works and how to use MySQL as Hive's metastore database
What is Hive? An architecture overview; installing and administering Hive; HiveQL data types, tables, and table operations; querying data with HiveQL; the Hive Java client; Hive user-defined functions (UDFs) for deeper extension. Hive came out of Facebook. 1. Hive is a data-warehouse infrastructure built on top of Hadoop; it can store, query, and analyze large-scale data stored in Hadoop...
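For the MySQL-as-metastore setup that chapter describes, the standard approach is to set the javax.jdo connection properties in hive-site.xml and drop the MySQL JDBC driver jar into $HIVE_HOME/lib. A minimal sketch, assuming a local MySQL with a pre-created hive database and user (URL, user name, and password are placeholders):

<!-- hive-site.xml: point the metastore at MySQL instead of the default Derby -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>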
An error when running a certmgr command
Running certmgr.exe -add -c c:\xxx.cer -r localMachine -s root from the command line fails with:
Error: Failed to open the destination store
CertMgr Failed

Can anyone help me work out how to fix this? The OS is Windows Server 2008 R2.
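"Failed to open the destination store" against the LocalMachine Root store is usually a rights problem, so the first thing to try is the same command from an elevated (Run as administrator) prompt; certutil, which ships with Windows, is an alternative. A sketch using the poster's certificate path:

rem Run these from an elevated command prompt
certmgr.exe -add -c c:\xxx.cer -r localMachine -s root

rem Alternative using the built-in certutil (-f overwrites an existing entry)
certutil -addstore -f Root c:\xxx.cer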