Java project times out when connecting to HBase

I'm a complete beginner. I'm working on a project where a Java application connects to HBase: the Java project runs on Windows and connects to HBase on a Linux virtual machine. HBase is started, but the connection attempt reports a timeout (PS: the two hosts can reach each other). Any help would be appreciated.
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/E:/apache-tomcat-7.0.85-windows-x64/apache-tomcat-7.0.85/webapps/car_hbase/WEB-INF/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/E:/apache-tomcat-7.0.85-windows-x64/apache-tomcat-7.0.85/webapps/car_hbase/WEB-INF/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
e785dc9437424bf8a7714f460293896c Failed to create HBASE table!
java.io.IOException: Failed to get result within timeout, timeout=60000ms
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:232)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:277)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:438)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:312)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:604)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:410)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:420)
at util.HBaseUtil.createTable(HBaseUtil.java:45)
at util.HbaseDemo.createTable(HbaseDemo.java:55)
at util.StartupListener.contextInitialized(StartupListener.java:31)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5118)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5641)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:145)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:1015)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:991)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:652)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1296)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:2038)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

5 answers

Check whether the hostname can be resolved; you may need to add a hosts mapping.

It may be a DNS resolution problem. Try adding a mapping to the Windows hosts file: x.x.x.x localhost.localdomain, where x.x.x.x is the IP of the Linux machine.
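A minimal sketch of such an entry, assuming the Linux VM's IP is 192.168.1.100 and the HBase/ZooKeeper host is named master (both values are placeholders for illustration); on Windows the file is C:\Windows\System32\drivers\etc\hosts:

    192.168.1.100   localhost.localdomain
    192.168.1.100   master

The hostname that matters is the one the RegionServer registers in ZooKeeper, since the client is redirected to that name and must be able to resolve it from Windows.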

Have you configured the master/slave hostnames in the Windows hosts file?

Add the following to the configuration file (hbase-site.xml):

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>

Then obtain the configuration with public static final Configuration CONFIGURATION = HBaseConfiguration.create();
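A minimal, self-contained sketch of that client setup, assuming the cluster nodes are reachable from Windows as master, slave1 and slave2 and ZooKeeper listens on port 2181 (values taken from the snippet above; the table name test_table is purely illustrative). The same properties can also be set programmatically if you prefer not to rely on a classpath hbase-site.xml:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseConnectionCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Same values as in the hbase-site.xml above; adjust to your cluster.
        conf.set("hbase.zookeeper.quorum", "master,slave1,slave2");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        // Optional: fail fast instead of retrying for a long time (illustrative values).
        conf.set("hbase.client.retries.number", "3");
        conf.set("hbase.rpc.timeout", "10000");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // tableExists() scans hbase:meta, the same call that times out in the
            // stack trace above, so it makes a quick end-to-end connectivity test.
            System.out.println("tableExists: "
                    + admin.tableExists(TableName.valueOf("test_table")));
        }
    }
}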

The HBase address in your configuration may be wrong. Check whether you are pointing at the default address. When I couldn't connect, it turned out I was using the default address while my cluster actually used a different one.
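One way to see which values the client actually picked up is to print the effective configuration (a small sketch; the property names are standard HBase client settings, and the defaults noted in the comments are what the client falls back to when no hbase-site.xml is found on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ShowEffectiveHBaseConfig {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // If these print the defaults (localhost, 2181, /hbase) instead of your cluster's
        // values, the client never loaded your hbase-site.xml and is pointing at the wrong address.
        System.out.println("hbase.zookeeper.quorum = " + conf.get("hbase.zookeeper.quorum"));
        System.out.println("hbase.zookeeper.property.clientPort = " + conf.get("hbase.zookeeper.property.clientPort"));
        System.out.println("zookeeper.znode.parent = " + conf.get("zookeeper.znode.parent"));
    }
}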
