Hadoop distributed cluster setup: the master cannot connect to the slaves

Six machines running Hadoop, the JDK, and ZooKeeper, assigned as follows:

01, 02: master
03: ResourceManager
04, 05, 06: datanode, nodemanager

The slaves file lists 04, 05, 06.
Running start-dfs.sh on 01 starts the namenode on 02 and the datanodes on 04, 05, 06.
Running start-yarn.sh on 03 starts the ResourceManager on 03 and the nodemanagers on 04, 05, 06.

However, the Hadoop web UI on 01 shows no datanodes.
Uploading a file to HDFS fails, and the error boils down to: no datanodes available.
The firewall is disabled on all six VMs, and I have tried every fix I could find online without success.
Any help from the experts would be appreciated.

4 answers

Check that the machines can ssh to each other, then reformat the datanodes and the namenode.

xiaozhuzhuhb replied (almost 2 years ago): The machines can ssh to each other. I have reformatted several times and reinstalled Hadoop several times, and the result is always the same: the Hadoop web UI shows no datanodes. Queries work, but uploading fails with no datanodes available.


Could the storage configured for the data directory be too small? Take a look at the namenode and datanode logs.

In a case like this, just read the namenode and datanode logs; they will tell you exactly what went wrong.
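One hedged checklist along those lines (the paths are assumptions; they depend on hadoop.tmp.dir and the install layout). Datanodes that run but never appear in the web UI after repeated reformats very often have a clusterID mismatch against the freshly formatted namenode:

```sh
hdfs dfsadmin -report                                  # does the namenode see any live datanodes?
tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log  # run on 04/05/06, look for the registration error
# Compare the clusterID recorded on both sides; if they differ, wipe the
# datanode data dirs (or copy the namenode's clusterID over) and restart:
cat /tmp/hadoop-*/dfs/name/current/VERSION             # on the namenode
cat /tmp/hadoop-*/dfs/data/current/VERSION             # on each datanode
```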

Other related questions
Hadoop 3.1.0 distributed environment setup
1. Environment: VMware, CentOS 6.5, JDK 1.8 (Oracle), Hadoop 3.1.0. Running start-dfs.sh on the master node only starts the master's namenode and datanode, plus the secondarynamenode on slave1. The datanode processes on all the other child nodes never start and have to be started by hand on each node with `hdfs --daemon start datanode`. The namenode and datanode logs on the master show no exceptions at all.

```
[root@master hadoop-3.1.0]# start-dfs.sh
Starting namenodes on [master]
Starting datanodes
Starting secondary namenodes [node1]
2019-07-14 11:04:46,521 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@master hadoop-3.1.0]# jps
29025 NameNode
29147 DataNode
29420 Jps
```

Question: why does start-dfs.sh fail to start the datanodes on the other slave nodes, yet manage to start the secondarynamenode on slave1? Please take a look.

![screenshot](https://img-ask.csdn.net/upload/201907/14/1563100513_35604.png)
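A hedged pointer for this symptom: Hadoop 3.x reads the worker host list from `etc/hadoop/workers`, not from the `slaves` file that 2.x tutorials describe. If only `slaves` was populated, `workers` still contains the default `localhost`, which produces exactly "datanode starts on the master, nowhere else". A minimal sketch (the hostnames are assumptions):

```
# $HADOOP_HOME/etc/hadoop/workers
slave1
slave2
slave3
```

The secondarynamenode is placed by `dfs.namenode.secondary.http-address` in hdfs-site.xml rather than by the workers file, which would explain why it alone starts on slave1.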
Upgrading a single-database system (MySQL) to master-slave mode (writes to the master, reads from the slave)
I want to upgrade a single-database MySQL system to master-slave mode (writes go to the master, reads to the slave). I ran three simple tests:

1. ReplicationDriver plus directly created JDBC connections: writing to the master and reading from the slave works.
2. Tomcat's built-in DBCP connection pool plus ReplicationDriver: writing to the master and reading from the slave works.
3. Hibernate plus Tomcat's DBCP pool plus ReplicationDriver: reads and writes always go to the master.

Has anyone used Hibernate, the DBCP pool, and ReplicationDriver together in a real project, or tested this combination? Please advise, or suggest another way to reach the same goal.

Notes: 1. I am on Hibernate 3.0.x. 2. I do not plan to use MySQL Proxy.

I suspect Hibernate does not support ReplicationConnection: one Session maps to one Connection, and cannot map to a ReplicationConnection.
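For context, a minimal sketch of how ReplicationDriver routes statements (Connector/J 5.x-era API to match the Hibernate 3.0.x timeframe; hosts and credentials are hypothetical). The driver sends work to the master or a slave based on `Connection.setReadOnly(...)`, so a stack that never calls `setReadOnly(true)` — which is consistent with test 3 — would stay on the master:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.util.Properties;
import com.mysql.jdbc.ReplicationDriver;

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        ReplicationDriver driver = new ReplicationDriver();
        Properties props = new Properties();
        props.put("autoReconnect", "true");
        props.put("roundRobinLoadBalance", "true");
        props.put("user", "app");        // hypothetical credentials
        props.put("password", "secret");

        // First host is the master, the rest are slaves (hypothetical hosts).
        Connection conn = driver.connect(
                "jdbc:mysql:replication://master:3306,slave1:3306/mydb", props);

        conn.setReadOnly(false);  // routed to the master
        conn.createStatement().executeUpdate("UPDATE t SET c = 1");

        conn.setReadOnly(true);   // routed to a slave
        ResultSet rs = conn.createStatement().executeQuery("SELECT c FROM t");
        rs.close();
        conn.close();
    }
}
```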
After the Hadoop cluster is set up, the DataNode mysteriously starts on the master machine and not on the slaves
I have one master and three slaves. The problem: after setting up the cluster and running start-all.sh on the master, the datanode starts on the master itself and not on the slaves. I am on Hadoop 3.1 + JDK 1.8. An earlier Hadoop 2.6 + OpenJDK 10 cluster built from a book started correctly: the namenode and friends ran on the master, the datanodes on the slaves. After switching to the new version it no longer works; I have been at it for a whole day.

Current state: all machines can ping and ssh each other and have normal internet access. If a single machine acts as both master and slave, the web UI on port 50070 (changed to 9870 since Hadoop 3) opens and everything works.

Running start-all.sh on the master prints:

```
WARNING: Attempting to start all Apache Hadoop daemons as hduser in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [emaster]
Starting datanodes
Starting secondary namenodes [emaster]
2018-05-04 22:39:37,858 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers
```

It never even tries the slaves. jps shows:

```
3233 SecondaryNameNode
3492 ResourceManager
2836 NameNode
3653 NodeManager
3973 Jps
3003 DataNode
```

Everything started on the master; nothing started on the slave machines. The datanode log shows the master's datanode being started three times (my master is named emaster); excerpt:

```
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = emaster/192.168.56.100
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 3.1.0
```

and each round reports this error:

```
java.io.EOFException: End of File Exception between local host is: "emaster/192.168.56.100"; destination host is: "emaster":9000; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:789)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1495)
	at org.apache.hadoop.ipc.Client.call(Client.java:1437)
	at org.apache.hadoop.ipc.Client.call(Client.java:1347)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
	at com.sun.proxy.$Proxy20.sendHeartbeat(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:166)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:514)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:645)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:841)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1796)
	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1165)
	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1061)
2018-05-04 21:49:43,320 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM
```

My guess: I configured three slaves, but something in the configuration is wrong, so the master starts its own datanode instead of the slaves'. I really cannot find the mistake; the masters file and the slaves file are both configured, and the machines can ping each other. Can anyone point this newbie in the right direction? Many thanks!
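The same Hadoop 3 pitfall flagged above seems plausible here too, since the book targeted 2.6 and the question mentions a configured slaves file: Hadoop 3 ignores `slaves` and reads `etc/hadoop/workers`, whose default content is `localhost` — which would make start-all.sh start every worker daemon on emaster. A quick hedged check:

```sh
cat $HADOOP_HOME/etc/hadoop/workers   # Hadoop 3 reads this file, not "slaves"
# If it says "localhost", replace it with the three slave hostnames.
# Afterwards, a single slave can be sanity-checked by hand:
hdfs --daemon start datanode
```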
grant replication slave on *.* to 'abc'@'%' identified by '123456'; reports an error
Running `grant replication slave on *.* to 'abc'@'%' identified by '123456';` reports an error:

> 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'identified by '123456'' at line 1
> Elapsed: 0.062s

![screenshot](https://img-ask.csdn.net/upload/201911/15/1573810807_820944.png)
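A hedged pointer: if the server is MySQL 8.0 or newer (the version is not stated in the question), `GRANT ... IDENTIFIED BY` was removed; the account has to be created first and the privilege granted separately:

```sql
CREATE USER 'abc'@'%' IDENTIFIED BY '123456';
GRANT REPLICATION SLAVE ON *.* TO 'abc'@'%';
```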
A Java program reads floating-point values from a PLC, but the decimal part comes back wrong; how can I fix this?
1. I implemented Modbus communication with modbus4j. When reading a floating-point value from the PLC, the integer part is correct but the decimal part is wrong. How do I fix this?

2. The method that reads the PLC's holding registers:

```java
readHoldingRegistersTest(master, 1, 1, 4);

private static void readHoldingRegistersTest(ModbusMaster master, int slaveId, int start, int len) {
    try {
        ReadHoldingRegistersRequest request = new ReadHoldingRegistersRequest(slaveId, start, len);
        ReadHoldingRegistersResponse response = (ReadHoldingRegistersResponse) master.send(request);
        if (response.isException()) {
            System.out.println("Exception response: message=" + response.getExceptionMessage());
        } else {
            System.out.println(Arrays.toString(response.getShortData()));
            short[] list = response.getShortData();
            byte[] bb = shortToBytes(list);
            Float ff = byte2float(bb, 0);
            System.out.println("read " + ff);
        }
    } catch (ModbusTransportException e) {
        e.printStackTrace();
    }
}
```

3. Talking to the PLC: the PLC reports 46.68, but the program reads 46.5. ![screenshot](https://img-ask.csdn.net/upload/201912/10/1575987797_782854.png)

4. Talking to the Modbus Slave simulator, the data reads correctly. Console output as follows:
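A hedged observation on the symptom: 46.68f is the bit pattern 0x423AB852, and zeroing its low 16-bit word gives exactly 0x423A0000 = 46.5f, so the fractional word is apparently being lost — typically a register-offset or word-order mismatch between the PLC and the `shortToBytes`/`byte2float` helpers (which are not shown). A small sketch that tries both common word orders:

```java
// Combine two 16-bit Modbus registers into an IEEE-754 float, in either word order.
static float registersToFloat(short a, short b, boolean highWordFirst) {
    int bits = highWordFirst
            ? ((a & 0xFFFF) << 16) | (b & 0xFFFF)
            : ((b & 0xFFFF) << 16) | (a & 0xFFFF);
    return Float.intBitsToFloat(bits);
}

// Against the registers from response.getShortData():
// short[] regs = response.getShortData();
// System.out.println(registersToFloat(regs[0], regs[1], true));   // try one order...
// System.out.println(registersToFloat(regs[0], regs[1], false));  // ...then the other
```

It is also worth confirming the start offset: the call reads from register 1, and if the PLC maps the value at register 0, the read straddles the float and picks up a zero word.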
Hadoop 2.6.5 cluster: on startup the master only starts itself as a datanode; the slave nodes cannot be driven and produce no logs?
One master, two slaves. Reformatting several times and deleting /usr/local/hadoop/logs and /tmp has no effect. The error log shows:

```
2019-05-22 15:43:46,212 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: Master/219.226.109.130:9000
```

Yet the firewall is disabled on every node. /etc/hosts on each node reads:

```
#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
219.226.109.130 Master
219.226.109.129 Slave1
219.226.109.131 Slave2
```
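Two hedged checks for that log line. First, commenting out the 127.0.0.1 localhost entries in /etc/hosts can itself break services that resolve localhost, so restoring those lines is worth trying. Second, confirm the namenode is actually listening on the address the datanodes dial:

```sh
jps                          # on Master: is NameNode running at all?
netstat -tlnp | grep 9000    # is it bound to 219.226.109.130 (or 0.0.0.0), not just 127.0.0.1?
telnet Master 9000           # on a slave: is the port reachable across the network?
```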
Hadoop 2.7.2 distributed setup: after formatting, the namenode fails to start
Step 1: run hadoop namenode -formate

```
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
STARTUP_MSG:   java = 1.7.0_76
************************************************************/
16/08/02 04:26:16 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/08/02 04:26:16 INFO namenode.NameNode: createNameNode [-formate]
Usage: java NameNode [-backup] |
	[-checkpoint] |
	[-format [-clusterid cid ] [-force] [-nonInteractive] ] |
	[-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] |
	[-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] |
	[-rollback] |
	[-rollingUpgrade <rollback|downgrade|started> ] |
	[-finalize] |
	[-importCheckpoint] |
	[-initializeSharedEdits] |
	[-bootstrapStandby] |
	[-recover [ -force] ] |
	[-metadataVersion ]  ]
16/08/02 04:26:16 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.234.100
```

Step 2: run start-all.sh; the result:

```
[root@master sbin]# sh start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/08/02 05:45:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-master.out
slave2: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave2.out
slave3: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave3.out
slave1: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave1.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-secondarynamenode-master.out
16/08/02 05:46:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-master.out
slave2: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave2.out
slave3: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave3.out
slave1: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave1.out
[root@master sbin]# jps
2613 ResourceManager
2467 SecondaryNameNode
2684 Jps
```

namenode log:

```
2016-08-02 05:49:49,910 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
2016-08-02 05:49:49,928 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2016-08-02 05:49:49,928 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false. Rechecking.
2016-08-02 05:49:49,930 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false
2016-08-02 05:49:49,934 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2016-08-02 05:49:49,935 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2016-08-02 05:49:49,935 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2016-08-02 05:49:49,935 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: NameNode is not formatted.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
2016-08-02 05:49:49,949 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2016-08-02 05:49:49,961 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.234.100
```
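A hedged reading of the transcripts above: the step 1 log shows `createNameNode [-formate]` immediately followed by the usage message, i.e. the NameNode rejected the misspelled option and the format never actually ran, which matches the later `NameNode is not formatted` exception. The supported flag, per that same usage output, is `-format`:

```sh
hadoop namenode -format   # or equivalently on 2.x: hdfs namenode -format
start-dfs.sh
```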
How to parallelize work on Hadoop
The distributed Hadoop cluster is already set up. If I write parallel code on the master node and run it there, is the code actually executed in parallel? Do I need to write code on the slave nodes as well? Thanks.
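For context, a hedged sketch of the usual MapReduce answer: the job is written and packaged once and submitted from any node; the framework ships the jar to the nodes that run the map and reduce tasks, so nothing is written on the slaves themselves. The mapper and reducer classes below are hypothetical placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyJobDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "my parallel job");
        job.setJarByClass(MyJobDriver.class);   // this jar is shipped to the worker nodes
        job.setMapperClass(MyMapper.class);     // hypothetical mapper, runs on the slaves
        job.setReducerClass(MyReducer.class);   // hypothetical reducer, runs on the slaves
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```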
Redis with sentinels: after the failed master recovers, it does not become a slave of the new master
Hi all. After setting up master-slave replication with sentinels, I found a problem while testing. I started three redis instances and three sentinels; the redis on 01 is the master, and 02 and 03 are replicas.

![screenshot](https://img-ask.csdn.net/upload/201901/15/1547537146_354364.png)

When I manually shut down the master on 01, the sentinel log shows the master switching to 02, and 03 becomes a replica of 02.

![screenshot](https://img-ask.csdn.net/upload/201901/15/1547537699_768708.png)
![screenshot](https://img-ask.csdn.net/upload/201901/15/1547537687_640748.png)

But after I manually start 01's redis again, the sentinel log prints nothing. Running `info Replication` on 02 shows the slave count is still one, with only 03's address. Running `info Replication` on 01 shows its role is still master, with 0 slaves — it never switched to the slave role. So right now there are two masters and one slave. How do I fix the recovered master not becoming a slave?

![screenshot](https://img-ask.csdn.net/upload/201901/15/1547537902_849552.png)

The redis and sentinel configs are attached.

Master redis config:

```
daemonize yes
port 6379
bind 0.0.0.0
timeout 0
save 900 1
save 300 10
save 60 10000
requirepass 123456
logfile /DATA/redis1/log/redis.log
```

Replica redis config:

```
daemonize yes
port 6380
bind 0.0.0.0
timeout 0
save 900 1
save 300 10
save 60 10000
requirepass 123456
logfile /DATA/redis2/log/redis.log
slaveof 10.221.149.136 6379
masterauth 123456
```

Sentinel config:

```
port 26380
daemonize yes
logfile /DATA/redis2/log/sentinel.log
sentinel monitor mymaster 10.221.149.136 6379 1
sentinel auth-pass mymaster 123456
sentinel down-after-milliseconds mymaster 3000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 10000
```
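One hedged observation from the configs as quoted: every instance sets `requirepass 123456`, but only the replicas set `masterauth`. After the failover, 01 has to authenticate against the new master 02 before it can resync as a replica, so a missing `masterauth` on 01 would leave it stuck in the master role. The usual remedy is to set it on every node, including the original master:

```
# add to 01's config as well, so it can rejoin as a replica after a failover
masterauth 123456
```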
Modbus RTU connection reports an error
While developing the Modbus RTU connection, I found that running the main method in my test class works without any problem, but calling the same code from my business logic throws an error. ![screenshot](https://img-ask.csdn.net/upload/201908/12/1565582300_435079.png) Below is my code:

```java
package com.crrcdt.zn.common.utils.Modbus;

import com.serotonin.modbus4j.BatchRead;
import com.serotonin.modbus4j.BatchResults;
import com.serotonin.modbus4j.ModbusFactory;
import com.serotonin.modbus4j.ModbusMaster;
import com.serotonin.modbus4j.code.DataType;
import com.serotonin.modbus4j.exception.ErrorResponseException;
import com.serotonin.modbus4j.exception.ModbusInitException;
import com.serotonin.modbus4j.exception.ModbusTransportException;
import com.serotonin.modbus4j.locator.BaseLocator;
import com.serotonin.modbus4j.msg.WriteCoilRequest;
import com.serotonin.modbus4j.msg.WriteCoilResponse;

import java.util.ArrayList;
import java.util.Date;

public class Modbus4jUtils {

    /** Factory. */
    static ModbusFactory modbusFactory;

    static {
        if (modbusFactory == null) {
            modbusFactory = new ModbusFactory();
        }
    }

    /**
     * Get the master.
     */
    public static ModbusMaster getMaster() throws ModbusInitException {
        // IpParameters params = new IpParameters();
        // params.setHost(host);
        // params.setPort(port);
        // modbusFactory.createRtuMaster(wapper);    // RTU protocol
        // modbusFactory.createUdpMaster(params);    // UDP protocol
        // modbusFactory.createAsciiMaster(wrapper); // ASCII protocol
        // ModbusMaster master = modbusFactory.createTcpMaster(params, false); // TCP protocol
        ModbusMaster master1 = modbusFactory.createRtuMaster(new SerialPortWrapperImpl());
        master1.init();
        return master1;
    }

    /**
     * Read switch data of type [01 Coil Status 0x].
     *
     * @param slaveId slaveId
     * @param offset  position
     * @return the value read
     */
    public static Boolean readCoilStatus(ModbusMaster master, int slaveId, int offset)
            throws ModbusTransportException, ErrorResponseException, ModbusInitException {
        // 01 Coil Status
        BaseLocator<Boolean> loc = BaseLocator.coilStatus(slaveId, offset);
        Boolean value = master.getValue(loc);
        return value;
    }

    /**
     * Read switch data of type [02 Input Status 1x].
     */
    public static Boolean readInputStatus(ModbusMaster master, int slaveId, int offset)
            throws ModbusTransportException, ErrorResponseException {
        // 02 Input Status
        BaseLocator<Boolean> loc = BaseLocator.inputStatus(slaveId, offset);
        Boolean value = master.getValue(loc);
        return value;
    }

    /**
     * Read analog data of type [03 Holding Register 2x].
     *
     * @param slaveId  slave Id
     * @param offset   position
     * @param dataType data type, from com.serotonin.modbus4j.code.DataType
     */
    public static Number readHoldingRegister(ModbusMaster master, int slaveId, int offset, int dataType)
            throws ModbusTransportException, ErrorResponseException, ModbusInitException {
        // 03 Holding Register read
        BaseLocator<Number> loc = BaseLocator.holdingRegister(slaveId, offset, dataType);
        Number value = master.getValue(loc);
        return value;
    }

    /**
     * Read analog data of type [04 Input Registers 3x].
     *
     * @param slaveId  slaveId
     * @param offset   position
     * @param dataType data type, from com.serotonin.modbus4j.code.DataType
     * @return the result
     */
    public static Number readInputRegisters(ModbusMaster master, int slaveId, int offset, int dataType)
            throws ModbusTransportException, ErrorResponseException, ModbusInitException {
        // 04 Input Registers read
        BaseLocator<Number> loc = BaseLocator.inputRegister(slaveId, offset, dataType);
        Number value = master.getValue(loc);
        return value;
    }

    /**
     * Batch read usage.
     */
    public static void batchRead(ModbusMaster master, ArrayList data)
            throws ModbusTransportException, ErrorResponseException, ModbusInitException {
        BatchRead<Integer> batch = new BatchRead<Integer>();
        batch.addLocator(0, BaseLocator.holdingRegister(1, 1, DataType.FOUR_BYTE_FLOAT));
        batch.addLocator(1, BaseLocator.inputStatus(1, 0));
        batch.setContiguousRequests(false);
        BatchResults<Integer> results = master.send(batch);
        System.out.println(results.getValue(0));
        System.out.println(results.getValue(1));
    }

    public static boolean writeCoilTest(ModbusMaster master, int slaveId, int offset, boolean value) {
        boolean values = false;
        try {
            WriteCoilRequest request = new WriteCoilRequest(slaveId, offset, value);
            WriteCoilResponse response = (WriteCoilResponse) master.send(request);
            if (response.isException())
                System.out.println("Exception response: message=" + response.getExceptionMessage());
            else
                System.out.println("Function code 1: writing " + value + " succeeded!");
            System.out.println(value + " finished " + new Date());
            values = true;
        } catch (ModbusTransportException e) {
            e.printStackTrace();
        }
        return values;
    }
}
```

The test class:

```java
package com;

import com.crrcdt.zn.common.utils.Modbus.Modbus4jUtils;
import com.serotonin.modbus4j.ModbusMaster;
import com.serotonin.modbus4j.exception.ErrorResponseException;
import com.serotonin.modbus4j.exception.ModbusInitException;
import com.serotonin.modbus4j.exception.ModbusTransportException;

public class MudbusTest {
    // Input/output streams
    // private static InputStream inputStream;
    // private static OutputStream outputStream;

    public static void main(String[] args) {
        // modbus rtu
        try {
            ModbusMaster master = Modbus4jUtils.getMaster();
            // while (true) {
            System.out.println(Modbus4jUtils.readCoilStatus(master, 1, 0));
            Modbus4jUtils.writeCoilTest(master, 1, 0, true);
            // System.out.println(Modbus4jUtils.readHoldingRegister(master, 1, 16, DataType.FOUR_BYTE_FLOAT));
            // }
            master.destroy();
        } catch (ModbusInitException e) {
            e.printStackTrace();
        } catch (ModbusTransportException e) {
            e.printStackTrace();
        } catch (ErrorResponseException e) {
            e.printStackTrace();
        }
    }
}
```

And here is how I call it:

```java
package com.crrcdt.zn.zngjx.modbus;

import com.crrcdt.zn.common.utils.Modbus.Modbus4jUtils;
import com.serotonin.modbus4j.ModbusMaster;
import com.serotonin.modbus4j.exception.ErrorResponseException;
import com.serotonin.modbus4j.exception.ModbusInitException;
import com.serotonin.modbus4j.exception.ModbusTransportException;
//import com.crrcdt.zn.common.utils.ModbusRTU;

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class ModbusRTUController {

    /**
     * Write the door-open command.
     *
     * @param address the address
     * @param com     the COM port to use
     * @param slaveId the slave device address
     */
    public static boolean writeCoil() {
        boolean returns = false;
        // RTU transport
        try {
            ModbusMaster master = Modbus4jUtils.getMaster();
            // while (true) {
            System.out.println("door-open start time " + new Date());
            // System.out.println(Modbus4jUtils.readCoilStatus(master, 1, 0));
            returns = Modbus4jUtils.writeCoilTest(master, 1, 0, true);
            // System.out.println(Modbus4jUtils.readHoldingRegister(master, 1, 16, DataType.FOUR_BYTE_FLOAT));
            // }
            master.destroy();
        } catch (ModbusInitException e) {
            e.printStackTrace();
        }
        return returns;
    }

    public static boolean writeCoil1() {
        boolean returns = false;
        // RTU transport
        try {
            ModbusMaster master = Modbus4jUtils.getMaster();
            // while (true) {
            System.out.println("door-close start time " + new Date());
            // System.out.println(Modbus4jUtils.readCoilStatus(master, 1, 0));
            returns = Modbus4jUtils.writeCoilTest(master, 1, 0, false);
            // System.out.println(Modbus4jUtils.readHoldingRegister(master, 1, 16, DataType.FOUR_BYTE_FLOAT));
            // }
            master.destroy();
        } catch (ModbusInitException e) {
            e.printStackTrace();
        }
        return returns;
    }

    /**
     * Read the door-open command.
     *
     * @param address the address
     * @param com     the COM port to use
     */
    public static boolean readCoil() {
        boolean returns = false;
        // RTU transport
        try {
            ModbusMaster master = Modbus4jUtils.getMaster();
            // while (true) {
            System.out.println("read start time " + new Date());
            returns = Modbus4jUtils.readCoilStatus(master, 1, 0);
            // returns = Modbus4jUtils.writeCoilTest(master, 1, 0, true);
            // System.out.println(Modbus4jUtils.readHoldingRegister(master, 1, 16, DataType.FOUR_BYTE_FLOAT));
            // }
            master.destroy();
        } catch (ModbusInitException e) {
            e.printStackTrace();
        } catch (ModbusTransportException e) {
            e.printStackTrace();
        } catch (ErrorResponseException e) {
            e.printStackTrace();
        }
        return returns;
    }

    /**
     * Read multiple door-open commands.
     *
     * @param address the address
     * @param com     the COM port to use
     * @param len     the read length
     */
    // public boolean[] readCoillist(int address, String com, int len, int slaveId) {
    //     boolean[] returns = new boolean[len];
    //     // RTU transport
    //     ModbusMaster master = ModbusRTU.getRtuMaster(com);
    //     try {
    //         // initialize
    //         master.setTimeout(50);   // set the timeout
    //         master.setRetries(1);    // set the retry count
    //         try {
    //             master.init();
    //         } catch (ModbusInitException e) {
    //             e.printStackTrace();
    //         }
    //         // set the slave ID
    //         // int slaveId = 1;
    //         // read discrete values
    //         returns = ModbusRTU.readCoilTest(master, slaveId, address, len);
    //     } finally {
    //         master.destroy();
    //     }
    //     return returns;
    // }

    /**
     * Read holding register data.
     *
     * @param address the address
     * @param com     the COM port to use
     */
    // public static int readHoldingRegisters(int address, String com, int slaveId) {
    //     int i;
    //     // RTU transport
    //     ModbusMaster master = ModbusRTU.getRtuMaster(com);
    //     try {
    //         // initialize
    //         master.setTimeout(50);   // set the timeout
    //         master.setRetries(100);  // set the retry count
    //         try {
    //             master.init();
    //         } catch (ModbusInitException e) {
    //             e.printStackTrace();
    //         }
    //         // set the slave ID
    //         // int slaveId = 1;
    //         // read a single value
    //         short[] returns = ModbusRTU.readHoldingRegistersTest(master, slaveId, address, 1);
    //         i = returns[0];
    //     } finally {
    //         master.destroy();
    //     }
    //     return i;
    // }

    /**
     * Read holding register data.
     *
     * @param address the address
     * @param com     the COM port
     * @param slaveId the slave IDs
     */
    // public List<Integer> readHoldingRegisterslist(int address, String com, List<Integer> slaveId) {
    //     List<Integer> returns = new ArrayList<>();
    //     // RTU transport
    //     ModbusMaster master = ModbusRTU.getRtuMaster(com);
    //     try {
    //         // initialize
    //         master.setTimeout(50);   // set the timeout
    //         master.setRetries(100);  // set the retry count
    //         try {
    //             master.init();
    //         } catch (ModbusInitException e) {
    //             e.printStackTrace();
    //         }
    //         // set the slave ID
    //         // int slaveId = 1;
    //         for (int i : slaveId) {
    //             // read a single value per slave
    //             short[] returns1 = ModbusRTU.readHoldingRegistersTest(master, i, address, 1);
    //             if (returns1 != null && returns1.length > 0) {
    //                 returns.add((int) returns1[0]);
    //             }
    //         }
    //     } finally {
    //         master.destroy();
    //     }
    //     return returns;
    // }
}
```

Begging the experts for help!!!
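Since the actual exception is only visible in the screenshot, here is one hedged guess that often explains "works in main(), fails when called from the service layer": every call to writeCoil/readCoil runs Modbus4jUtils.getMaster(), which re-opens and re-initializes the serial port; if a previous master still holds the port (for instance when two requests overlap, or destroy() was skipped after an exception), the next init fails. A sketch that opens the port once and shares it:

```java
import com.serotonin.modbus4j.ModbusMaster;
import com.serotonin.modbus4j.exception.ModbusInitException;

// Hypothetical wrapper: open the serial port once and reuse the master.
public class SharedModbusMaster {
    private static ModbusMaster master;

    public static synchronized ModbusMaster get() throws ModbusInitException {
        if (master == null) {
            master = Modbus4jUtils.getMaster(); // opens and inits the port once
        }
        return master;
    }

    public static synchronized void close() {
        if (master != null) {
            master.destroy();  // release the serial port on shutdown
            master = null;
        }
    }
}
```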
ActiveMQ master/slave mode: after shutting down the master, the slave restarts automatically, but accessing it reports an error
```
INFO | Using Persistence Adapter: JDBCPersistenceAdapter(org.apache.commons.dbcp.BasicDataSource@683d57c4)
INFO | Database adapter driver override recognized for : [oracle_jdbc_driver] - adapter: class org.apache.activemq.store.jdbc.adapter.OracleJDBCAdapter
INFO | Database lock driver override not found for : [oracle_jdbc_driver]. Will use default implementation.
INFO | Attempting to acquire the exclusive lock to become the Master broker
INFO | Becoming the master on dataSource: org.apache.commons.dbcp.BasicDataSource@683d57c4
INFO | ActiveMQ 5.5.0 JMS Message Broker (testBroker) is starting
INFO | For help or more information please see: http://activemq.apache.org/
INFO | JMX consoles can connect to service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
INFO | Listening for connections at: tcp://ck:20002
INFO | Connector openwire Started
INFO | ActiveMQ JMS Message Broker (testBroker, ID:ck-2430-1511311173619-0:1) started
INFO | jetty-7.1.6.v20100715
INFO | ActiveMQ WebConsole initialized.
INFO | Initializing Spring FrameworkServlet 'dispatcher'
INFO | ActiveMQ Console at http://0.0.0.0:10002/admin
INFO | Initializing Spring root WebApplicationContext
INFO | OSGi environment not detected.
INFO | Apache Camel 2.7.0 (CamelContext: camel) is starting
INFO | JMX enabled. Using ManagedManagementStrategy.
INFO | Found 5 packages with 16 @Converter classes to load
INFO | Loaded 152 type converters in 0.509 seconds
WARN | Broker localhost not started so using testBroker instead
INFO | Connector vm://localhost Started
INFO | Route: route1 started and consuming from: Endpoint[activemq://example.A]
INFO | Total 1 routes, of which 1 is started.
INFO | Apache Camel 2.7.0 (CamelContext: camel) started in 1.265 seconds
INFO | Camel Console at http://0.0.0.0:10002/camel
INFO | ActiveMQ Web Demos at http://0.0.0.0:10002/demo
INFO | RESTful file access application at http://0.0.0.0:10002/fileserver
INFO | Started SelectChannelConnector@0.0.0.0:10002
WARN | /admin/topics.jsp
javax.el.ELException: java.lang.reflect.UndeclaredThrowableException
	at javax.el.BeanELResolver.getValue(BeanELResolver.java:298)
	at javax.el.CompositeELResolver.getValue(CompositeELResolver.java:175)
	at com.sun.el.parser.AstValue.getValue(AstValue.java:138)
	at com.sun.el.ValueExpressionImpl.getValue(ValueExpressionImpl.java:206)
	at org.apache.jasper.runtime.PageContextImpl.evaluateExpression(PageContextImpl.java:1001)
	at org.apache.jsp.topics_jsp._jspx_meth_c_out_1(org.apache.jsp.topics_jsp:218)
	at org.apache.jsp.topics_jsp._jspx_meth_c_forEach_0(org.apache.jsp.topics_jsp:162)
	at org.apache.jsp.topics_jsp._jspService(org.apache.jsp.topics_jsp:104)
	at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:109)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
	at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:389)
	at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:486)
	at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:380)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:527)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1216)
	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:83)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1187)
	at org.apache.activemq.web.SessionFilter.doFilter(SessionFilter.java:45)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1187)
	at org.apache.activemq.web.filter.ApplicationContextFilter.doFilter(ApplicationContextFilter.java:81)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1187)
	at com.opensymphony.module.sitemesh.filter.PageFilter.parsePage(PageFilter.java:118)
	at com.opensymphony.module.sitemesh.filter.PageFilter.doFilter(PageFilter.java:52)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1187)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:421)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:493)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:225)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:930)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:358)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:866)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:456)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:113)
	at org.eclipse.jetty.server.Server.handle(Server.java:351)
	at org.eclipse.jetty.server.HttpConnection.handleRequest(HttpConnection.java:594)
	at org.eclipse.jetty.server.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:1042)
	at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:549)
	at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:211)
	at org.eclipse.jetty.server.HttpConnection.handle(HttpConnection.java:424)
	at org.eclipse.jetty.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:506)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:436)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.UndeclaredThrowableException
	at com.sun.proxy.$Proxy68.getName(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at javax.el.BeanELResolver.getValue(BeanELResolver.java:293)
	... 47 more
Caused by: javax.management.InstanceNotFoundException: org.apache.activemq:BrokerName=testBroker,Type=Topic,Destination=ActiveMQ.Advisory.MasterBroker
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
	at javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
	... 53 more
```
Hadoop 2.5.1 development on Win7 reports a communication error
The cluster environment is CentOS 6.2, with 2 masters and 4 slaves. Running example.jar on the cluster passes the tests, the management page shows all nodes healthy, and jps looks normal. On Win7, Eclipse with the plugin downloaded from CSDN shows an update error, but it can still connect to the Hadoop cluster and browse HDFS normally. Running the example wordcount.java following the blog http://www.cnblogs.com/huligong1234/p/4137133.html shows it can connect to the master but not to the slaves:

```
WARN - Failed to connect to /192.168.11.94:50010 for block, add to deadNodes and continue. java.net.ConnectException: Connection timed out: no further information
java.net.ConnectException: Connection timed out: no further information
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)...
```

All four slaves time out the same way. Does anyone know what is going on?
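A hedged pointer for "namenode reachable, every datanode times out on 50010" from a Windows dev box: this often means the client cannot reach the raw IPs the datanodes advertise (or a firewall blocks 50010). If the datanode hostnames resolve on the Windows machine (e.g. mapped in C:\Windows\System32\drivers\etc\hosts), asking the HDFS client to dial datanodes by hostname is one common workaround:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to datanodes via their hostnames instead of the raw IPs they
        // registered with (assumes the hostnames resolve on this machine).
        conf.set("dfs.client.use.datanode.hostname", "true");
        FileSystem fs = FileSystem.get(new URI("hdfs://master:9000"), conf); // hypothetical master address
        System.out.println(fs.exists(new Path("/")));
        fs.close();
    }
}
```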
Seeking help: Flink cluster in standalone mode, HA deployment fails to start
##### An HDFS, ZooKeeper, and Flink cluster deployed by following a tutorial.
##### HDFS and ZooKeeper work fine, and flink standalone starts normally. While setting up the HA cluster, startup reported no error, but jps showed no processes, and the log contains the following. (Startup order: zkServer.sh start ==> start-dfs.sh, start-yarn.sh ==> bin/start-cluster.sh)
##### Cluster environment: CentOS 7.3, Hadoop 2.8.5, Java 1.8, Scala 2.12, Flink 1.9.0 (Scala 2.12), ZooKeeper 3.4.14
```
2019-09-05 21:38:02,658 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --------------------------------------------------------------------------------
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting StandaloneSessionClusterEntrypoint (Version: 1.9.0, Rev:9c32ed9, Date:19.08.2019 @ 16:16:55 UTC)
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - OS current user: root
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Current Hadoop/Kerberos user: <no hadoop dependency found>
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.221-b11
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Maximum heap size: 989 MiBytes
2019-09-05 21:38:02,660 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JAVA_HOME: /usr/java/jdk1.8.0_221-amd64
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - No Hadoop Dependency available
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JVM Options:
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -    -Xms1024m
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -    -Xmx1024m
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -    -Dlog.file=/opt/software/flink/log/flink-root-standalonesession-14-master.log
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -    -Dlog4j.configuration=file:/opt/software/flink/conf/log4j.properties
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -    -Dlogback.configurationFile=file:/opt/software/flink/conf/logback.xml
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Program Arguments:
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -    --configDir
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -    /opt/software/flink/conf
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -    --executionMode
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -    cluster
2019-09-05 21:38:02,661 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -    --host
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -    master
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -    --webui-port
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint -    8081
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Classpath: /opt/software/flink/lib/flink-table_2.12-1.9.0.jar:/opt/software/flink/lib/flink-table-blink_2.12-1.9.0.jar:/opt/software/flink/lib/log4j-1.2.17.jar:/opt/software/flink/lib/slf4j-log4j12-1.7.15.jar:/opt/software/flink/lib/flink-dist_2.12-1.9.0.jar::/usr/hadoop/hadoop/etc/hadoop:
2019-09-05 21:38:02,662 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --------------------------------------------------------------------------------
2019-09-05 21:38:02,663 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Registered UNIX signal handlers for [TERM, HUP, INT]
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: env.java.home, /usr/java/jdk1.8.0_221-amd64
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.address, master
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6123
2019-09-05 21:38:02,696 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.size, 1024m
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.heap.size, 1024m
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 1
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 1
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability, zookeeper
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.storageDir, hdfs:///flink/ha/
2019-09-05 21:38:02,697 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.zookeeper.quorum, master:2181,slave02:2181,slave03:2181
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.zookeeper.path.root, /flink
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.cluster-id, /cluster_one
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.zookeeper.client.acl, open
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.execution.failover-strategy, region
2019-09-05 21:38:02,698 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.port, 8081
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.address, master,slave03
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.bind-port, 8080-8090
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: rest.bind-address, master,slave03
2019-09-05 21:38:02,699 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: web.submit.enable, false
2019-09-05 21:38:02,842 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting StandaloneSessionClusterEntrypoint.
2019-09-05 21:38:02,842 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Install default filesystem.
2019-09-05 21:38:02,877 INFO  org.apache.flink.core.fs.FileSystem - Hadoop is not in the classpath/dependencies. The extended set of supported File Systems via Hadoop is not available.
2019-09-05 21:38:02,903 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Install security context.
2019-09-05 21:38:02,914 INFO  org.apache.flink.runtime.security.modules.HadoopModuleFactory - Cannot create Hadoop Security Module because Hadoop cannot be found in the Classpath.
2019-09-05 21:38:02,926 INFO  org.apache.flink.runtime.security.SecurityUtils - Cannot install HadoopSecurityContext because Hadoop cannot be found in the Classpath.
2019-09-05 21:38:02,927 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Initializing cluster services.
2019-09-05 21:38:03,430 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Trying to start actor system at master:0
2019-09-05 21:38:04,268 INFO  akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2019-09-05 21:38:04,314 INFO  akka.remote.Remoting - Starting remoting
2019-09-05 21:38:04,566 INFO  akka.remote.Remoting - Remoting started; listening on addresses :[akka.tcp://flink@master:36882]
2019-09-05 21:38:04,674 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Actor system started at akka.tcp://flink@master:36882
2019-09-05 21:38:04,701 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Shutting StandaloneSessionClusterEntrypoint down with application status FAILED. Diagnostics java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
	at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:119)
	at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:92)
	at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:120)
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:292)
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:257)
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:202)
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:164)
	at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:163)
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:501)
	at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:65)
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:447)
	at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:359)
	at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
	at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:116)
	... 10 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
	at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58)
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:443)
	... 13 more
.
2019-09-05 21:38:04,708 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService - Stopping Akka RPC service.
2019-09-05 21:38:04,738 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
2019-09-05 21:38:04,738 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
2019-09-05 21:38:04,765 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
2019-09-05 21:38:04,815 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService - Stopped Akka RPC service.
2019-09-05 21:38:04,816 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:182)
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:501)
	at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:65)
Caused by: java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
	at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:119)
	at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:92)
	at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:120)
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:292)
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:257)
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:202)
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:164)
	at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:163)
	... 2 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:447)
	at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:359)
	at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
	at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:116)
	... 10 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
	at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58)
	at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:443)
	... 13 more
```
##### Environment variables already configured (/etc/profile):
```
export JAVA_HOME=/usr/java/jdk1.8.0_221-amd64
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:/lib
export PATH=$PATH:$JAVA_HOME/bin:.
export ZOOKEEPER_HOME=/opt/software/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin:.
export HADOOP_HOME=/usr/hadoop/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:.
export PYTHON_HOME=/usr/local/python3
export PATH=$PATH:$PYTHON_HOME/bin:.
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:/usr/local/scala/bin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_HOME=$HADOOP_HOME
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:/usr/local/lib:/usr/hadoop/hadoop/lib/native
```
##### flink-conf.yaml HA settings:
```
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: master:2181,slave02:2181,slave03:2181
high-availability.zookeeper.path.root: /flink
high-availability.cluster-id: /cluster_one
```
##### Because the log contains the lines below, I suspect a problem with the environment variables or the Hadoop dependency path:
```
2019-09-06 11:44:39,820 INFO  org.apache.flink.core.fs.FileSystem - Hadoop is not in the classpath/dependencies. The extended set of supported File Systems via Hadoop is not available.
……
2019-09-06 11:44:41,943 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Shutting StandaloneSessionClusterEntrypoint down with application status FAILED. Diagnostics java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
……
2019-09-06 11:44:42,075 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not start cluster entrypoint StandaloneSessionClusterEntrypoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
……
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
```
##### Next, I added HDFS settings to flink-conf.yaml:
```
fs.hdfs.hadoopconf: /usr/hadoop/hadoop/etc/hadoop
fs.hdfs.hdfsdefault: /usr/hadoop/hadoop/etc/hadoop/hdfs-default.xml
fs.hdfs.hdfssite: /usr/hadoop/hadoop/etc/hadoop/hdfs-site.xml
```
##### The Flink cluster still fails to start, and the log content is no different from before.
##### I am asking the community's experts: if you have run into a similar situation, is there a fix?
##### Many thanks.
##### PS: the flink-on-yarn deployment has not been tried yet.
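One hedged pointer that matches the "Hadoop is not in the classpath/dependencies" lines: starting with Flink 1.9 the distribution no longer bundles Hadoop, and the `fs.hdfs.*` keys do not pull it in. The usual remedies are either exporting the Hadoop classpath before starting the cluster, or dropping a matching pre-bundled Hadoop jar into flink/lib:

```sh
# Option 1: make the existing Hadoop install visible to Flink (on every node,
# e.g. added to /etc/profile) before running bin/start-cluster.sh:
export HADOOP_CLASSPATH=$(hadoop classpath)
bin/start-cluster.sh

# Option 2: place a flink-shaded-hadoop-2-uber-*.jar matching the Hadoop
# version into flink/lib/ on every node, then restart the cluster.
```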
On Linux, the device address that i2cdetect lists on the I2C bus does not match the I2C slave address in the device's datasheet
I have brought up Linux on a Zynq-based board and attached a MAX31730 temperature-monitoring chip over I2C. According to the datasheet, its I2C slave address is as shown in the figure:
![图片说明](https://img-ask.csdn.net/upload/201909/18/1568795870_312806.png)
However, when I run `i2cdetect -y -r 1` on the board, the device shows up at address 0x50. Why is that? (PS: the 0x32 in the scan is a different device and can be ignored.)
Whether or not I add a compatible string in the dts, or change the reg property, i2cdetect still reports 0x50.
![图片说明](https://img-ask.csdn.net/upload/201909/18/1568796173_148921.jpg)
Could anyone explain what causes this?
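A common cause of exactly this symptom (hedged, since the screenshot is not visible here): many datasheets quote the 8-bit address byte with the R/W bit already appended, while i2cdetect and the Linux kernel always use the 7-bit address. If the datasheet table lists, say, 0xA0 as the write address, the kernel's view of the same device is 0xA0 >> 1 = 0x50, which is exactly what the scan reports. Also, i2cdetect probes the wire directly, so nothing in the dts (compatible, reg) can change what it prints. A minimal user-space sanity check, assuming bus 1 as in the scan and a hypothetical register offset 0x00:
```
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);   /* bus 1, as in i2cdetect -y -r 1 */
    if (fd < 0) { perror("open"); return 1; }

    /* The kernel expects the 7-bit address: 8-bit 0xA0 from a datasheet >> 1 == 0x50 */
    if (ioctl(fd, I2C_SLAVE, 0x50) < 0) { perror("ioctl"); close(fd); return 1; }

    unsigned char reg = 0x00;              /* hypothetical register offset */
    unsigned char val = 0;
    if (write(fd, &reg, 1) == 1 && read(fd, &val, 1) == 1)
        printf("reg 0x%02x = 0x%02x\n", reg, val);

    close(fd);
    return 0;
}
```
If this is the case, the reg property in the dts should also carry the 7-bit value 0x50; the dts only controls which driver binds, which is why changing it made no difference to the scan.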
TCP/IP networking: my hand-written client sends a byte sequence to Modbus Slave, but Modbus Slave receives nothing
The problem is as stated in the title. First, my goal: I want the client to send the byte sequence {0,0,0,0,0,6,1,6,0,0,0,1}, and the corresponding Modbus slave should then set register 0 to 1. Right now Modbus Slave not only takes no action, its Communication window also captures nothing at all.
The client code is pasted below. It uses the local machine's IP address and port 6530.
```
#include <winsock2.h>
#include <windows.h>
#include <iostream>
#pragma comment(lib, "ws2_32.lib")
using namespace std;

int main()
{
    // Initialize the Winsock library, requesting version 2.2
    WSADATA my_wsaData;
    if (WSAStartup(MAKEWORD(2, 2), &my_wsaData) != 0)
        return 1; // WSAStartup returns 0 on success
    if (LOBYTE(my_wsaData.wVersion) != 2 || HIBYTE(my_wsaData.wVersion) != 2)
    {
        WSACleanup();
        return 1;
    }

    // Modbus TCP "write single register" request: set register 0 to 1
    char my_sendbuf[12] = {0, 0, 0, 0, 0, 6, 1, 6, 0, 0, 0, 1};

    // Create the TCP socket and fill in the server address
    SOCKET my_socket = socket(AF_INET, SOCK_STREAM, 0);
    SOCKADDR_IN my_socket_address;
    my_socket_address.sin_addr.S_un.S_addr = inet_addr("172.16.66.68");
    my_socket_address.sin_family = AF_INET;
    my_socket_address.sin_port = htons(6530);

    // Note: my first version also called bind() here with the server's own
    // address, which is wrong for a client; connect() picks a local port itself.
    if (connect(my_socket, (SOCKADDR*)&my_socket_address, sizeof(SOCKADDR)) == 0)
        cout << "connect the server correct!" << endl;
    else
        cout << "connect the server failed!" << endl;

    while (1)
    {
        Sleep(1000);
        send(my_socket, my_sendbuf, sizeof(my_sendbuf), 0); // 12 bytes each time
        cout << "the message has sent correctly!" << endl;
    }

    closesocket(my_socket); // unreachable in this test loop, kept for form
    WSACleanup();
    return 0;
}
```
When I use a different network listening tool instead, it does receive this exact message. Screenshot below:
![图片说明](https://img-ask.csdn.net/upload/201908/01/1564650277_511146.png)
Why would Modbus Slave not receive this data? I wondered whether the char array I send might not match the numeric format Modbus expects. I'm still a beginner feeling my way across the river, so please don't hold back. Many thanks.

Update 1: I seem to have found something. The other network debugging assistant I use is network_debug_tool. The 12-element array I send is typed char or UINT8, both 1 byte wide, so the wire rate should be about 12 bytes per second, yet the tool's server-side RX speed reads 24 bps while receiving it. If I use network_debug_tool itself to simulate the client and send the same 12 bytes to Modbus Slave, its TX speed also reads 24 bps, and that frame does get a response from the slave. So I considered changing my array to UINT16 to see whether Modbus Slave would respond.

Update 2: With UINT16 and UINT32 it still reads 24 bps across the board, so the figure seems unrelated to the element type. More telling: with uint16 elements, the simulated server receives "0,0,0,0,0,0,0,0,0,0,6,0" instead of the "0,0,0,0,0,6,1,6,0,0,0,1" I meant to send. In hindsight that is exactly what must happen: send() is still told to transmit 12 bytes, so only the first six 16-bit elements {0,0,0,0,0,6} go out, serialized little-endian as 00 00 00 00 00 00 00 00 00 00 06 00. So the element type clearly changes the bytes on the wire, but the bps readout is useless for judging it. Unfortunately I have no packet-capture tool here, so I cannot see exactly what my program puts on the wire.
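For reference, the 12 bytes being sent do form a well-formed Modbus TCP "Write Single Register" (function 06) request, so the framing itself is unlikely to be the problem. A hedged breakdown (my annotation, not from the original post):
```
#include <stdio.h>

/* Modbus TCP ADU for "write single register": the same 12 bytes the client sends */
static const unsigned char frame[12] = {
    0x00, 0x00, /* transaction identifier (echoed back by the slave) */
    0x00, 0x00, /* protocol identifier: 0 = Modbus                   */
    0x00, 0x06, /* length field: 6 more bytes follow                 */
    0x01,       /* unit identifier: slave ID 1                       */
    0x06,       /* function code 06 = write single register          */
    0x00, 0x00, /* register address 0x0000                           */
    0x00, 0x01, /* register value 0x0001                             */
};

int main(void)
{
    for (int i = 0; i < 12; i++)
        printf("%02X ", frame[i]);
    printf("\n");
    return 0;
}
```
Given that a generic listener on the same port does see the bytes, the first things to double-check are on the Modbus Slave side: that it is in Modbus TCP mode (not RTU over serial), actually listening on port 6530 rather than the protocol default 502, and configured with unit/slave ID 1.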
Hadoop 0.20.2 fully distributed setup fails, please help
All the machines can reach each other over ping and ssh, but the master cannot start the DataNode on the slaves. Running start-all on the master (stop-all had already been run beforehand) brings up only one of the two slaves' DataNodes; for the other one the script just reports "hadoop datanode running". If I run stop-all and then start again, I get a message saying the log file cannot be created.
![图片说明](https://img-ask.csdn.net/upload/201904/03/1554256773_482506.png) ![图片说明](https://img-ask.csdn.net/upload/201904/03/1554256779_618982.png) ![图片说明](https://img-ask.csdn.net/upload/201904/03/1554256785_786839.png) ![图片说明](https://img-ask.csdn.net/upload/201904/03/1554256801_443202.png)
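Two hedged guesses based on those messages, assuming a default 0.20.2 layout. "hadoop datanode running" when no DataNode is actually up usually means a stale pid file under /tmp left by an earlier unclean stop, and "log cannot be created" usually means the Hadoop user cannot write to the logs directory. Something like this on the misbehaving slave (user name and paths are placeholders):
```
# On the problem slave:
jps                                       # is a DataNode process really there?
ls /tmp/hadoop-*-datanode.pid             # a stale pid file makes the scripts say "running"
rm /tmp/hadoop-*-datanode.pid             # remove only if jps shows no DataNode

ls -ld $HADOOP_HOME/logs                  # "log cannot be created" => check ownership
chown -R <hadoop-user> $HADOOP_HOME/logs  # <hadoop-user> is a placeholder

# Then start the daemon locally and read its log directly:
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode
tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
```
If the DataNode then dies on its own, the tail of its log will say why; a namespaceID mismatch after reformatting the NameNode is another frequent culprit in this version.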
Connecting to Redis fails with: Can't connect to master: redis://172.17.111.181:7033 with slot ranges: [[10923-16383]]
1. Environment:
Spring Boot 2 + Redisson 3.8.2 + Redis 4.0.2 in cluster mode

2. Redisson is configured via YAML as follows:
```
clusterServersConfig:
  idleConnectionTimeout: 10000
  pingTimeout: 1000
  connectTimeout: 10000
  timeout: 3000
  retryAttempts: 3
  retryInterval: 1500
  failedSlaveReconnectionInterval: 3000
  failedSlaveCheckInterval: 3000
  password: null
  subscriptionsPerConnection: 5
  clientName: null
  loadBalancer: !<org.redisson.connection.balancer.RoundRobinLoadBalancer> {}
  slaveSubscriptionConnectionMinimumIdleSize: 1
  slaveSubscriptionConnectionPoolSize: 10
  slaveConnectionMinimumIdleSize: 3
  slaveConnectionPoolSize: 7
  masterConnectionMinimumIdleSize: 3
  masterConnectionPoolSize: 7
  readMode: "MASTER_SLAVE"
  nodeAddresses:
  - "redis://39.106.*.*:7033"
  - "redis://39.106.*.*:7032"
  - "redis://39.106.*.*:7031"
  - "redis://39.106.*.*:7034"
  - "redis://39.106.*.*:7035"
  - "redis://39.106.*.*:7036"
  scanInterval: 1000
threads: 0
nettyThreads: 0
codec: !<org.redisson.codec.JsonJacksonCodec> {}
transportMode: "NIO"
```

**3. Error reported when connecting:**
```
2019-07-30 12:26:31.142  INFO 8592 --- [isson-netty-1-5] o.r.cluster.ClusterConnectionManager     : slaves: [redis://39.106.*.*:7034] added for slot ranges: [[0-5460]]
2019-07-30 12:26:31.142  INFO 8592 --- [isson-netty-1-4] o.r.cluster.ClusterConnectionManager     : slaves: [redis://172.17.111.181:7035] added for slot ranges: [[5461-10922]]
2019-07-30 12:26:31.179  INFO 8592 --- [isson-netty-1-3] o.r.c.pool.MasterPubSubConnectionPool    : 1 connections initialized for 39.106.*.*/39.106.*.*:7032
2019-07-30 12:26:31.179  INFO 8592 --- [isson-netty-1-1] o.r.c.pool.MasterPubSubConnectionPool    : 1 connections initialized for 39.106.*.*/39.106.*.*:7031
2019-07-30 12:26:31.186  INFO 8592 --- [isson-netty-1-5] o.r.cluster.ClusterConnectionManager     : master: redis://39.106.*.*:7032 added for slot ranges: [[5461-10922]]
2019-07-30 12:26:31.186  INFO 8592 --- [isson-netty-1-5] o.r.c.pool.MasterConnectionPool          : 3 connections initialized for 39.106.*.*/39.106.*.*:7032
2019-07-30 12:26:31.187  INFO 8592 --- [isson-netty-1-8] o.r.cluster.ClusterConnectionManager     : master: redis://39.106.*.*:7031 added for slot ranges: [[0-5460]]
2019-07-30 12:26:31.187  INFO 8592 --- [isson-netty-1-8] o.r.c.pool.MasterConnectionPool          : 3 connections initialized for 39.106.*.*/39.106.*.*:7031
2019-07-30 12:26:31.208  INFO 8592 --- [isson-netty-1-2] o.r.c.pool.PubSubConnectionPool          : 1 connections initialized for 39.106.*.*/39.106.*.*:7034
2019-07-30 12:26:31.209  INFO 8592 --- [isson-netty-1-3] o.r.connection.pool.SlaveConnectionPool  : 3 connections initialized for 39.106.*.*/39.106.*.*:7034
2019-07-30 12:26:41.063 ERROR 8592 --- [isson-netty-1-7] o.r.cluster.ClusterConnectionManager     : Can't connect to master: redis://172.17.111.181:7033 with slot ranges: [[10923-16383]]
2019-07-30 12:26:43.264  WARN 8592 --- [           main] o.s.w.c.s.GenericWebApplicationContext   : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'redisClusterService': Unsatisfied dependency expressed through field 'redissonClient'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'redisson' defined in class path resource [com/qiriver/tools/monkey/distributed/redis/RedisConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.redisson.api.RedissonClient]: Factory method 'redisson' threw exception; nested exception is org.redisson.client.RedisConnectionException: Not all slots are covered! Only 10923 slots are avaliable
2019-07-30 12:26:43.264  INFO 8592 --- [           main] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
```
**7031, 7032 and 7033 are the master nodes. The node that cannot be connected is whichever one appears first in the YAML config: if I move 7031 to the first line, the error becomes "can't connect" to that node instead.** The *.* parts are my server's IP. Has anyone run into this problem?
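For what it's worth, 172.17.111.181 is a private address: Redisson learns the cluster topology from CLUSTER NODES, which reports whatever address each node announced when the cluster was formed, and here at least the nodes on 7033 and 7035 registered their intranet IPs, which a client outside that network cannot reach. Redis 4.0 added announce directives for exactly this NAT/public-IP situation; a hedged redis.conf sketch for the 7033 node (the public IP is a placeholder, and each node needs its own values):
```
# redis.conf on the node serving port 7033
cluster-announce-ip 39.106.x.x
cluster-announce-port 7033
cluster-announce-bus-port 17033
```
After restarting the nodes, `redis-cli cluster nodes` should list only reachable addresses, and the "Not all slots are covered! Only 10923 slots are avaliable" failure (16384 total slots minus the 5461 slots behind the unreachable master) should disappear. Newer Redisson versions also offer a client-side NAT mapper, but I am not certain it exists in 3.8.2, so fixing the announced addresses on the server side is the safer route.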
On an embedded board using the libmodbus library, why do two frames go out at the same time when polling devices with different slave IDs?
Help, everyone! I need to access Modbus devices from a Linux development board, using libmodbus-3.1.6 as the library. But when polling different devices, two request frames are sent at the same time, so the replies cannot be read back correctly. Has anyone run into this, and how did you solve it? Many thanks!
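In case it helps: libmodbus request calls are blocking, i.e. modbus_read_registers() writes the request and then waits for the matching reply (or a timeout) before returning, so a single context polled from a single thread cannot emit two frames back to back. Two frames appearing "at the same time" usually means two threads, or two contexts opened on the same serial port, are interleaving. A minimal sketch of strictly sequential polling through one shared context, assuming an RTU link on /dev/ttyS0 at 9600 8N1 and slave IDs 1 and 2 (all placeholders):
```
#include <errno.h>
#include <stdio.h>
#include <modbus.h>

int main(void)
{
    /* One context for the whole bus; never share it across threads without a lock */
    modbus_t *ctx = modbus_new_rtu("/dev/ttyS0", 9600, 'N', 8, 1);
    if (ctx == NULL || modbus_connect(ctx) == -1) {
        fprintf(stderr, "connect: %s\n", modbus_strerror(errno));
        return 1;
    }

    int slave_ids[] = {1, 2};
    uint16_t regs[4];

    for (int i = 0; i < 2; i++) {
        modbus_set_slave(ctx, slave_ids[i]);             /* retarget the next request  */
        int rc = modbus_read_registers(ctx, 0, 4, regs); /* blocks until reply/timeout */
        if (rc == -1)
            fprintf(stderr, "slave %d: %s\n", slave_ids[i], modbus_strerror(errno));
        else
            printf("slave %d: reg0=%u\n", slave_ids[i], regs[0]);
    }

    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}
```
If your current code creates one modbus_new_rtu() context per device on the same port, collapsing them into a single context like this, or serializing every request/response pair with a mutex, should stop the overlapping frames.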
MySQL master-slave replication / master-master hot-standby question
My test environment:
master-1: 172.16.1.50
master-2: 172.16.1.51
slave-1: 172.16.1.53

The two masters replicate master-master, and slave-1 replicates from each of them. Since, as I recall, classic MySQL replication cannot attach one slave to multiple masters, I run a multi-instance setup on slave-1 with ports 3306 and 3307: the 3306 instance pairs with master-2, and the 3307 instance with master-1.

The problem: when I create a database `data` on master-1, master-2 and slave-1's 3307 instance both replicate it, but slave-1's 3306 instance does not sync. Symmetrically, when I create a database `data1` on master-2, master-1 and slave-1's 3306 instance replicate it, but the 3307 instance does not. Why is that? Hoping someone can clear this up, thanks a lot!
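This looks like the textbook symptom of log_slave_updates being off, which is the default in MySQL 5.x: master-2 applies the events it receives from master-1 but does not write them into its own binary log, so the slave instance that replicates from master-2 (the 3306 instance) never sees them, and the same happens in the other direction. A hedged my.cnf sketch for the two masters (server-ids are examples; both need binary logging on):
```
# my.cnf on master-1; master-2 is identical except server-id = 2
[mysqld]
server-id         = 1
log-bin           = mysql-bin
log-slave-updates = 1   # re-log replicated events so downstream slaves can see them
```
Restart both masters, then restart the slave threads in both instances on slave-1 (STOP SLAVE; START SLAVE;). Note that distinct server-ids on every server are what prevent a re-logged event from circling the master-master ring forever, so double-check those as well.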