Getting the leader's IP in a ZooKeeper cluster

Is it possible to get the IP of the leader of a ZooKeeper cluster using Java? How should this be done?

1 answer
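The simplest way is to ask each server directly: every ZooKeeper server answers the four-letter admin command `srvr` (or `stat`) on its client port, and the reply contains a `Mode:` line that reads `leader`, `follower`, or `standalone`. Probing every server in the ensemble and checking that line yields the leader's IP. A minimal sketch using only plain JDK sockets (the class name and addresses are illustrative, not from this thread):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ZkLeaderFinder {

    /** Returns true if a "srvr" reply reports "Mode: leader". */
    static boolean isLeader(String srvrOutput) {
        for (String line : srvrOutput.split("\n")) {
            if (line.trim().equalsIgnoreCase("Mode: leader")) {
                return true;
            }
        }
        return false;
    }

    /** Sends the four-letter command "srvr" to one server and returns the raw reply. */
    static String querySrvr(String host, int port) throws Exception {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 3000);
            OutputStream out = socket.getOutputStream();
            out.write("srvr".getBytes(StandardCharsets.US_ASCII));
            out.flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
            StringBuilder reply = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                reply.append(line).append('\n');
            }
            return reply.toString();
        }
    }

    public static void main(String[] args) throws Exception {
        // Pass your ensemble's addresses on the command line, e.g.:
        //   java ZkLeaderFinder 192.168.1.101 192.168.1.102 192.168.1.103
        for (String host : args) {
            if (isLeader(querySrvr(host, 2181))) {
                System.out.println("leader is " + host);
            }
        }
    }
}
```

Note that from ZooKeeper 3.5 onward, four-letter words must be whitelisted with `4lw.commands.whitelist=srvr` in zoo.cfg; on the 3.4.x releases discussed in these threads they are enabled by default.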

Related questions
How to get the leader's IP in a ZooKeeper cluster and use it

I have a requirement to call the leader of a ZooKeeper cluster by its IP. How can I get the leader's IP in Java? Any pointers appreciated.

ZooKeeper cluster becomes unusable after one node goes down

I set up a 3-host ZooKeeper cluster with VMware, and data synchronization already works (data created on one host is visible from a client started on any other host). But after `kill -9` on one node, the others stopped working.

```
[root@wqb99 ~]# /itcast/zookeeper-3.4.9/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /itcast/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: leader
[root@wqb99 ~]# jps
2739 Jps
2531 QuorumPeerMain
[root@wqb99 ~]# kill -9 2531
[root@wqb99 ~]# jps
2749 Jps
[root@wqb99 ~]#
```

The other nodes then appear to be down as well:

(1)
```
[root@wqb88 ~]# /itcast/zookeeper-3.4.9/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /itcast/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
[root@wqb88 ~]# /itcast/zookeeper-3.4.9/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /itcast/zookeeper-3.4.9/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
[root@wqb88 ~]#
```

(2)
```
[root@wqb66 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /itcast/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
[root@wqb66 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /itcast/zookeeper-3.4.9/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
[root@wqb66 bin]#
```

ZooKeeper distributed cluster: problems after cutting the leader's network

Our company uses zk to manage a distributed cluster, and we have run into the following problem. Suppose zk is the server side with three nodes: zk1 (leader), zk2 (follower), zk3 (follower). The client side also has three nodes, C1, C2, C3, each paired with one of the zk nodes. When zk1's network is cut, C1's client loses service, which is expected. Because the leader is down, zk re-elects a leader; if zk2 becomes the leader, c2 and c3 will then read node data from zk2. But zk2 and zk3 are still synchronizing at that moment and apparently cannot serve requests, which causes c2 or c3 to kill themselves. Is there any way around this, for example letting the client sense whether zk is ready to serve while it is synchronizing, and only fetch node data once zk can actually serve? Since zk itself is Java code while we work in C/C++, we have only touched part of the zk API. Any help from the experts is appreciated.

ZooKeeper cluster setup problem

```
2016-10-09 22:36:38,628 [myid:3] - INFO [main:QuorumPeer@1005] - initLimit set to 10
2016-10-09 22:36:38,646 [myid:3] - INFO [Thread-1:QuorumCnxManager$Listener@504] - My election bind port: /123.207.11.*:3883
2016-10-09 22:36:38,647 [myid:3] - ERROR [/123.207.11.*:3883:QuorumCnxManager$Listener@517] - Exception while listening
java.net.BindException: Cannot assign requested address
    at java.net.PlainSocketImpl.socketBind(Native Method)
    at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:376)
    at java.net.ServerSocket.bind(ServerSocket.java:376)
    at java.net.ServerSocket.bind(ServerSocket.java:330)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:507)
2016-10-09 22:36:38,654 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2183:QuorumPeer@714] - LOOKING
2016-10-09 22:36:38,656 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2183:FastLeaderElection@815] - New election. My id = 3, proposed zxid=0x0
2016-10-09 22:36:38,663 [myid:3] - WARN [WorkerSender[myid=3]:QuorumCnxManager@382] - Cannot open channel to 1 at election address /123.207.18.*:3881
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
    at java.lang.Thread.run(Thread.java:745)
2016-10-09 22:36:38,667 [myid:3] - WARN [WorkerSender[myid=3]:QuorumCnxManager@382] - Cannot open channel to 2 at election address /119.29.130.*:3882
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
    at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
    at java.lang.Thread.run(Thread.java:745)
2016-10-09 22:36:38,668 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection@597] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2016-10-09 22:36:38,870 [myid:3] - WARN [QuorumPeer[myid=3]/0.0.0.0:2183:QuorumCnxManager@382] - Cannot open channel to 2 at election address /119.29.*3882
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:402)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:840)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:762)
2016-10-09 22:36:38,871 [myid:3] - WARN [QuorumPeer[myid=3]/0.0.0.0:2183:QuorumCnxManager@382] - Cannot open channel to 1 at election address /123.207.*:3881
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:402)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:840)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:762)
2016-10-09 22:36:38,872 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2183:FastLeaderElection@849] - Notification time out: 400
2016-10-09 22:36:39,273 [myid:3] - WARN [QuorumPeer[myid=3]/0.0.0.0:2183:QuorumCnxManager@382] - Cannot open channel to 2 at election address /119.29.*:3882
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:402)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:840)
```

What on earth is causing this? My zoo.cfg is:

```
server.1=123.207.18.*:2881:3881
server.2=119.29.130.*:2882:3882
server.3=123.207.11.*2883:3883
```

myid is configured and the firewall is turned off. These are real servers on Tencent Cloud, and the Tencent Cloud security group is configured as well. Why does it keep erroring? Could it be a problem with the Tencent Cloud servers, and is there any way to fix it? Please help.

How to connect a Java project to ZooKeeper

A Java project is deployed on one server, and a ZooKeeper cluster runs on other servers. The deployed project now needs to use the leader of that cluster. How should this be done? Thanks for any answers.

How does ZooKeeper bring the remaining nodes up to date after a majority commit?

During ZooKeeper's prepare/commit exchange between the leader and the followers, the proposal counts as agreed once half the nodes acknowledge, and the leader commits. How do the remaining nodes then get their data synchronized to the latest state?
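For reference, the leader does not leave the lagging minority behind: a follower that missed commits catches up when it next talks to the leader. During that handshake the leader compares the follower's last zxid with its own committed log and either replays the missing transactions (DIFF), tells the follower to drop transactions the quorum never committed (TRUNC), or transfers a full snapshot (SNAP). A simplified sketch of that decision, under my own method names rather than ZooKeeper's actual internals:

```java
public class ZabSync {

    enum SyncMode { DIFF, TRUNC, SNAP }

    /**
     * Pick how a reconnecting follower is brought up to date, given the range
     * of transactions [leaderMinZxid, leaderMaxZxid] still in the leader's log.
     */
    static SyncMode chooseSync(long followerZxid, long leaderMinZxid, long leaderMaxZxid) {
        if (followerZxid > leaderMaxZxid) {
            return SyncMode.TRUNC;  // follower holds uncommitted extras: truncate them
        }
        if (followerZxid >= leaderMinZxid) {
            return SyncMode.DIFF;   // within the retained log: replay the missing commits
        }
        return SyncMode.SNAP;       // too far behind: ship a full snapshot
    }

    public static void main(String[] args) {
        System.out.println(chooseSync(5, 3, 10));   // a slightly stale follower gets DIFF
        System.out.println(chooseSync(1, 3, 10));   // a long-dead follower gets SNAP
    }
}
```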

Setting up a hadoop+zookeeper cluster: initializing zkfc fails, cannot connect to the worker nodes; where is the problem?

I'm new to this, learning to set up a hadoop+zookeeper cluster: hadoop 3.2.1, zookeeper 3.6.0, five VirtualBox VMs running CentOS 7 with GNOME, networking set to host-only. The five nodes are nna (primary), nns (standby), and three workers dn1, dn2, dn3. hosts contains:

```
192.168.56.101 nna
192.168.56.102 nns
192.168.56.103 dn1
192.168.56.104 dn2
192.168.56.105 dn3
```

hostname and the relevant network settings are all updated, the myid and workers files are set, and the five VMs obtain their IPs automatically. zoo.cfg contains:

```
server.1=dn1:2888:3888
server.2=dn2:2888:3888
server.3=dn3:2888:3888
```

In core-site.xml, hdfs-site.xml, and so on, the corresponding ha.zookeeper.quorum value is set to dn1:2181,dn2:2181,dn3:2181, and I have checked for stray spaces and the like. Following the usual order, I start zookeeper first (zkServer.sh status shows 1 leader and 2 followers), then from the primary node nna start the journalnodes on the workers (jps shows they started), then format the namenode on nna, which also succeeds. But formatting zkfc with `hdfs zkfc -formatZK` fails with:

```
2020-04-07 17:43:53,517 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server dn3/<unresolved>:2181. Will not attempt to authenticate using SASL (unknown error)
2020-04-07 17:43:53,517 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server dn3/<unresolved>:2181, unexpected error, closing socket connection and attempting reconnect
java.nio.channels.UnresolvedAddressException
    at java.base/sun.nio.ch.Net.checkAddress(Net.java:139)
    at java.base/sun.nio.ch.SocketChannelImpl.checkRemote(SocketChannelImpl.java:727)
    at java.base/sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:741)
    at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:277)
    at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:287)
    at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1021)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1064)
```

Clearly the node IPs are not being resolved. Could an expert point out which part of the configuration is wrong? Many thanks!

Can the ZooKeeper ensemble for a solr cluster share a VM with the ZooKeeper from learning dubbo?

Is it OK that the ZooKeeper from when I learned dubbo and the ZooKeeper ensemble for my solr cluster (I copied the former three times into /usr/local/solr-cloud, named with suffixes 01, 02, 03) live on the same VM? Assuming it is, I now have a problem. I configured zookeeper0x/data/myid and zookeeper0x/conf/zoo.cfg for all three instances (zookeeper01's zoo.cfg uses port 2181, the same as the earlier dubbo instance, which I am not sure is allowed; zookeeper02 and 03 were given 2182 and 2183). After starting the three with a batch script, only the first one runs, and in standalone mode (not leader or follower); the other two report "It is probably not running".

A question about ZooKeeper split-brain

Suppose a ZooKeeper cluster of 9 servers: a b c d e f g h i, with a initially the leader. A network problem then splits it into three partitions: (a b), (c d), (e f g h i). The (e f g h i) group should re-elect a leader and notify the clients; but what state are the (a b) and (c d) groups in at that point? And once the network fully recovers, how do the 9 nodes reconcile?
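A group can only elect a leader while it holds a strict majority of the full configured ensemble, so in this scenario (a b) and (c d) stay in the LOOKING state and refuse client requests, while (e f g h i) keeps serving. When the network heals, the minority nodes discover the existing leader, sync from it, and rejoin as followers. The majority rule itself is just this arithmetic (a standalone illustration, not a ZooKeeper API):

```java
public class QuorumMath {

    /** A partition can elect a leader only with a strict majority of the full ensemble. */
    static boolean canElectLeader(int ensembleSize, int partitionSize) {
        return partitionSize > ensembleSize / 2;
    }

    public static void main(String[] args) {
        int ensemble = 9; // servers a..i
        System.out.println("(a b)       -> " + canElectLeader(ensemble, 2)); // false
        System.out.println("(c d)       -> " + canElectLeader(ensemble, 2)); // false
        System.out.println("(e f g h i) -> " + canElectLeader(ensemble, 5)); // true
    }
}
```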

How to integrate ZooKeeper into a Java program

All the material online covers installing and testing ZooKeeper, which I have already done. Now I want my Java program to use the IP address of the leader elected by the ZooKeeper cluster. How should this requirement be implemented in Java? Thanks in advance!

Some very basic ZooKeeper questions

I'm not a CS major, a complete beginner; I recently started looking at ZooKeeper and have never touched anything like it before, so I have some very basic questions. Question 1: I have ZooKeeper installed on Windows with a pseudo-cluster of three instances configured and running:

```
server.1=127.0.0.1:2887:3887
server.2=127.0.0.1:2888:3888
server.3=127.0.0.1:2889:3889
```

Which of these three servers is the leader, which the follower, and which the observer? Or how would I configure the three so they become leader, follower, and observer respectively? Or are they none of these? Question 2: Suppose I have three applications, app1, app2, and app3, which talk to each other over RPC or REST. If I want to deploy these three applications "onto" ZooKeeper, how do I do that? Do all three go onto one server, or one per server, or something else? Concretely:
1. If all three are deployed on the same server, which server should it be, and what does ZooKeeper do if that server goes down?
2. If they are deployed on different servers, what does ZooKeeper do when one of those servers goes down?
Some of the assumptions above may well be wrong; please bear with a beginner.
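On question 1: with the configuration shown above you cannot pin which server becomes leader or follower; those two roles are assigned automatically by election at startup, and running `zkServer.sh status` against each instance shows which role it ended up with. An observer, by contrast, is opt-in via configuration. A sketch, assuming the same ports as in the question:

```
# in every server's zoo.cfg, tag server 3 as an observer:
server.1=127.0.0.1:2887:3887
server.2=127.0.0.1:2888:3888
server.3=127.0.0.1:2889:3889:observer
```

and additionally, in server 3's own zoo.cfg:

```
peerType=observer
```

An observer receives all updates but does not vote, so it never becomes leader.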

hadoop cluster: hdfs dfs -ls / lists the wrong directory

I set up a hadoop cluster. `hdfs dfs -ls /` lists the root of the local filesystem; only `hdfs dfs -ls hdfs://servicename/` lists the directories actually on HDFS. What could be the reason? Directories created by hive also end up on the local filesystem. The cluster is configured as follows.

Cluster plan:

```
hostname  IP               installed software      running processes
hadoop01  192.168.175.129  jdk, hadoop             NameNode, DFSZKFailoverController(zkfc)
hadoop02  192.168.175.127  jdk, hadoop             NameNode, DFSZKFailoverController(zkfc)
hadoop03  192.168.175.126  jdk, hadoop             ResourceManager
hadoop04  192.168.175.125  jdk, hadoop             ResourceManager
hadoop05  192.168.175.124  jdk, hadoop, zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain
hadoop06  192.168.175.123  jdk, hadoop, zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain
hadoop07  192.168.175.122  jdk, hadoop, zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain
windows: NLB  LINUX: LVS
```

1. After installing the Linux VMs, set the VM network mode to host-only, then assign IPs (hadoop01 as an example):

```
DEVICE="eth0"
BOOTPROTO="static"       ###
HWADDR="00:0C:29:3C:BF:E7"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="ce22eeca-ecde-4536-8cc2-ef0dc36d4a8c"
IPADDR="192.168.175.129" ###
NETMASK="255.255.255.0"  ###
GATEWAY="192.168.175.1"  ###
```

2. Change the hostname: vim /etc/sysconfig/network

```
NETWORKING=yes
HOSTNAME=hadoop01        ###
```

3. Disable the firewall:

```
# check firewall status
service iptables status
# stop the firewall
service iptables stop
# check whether the firewall starts on boot
chkconfig iptables --list
# disable the firewall on boot
chkconfig iptables off
```

4. Configure passwordless login:

```
# generate an ssh key pair: go to my home directory
cd ~/.ssh
ssh-keygen -t rsa    (press Enter four times)
# this creates two files: id_rsa (private key) and id_rsa.pub (public key)
# copy the public key to the machines to log into without a password
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# or, if it reports "ssh-copy-id: ERROR: No identities found" (the public key path
# was not found), pass the path explicitly with -i:
ssh-copy-id -i ~/.ssh/id_rsa.pub user@remote_ip
```

5. Host/IP mappings (/etc/hosts on every machine must contain the full mapping):

```
192.168.175.129 hadoop01
192.168.175.127 hadoop02
192.168.175.126 hadoop03
192.168.175.125 hadoop04
192.168.175.124 hadoop05
192.168.175.123 hadoop06
192.168.175.122 hadoop07
```

6. Configure the Java environment variables in /etc/profile:

```
export JAVA_HOME=/lichangwu/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin
# reload profile
source /etc/profile
```

If a version error appears, vi /etc/selinux/config, set SELINUX=disabled, then reboot the VM.

7. Install zookeeper:
1. Install and configure the zookeeper ensemble (on hadoop05):
1.1 Unpack:

```
tar -zxvf zookeeper-3.4.6.tar.gz -C /lichangwu/
```

1.2 Edit the configuration:

```
cd /lichangwu/zookeeper-3.4.6/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
```

Change dataDir=/lichangwu/zookeeper-3.4.6/tmp and append at the end:

```
server.1=hadoop05:2888:3888
server.2=hadoop06:2888:3888
server.3=hadoop07:2888:3888
```

Save and quit, then create the tmp folder and an empty myid file, and write the ID into it:

```
mkdir /lichangwu/zookeeper-3.4.6/tmp
touch /lichangwu/zookeeper-3.4.6/tmp/myid
echo 1 > /lichangwu/zookeeper-3.4.6/tmp/myid
```

1.3 Copy the configured zookeeper to the other nodes (first create a lichangwu directory at the root of hadoop06 and hadoop07: mkdir /lichangwu):

```
scp -r /lichangwu/zookeeper-3.4.6/ hadoop06:/lichangwu/
scp -r /lichangwu/zookeeper-3.4.6/ hadoop07:/lichangwu/
```

Note: adjust the content of /lichangwu/zookeeper-3.4.6/tmp/myid on hadoop06 and hadoop07:

```
itcast06: echo 2 > /lichangwu/zookeeper-3.4.6/tmp/myid
itcast07: echo 3 > /lichangwu/zookeeper-3.4.6/tmp/myid
```

8. Install and configure the hadoop cluster (operating on hadoop01):
2.1 Unpack:

```
tar -zxvf hadoop-2.4.1.tar.gz -C /lichangwu/
```

2.2 Configure HDFS (in hadoop 2.0 all configuration files are under $HADOOP_HOME/etc/hadoop):

```
# add hadoop to the environment
vim /etc/profile
export JAVA_HOME=/lichangwu/jdk1.7.0_79
export HADOOP_HOME=/lichangwu/hadoop-2.4.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
cd /lichangwu/hadoop-2.4.1/etc/hadoop
```

2.2.1 Edit hadoop-env.sh:

```
export JAVA_HOME=/lichangwu/jdk1.7.0_79
```

2.2.2 Edit core-site.xml:

```
<configuration>
    <!-- set the hdfs nameservice to ns1 -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <!-- hadoop temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/lichangwu/hadoop-2.4.1/tmp</value>
    </property>
    <!-- zookeeper addresses -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
    </property>
</configuration>
```

2.2.3 Edit hdfs-site.xml:

```
<configuration>
    <!-- hdfs nameservice ns1; must match core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <!-- ns1 has two NameNodes, nn1 and nn2 -->
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoop01:9000</value>
    </property>
    <!-- http address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoop01:50070</value>
    </property>
    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoop02:9000</value>
    </property>
    <!-- http address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoop02:50070</value>
    </property>
    <!-- where the NameNode metadata is stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop05:8485;hadoop06:8485;hadoop07:8485/ns1</value>
    </property>
    <!-- where the JournalNodes keep data on local disk -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/lichangwu/hadoop-2.4.1/journal</value>
    </property>
    <!-- enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- failover implementation -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- fencing methods, separated by newlines, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <!-- the sshfence mechanism needs passwordless ssh -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- sshfence timeout -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>
```

2.2.4 Edit mapred-site.xml:

```
<configuration>
    <!-- run mr on yarn -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```

2.2.5 Edit yarn-site.xml:

```
<configuration>
    <!-- enable RM high availability -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- RM cluster id -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <!-- RM names -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- addresses of the RMs -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop03</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop04</value>
    </property>
    <!-- zk ensemble addresses -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```

2.2.6 Edit slaves (slaves specifies the child nodes: since HDFS is started on itcast01 and yarn on itcast03, the slaves file on itcast01 specifies the datanodes and the one on itcast03 specifies the nodemanagers):

```
hadoop05
hadoop06
hadoop07
```

2.2.7 Configure passwordless login:

```
# first configure passwordless login from itcast01 to hadoop02 through hadoop07
# generate a key pair on hadoop01
ssh-keygen -t rsa
# copy the public key to the other nodes, including itself
ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop03
ssh-copy-id hadoop04
ssh-copy-id hadoop05
ssh-copy-id hadoop06
ssh-copy-id hadoop07
# configure passwordless login from hadoop03 to hadoop04 through hadoop07
# generate a key pair on hadoop03
ssh-keygen -t rsa
# copy the public key to the other nodes
ssh-copy-id hadoop04
ssh-copy-id hadoop05
ssh-copy-id hadoop06
ssh-copy-id hadoop07
# note: the two namenodes need passwordless ssh to each other;
# don't forget hadoop02 -> hadoop01
# generate a key pair on hadoop02
ssh-keygen -t rsa
ssh-copy-id -i hadoop01
```

2.4 Copy the configured hadoop to the other nodes:

```
scp -r hadoop-2.4.1/ hadoop02:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop03:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop04:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop05:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop06:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop07:/lichangwu/hadoop-2.4.1/
```

### Note: follow the steps below strictly.

2.5 Start the zookeeper ensemble (on hadoop05, hadoop06, hadoop07):

```
cd /lichangwu/zookeeper-3.4.6/bin/
./zkServer.sh start
# check the status: one leader, two followers
./zkServer.sh status
```

2.6 Start the journalnodes (run on hadoop05, hadoop06, hadoop07):

```
cd /lichangwu/hadoop-2.4.1
sbin/hadoop-daemon.sh start journalnode
# verify with jps: hadoop05, hadoop06, hadoop07 now show a JournalNode process
```

2.7 Format HDFS (run on hadoop01):

```
hdfs namenode -format
```

Formatting generates files under the hadoop.tmp.dir configured in core-site.xml, here /lichangwu/hadoop-2.4.1/tmp; then copy /lichangwu/hadoop-2.4.1/tmp to hadoop02's /lichangwu/hadoop-2.4.1/:

```
scp -r tmp/ hadoop02:/lichangwu/hadoop-2.4.1/
```

2.8 Format ZK (on hadoop01 only):

```
hdfs zkfc -formatZK
```

2.9 Start HDFS (on hadoop01):

```
sbin/start-dfs.sh
```

2.10 Start YARN (##### note #####: run start-yarn.sh on hadoop03; if it does not come up on hadoop04, run start-yarn.sh once more on hadoop04. The namenode and resourcemanager are split across machines for performance, since both consume a lot of resources, so they have to be started on their respective machines):

```
sbin/start-yarn.sh
```

With this, hadoop-2.4.1 is fully configured and can be checked from a browser:

```
http://192.168.175.129:50070   NameNode 'hadoop01:9000' (active)
http://192.168.175.127:50070   NameNode 'hadoop02:9000' (standby)
```

Newbie here, asking for help with a hadoop cluster setup: ZKFC error

After running hdfs zkfc -formatZK:

```
WARNING: Before proceeding, ensure that all HDFS services and failover controllers are stopped!
===============================================
Proceed formatting /hadoop-ha/mycluster? (Y or N) 16/02/26 01:18:56 INFO ha.ActiveStandbyElector: Session connected.
y
16/02/26 01:19:12 INFO ha.ActiveStandbyElector: Recursively deleting /hadoop-ha/mycluster from ZK...
16/02/26 01:19:12 INFO ha.ActiveStandbyElector: Successfully deleted /hadoop-ha/mycluster from ZK.
16/02/26 01:19:12 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
16/02/26 01:19:12 INFO zookeeper.ClientCnxn: EventThread shut down
16/02/26 01:19:12 INFO zookeeper.ZooKeeper: Session: 0x153196bc9790000 closed
```

But ZKFC still did not come up, so I tried:

```
[root@h1 ~]# hadoop-daemon.sh start DFSZKFailoverController
.........
Error: Could not find or load main class DFSZKFailoverController
```

and it still fails!

Spark cluster: slaves report connection refused when running a job, please help!

After setting up a Spark HA cluster, running a job shows the master refusing access. Both slave nodes in the cluster behave this way; please help. ![screenshot](https://img-ask.csdn.net/upload/202003/12/1583997250_677288.png) Below is what the spark log shows:

```
2020-03-12 15:00:13 INFO ZooKeeper:438 - Initiating client connection, connectString=Hadoop01:2181,Hadoop02:2181,Hadoop03:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@54ff5e34
2020-03-12 15:00:13 INFO ClientCnxn:975 - Opening socket connection to server Hadoop01/192.168.128.151:2181. Will not attempt to authenticate using SASL (unknown error)
2020-03-12 15:00:13 INFO ClientCnxn:852 - Socket connection established to Hadoop01/192.168.128.151:2181, initiating session
2020-03-12 15:00:14 INFO ClientCnxn:1235 - Session establishment complete on server Hadoop01/192.168.128.151:2181, sessionid = 0x170cd8aa81b0000, negotiated timeout = 40000
2020-03-12 15:00:14 INFO ConnectionStateManager:228 - State change: CONNECTED
2020-03-12 15:00:16 INFO ZooKeeperLeaderElectionAgent:54 - Starting ZooKeeper LeaderElection agent
2020-03-12 15:00:16 INFO CuratorFrameworkImpl:224 - Starting
2020-03-12 15:00:16 INFO ZooKeeper:438 - Initiating client connection, connectString=Hadoop01:2181,Hadoop02:2181,Hadoop03:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@1a1c9fb4
2020-03-12 15:00:16 INFO ClientCnxn:975 - Opening socket connection to server Hadoop01/192.168.128.151:2181. Will not attempt to authenticate using SASL (unknown error)
2020-03-12 15:00:16 INFO ClientCnxn:852 - Socket connection established to Hadoop01/192.168.128.151:2181, initiating session
2020-03-12 15:00:16 INFO ClientCnxn:1235 - Session establishment complete on server Hadoop01/192.168.128.151:2181, sessionid = 0x170cd8aa81b0001, negotiated timeout = 40000
2020-03-12 15:00:16 INFO ConnectionStateManager:228 - State change: CONNECTED
2020-03-12 15:00:20 INFO ZooKeeperLeaderElectionAgent:54 - We have gained leadership
2020-03-12 15:00:20 INFO Master:54 - I have been elected leader! New state: RECOVERING
2020-03-12 15:00:20 INFO Master:54 - Trying to recover worker: worker-20200311210734-192.168.128.152-53095
2020-03-12 15:00:20 INFO Master:54 - Trying to recover worker: worker-20200311210734-192.168.128.153-51359
2020-03-12 15:00:21 WARN OneWayOutboxMessage:87 - Failed to send one-way RPC.
java.io.IOException: Failed to connect to /192.168.128.152:53095
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:245)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:187)
    at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:198)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: 拒绝连接: /192.168.128.152:53095
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:323)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
Caused by: java.net.ConnectException: 拒绝连接
    ... 11 more
2020-03-12 15:00:21 WARN OneWayOutboxMessage:87 - Failed to send one-way RPC.
java.io.IOException: Failed to connect to /192.168.128.153:51359
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:245)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:187)
    at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:198)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: 拒绝连接: /192.168.128.153:51359
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:323)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
Caused by: java.net.ConnectException: 拒绝连接
    ... 11 more
2020-03-12 15:00:24 INFO Master:54 - Registering worker 192.168.128.152:46027 with 2 cores, 1024.0 MB RAM
2020-03-12 15:00:26 INFO Master:54 - Registering worker 192.168.128.153:59036 with 2 cores, 1024.0 MB RAM
2020-03-12 15:01:21 INFO Master:54 - Removing worker worker-20200311210734-192.168.128.152-53095 on 192.168.128.152:53095
2020-03-12 15:01:21 INFO Master:54 - Telling app of lost worker: worker-20200311210734-192.168.128.152-53095
2020-03-12 15:01:21 INFO Master:54 - Removing worker worker-20200311210734-192.168.128.153-51359 on 192.168.128.153:51359
2020-03-12 15:01:21 INFO Master:54 - Telling app of lost worker: worker-20200311210734-192.168.128.153-51359
2020-03-12 15:01:21 INFO Master:54 - Recovery complete - resuming operations!
2020-03-12 15:05:05 INFO Master:54 - Registering app Spark Pi
2020-03-12 15:05:05 INFO Master:54 - Registered app Spark Pi with ID app-20200312150505-0000
2020-03-12 15:05:05 INFO Master:54 - Launching executor app-20200312150505-0000/0 on worker worker-20200312150020-192.168.128.153-59036
2020-03-12 15:05:16 INFO Master:54 - Received unregister request from application app-20200312150505-0000
2020-03-12 15:05:16 INFO Master:54 - Removing app app-20200312150505-0000
2020-03-12 15:05:16 WARN OneWayOutboxMessage:87 - Failed to send one-way RPC.
java.io.IOException: Failed to connect to /192.168.128.153:51359
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:245)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:187)
    at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:198)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: 拒绝连接: /192.168.128.153:51359
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:323)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
Caused by: java.net.ConnectException: 拒绝连接
    ... 11 more
2020-03-12 15:05:16 WARN OneWayOutboxMessage:87 - Failed to send one-way RPC.
java.io.IOException: Failed to connect to /192.168.128.152:53095
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:245)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:187)
    at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:198)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
    at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: 拒绝连接: /192.168.128.152:53095
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:323)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more
Caused by: java.net.ConnectException: 拒绝连接
    ... 11 more
2020-03-12 15:05:16 INFO Master:54 - 192.168.128.151:55888 got disassociated, removing it.
2020-03-12 15:05:16 INFO Master:54 - Hadoop01:39679 got disassociated, removing it.
2020-03-12 15:05:16 WARN Master:66 - Got status update for unknown executor app-20200312150505-0000/0
```

kafka.common.KafkaException:

```java
package com;

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import kafka.serializer.StringEncoder;

public class kafkaProducer extends Thread {
    private String topic;

    public kafkaProducer(String topic) {
        super();
        this.topic = topic;
    }

    @Override
    public void run() {
        Producer producer = createProducer();
        int i = 0;
        while (true) {
            producer.send(new KeyedMessage<Integer, String>(topic, "message: " + i++));
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private Producer createProducer() {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", "localhost:2181"); // declare the ZooKeeper address
        properties.put("serializer.class", StringEncoder.class.getName());
        properties.put("metadata.broker.list", "localhost:9092"); // declare the Kafka broker
        return new Producer<Integer, String>(new ProducerConfig(properties));
    }

    public static void main(String[] args) {
        new kafkaProducer("test").start(); // use the topic "test" already created on the Kafka cluster
    }
}
```

```
kafka.common.KafkaException: fetching topic metadata for topics [Set(test)] from broker [ArrayBuffer(id:0,host:localhost,port:9092)] failed
	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
	at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
	at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
	at kafka.utils.Utils$.swallow(Utils.scala:172)
	at kafka.utils.Logging$class.swallowError(Logging.scala:106)
	at kafka.utils.Utils$.swallowError(Utils.scala:45)
	at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
	at kafka.producer.Producer.send(Producer.scala:77)
	at kafka.javaapi.producer.Producer.send(Producer.scala:33)
	at com.kafkaProducer.run(kafkaProducer.java:29)
Caused by: java.nio.channels.ClosedChannelException
	at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
	at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
	at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
	at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
	... 9 more
```
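Returning to the question this page asks (finding the ZooKeeper leader's IP from Java): one common approach is to send the documented `stat` four-letter command to each ensemble member over a plain socket and look for the `Mode: leader` line in the reply, which is what `zkServer.sh status` reports. The sketch below is a minimal illustration, not a definitive answer: the class name `ZkLeaderFinder`, the helper method names, and the server addresses are all placeholders, and on ZooKeeper 3.5+ the `stat` command must be enabled via `4lw.commands.whitelist` in `zoo.cfg`.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class ZkLeaderFinder {

    // Send a ZooKeeper four-letter command (e.g. "stat") and return the raw reply.
    static String fourLetterWord(String host, int port, String cmd) throws IOException {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 3000); // bounded connect timeout
            s.setSoTimeout(3000);                               // bounded read timeout
            s.getOutputStream().write(cmd.getBytes(StandardCharsets.UTF_8));
            s.getOutputStream().flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(s.getInputStream(), StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) { // server closes the socket after replying
                sb.append(line).append('\n');
            }
            return sb.toString();
        }
    }

    // Extract the value of the "Mode:" line from a stat reply: "leader", "follower", etc.
    static String parseMode(String statReply) {
        for (String line : statReply.split("\n")) {
            if (line.startsWith("Mode:")) {
                return line.substring("Mode:".length()).trim();
            }
        }
        return "unknown";
    }

    public static void main(String[] args) {
        // Placeholder addresses -- replace with the real ensemble members.
        List<String> servers = Arrays.asList(
                "127.0.0.1:2181", "127.0.0.1:2182", "127.0.0.1:2183");
        for (String server : servers) {
            String[] hp = server.split(":");
            try {
                String mode = parseMode(fourLetterWord(hp[0], Integer.parseInt(hp[1]), "stat"));
                System.out.println(server + " -> " + mode);
                if ("leader".equals(mode)) {
                    System.out.println("Leader IP: " + hp[0]);
                }
            } catch (IOException e) {
                System.out.println(server + " -> unreachable (" + e.getMessage() + ")");
            }
        }
    }
}
```

An unreachable or restarting node simply prints as unreachable, which also makes this usable as a rough health probe; after a leader election the same scan will report the newly elected leader.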
