Hadoop: start-dfs.sh command not found at startup

[root@sparkproject1 sbin]# start-dfs.sh
-bash: start-dfs.sh: command not found

JAVA_HOME is already configured in hadoop-env.sh, and hadoop version prints the version number.

2 answers

Configure the environment variables:
Add Hadoop's bin and sbin directories to /etc/profile,
then run source /etc/profile so the change takes effect.
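
A minimal sketch of what this answer describes, assuming Hadoop lives at /usr/local/hadoop (adjust HADOOP_HOME to your actual install path):

```
# Append to /etc/profile (adjust HADOOP_HOME to your install)
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# Reload the profile in the current shell, then confirm the script resolves
source /etc/profile
which start-dfs.sh
```

Note that source only affects the current shell; other already-open terminals need their own source /etc/profile or a re-login.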

shuai7boy
shuai7boy replying to Meng6026775: How did you solve this? I hit the same problem today and it's giving me a headache.
6 months ago
FlyAngle1
我是一只小小小小小鸟 replying to Meng6026775: OK
about 2 years ago
Meng6026775
Meng6026775 replying to 我是一只小小小小小鸟: Thanks, it's solved now.
about 2 years ago
Meng6026775
Meng6026775 replying to 我是一只小小小小小鸟: export JAVA_HOME=/usr/java/latest export HADOOP_HOME=/usr/local/hadoop export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
about 2 years ago
FlyAngle1
我是一只小小小小小鸟: If it still doesn't work after running source, post your Hadoop directory layout and your profile configuration.
about 2 years ago
Meng6026775
Meng6026775 replying to Meng6026775: Still not working.
about 2 years ago
Meng6026775
Meng6026775: The environment variables are configured.
about 2 years ago

~/.bashrc configuration:
export JAVA_HOME=/usr/java/latest
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Hadoop path: /usr/local/hadoop
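
If the command is still not found after the ~/.bashrc edit above, a few sanity checks (paths assume the /usr/local/hadoop layout from this thread):

```
source ~/.bashrc                         # reload the edited profile
echo $PATH | tr ':' '\n' | grep hadoop   # the sbin directory should appear
ls /usr/local/hadoop/sbin/start-dfs.sh   # the script must actually exist here
```

If the ls fails, HADOOP_HOME points at the wrong directory, for example at the parent of a nested hadoop-x.y.z folder.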

Other related questions
New to Hadoop: start-all.sh problem
Hadoop version: hadoop-2.6.5
Environment: ![图片说明](https://img-ask.csdn.net/upload/201711/08/1510104898_307684.png)
HADOOP_HOME also differs per machine: each is under its own user's home directory, e.g.:
/home/yann/hadoop
/home/ubuntu01/hadoop
/home/ubuntu02/hadoop
When I start with start-all.sh, it reports the following error:
yann@yann-laptop:~/develop/tool/hadoop-2.6.5/sbin$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [yann-laptop]
yann-laptop: namenode running as process 7041. Stop it first.
yann@ubuntu01-virtual-machine's password: The authenticity of host 'ubuntu02-virtual-machine (192.168.2.182)' can't be established.
ECDSA key fingerprint is 57:ee:92:a8:85:85:ef:16:26:a3:b7:1d:54:77:19:18.
Are you sure you want to continue connecting (yes/no)?
hadoop/etc/hadoop/slaves:
ubuntu01-virtual-machine
ubuntu02-virtual-machine
The SSH public keys are already in place:
ssh-rsa XXXXX......XXXXXX yann@yann-laptop
ssh-rsa XXXXX......XXXXXX ubuntu01@ubuntu01-virtual-machine
ssh-rsa XXXXX......XXXXXX ubuntu02@ubuntu02-virtual-machine
From the prompt, it is connecting to ubuntu01-virtual-machine as user yann. How should this be configured when master and slaves use different usernames?
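
One common workaround for mismatched usernames (a suggestion, not from the thread) is an SSH client config on the master so the start scripts connect as the right remote user without touching the slaves file:

```
# ~/.ssh/config on the master (yann@yann-laptop)
Host ubuntu01-virtual-machine
    User ubuntu01
Host ubuntu02-virtual-machine
    User ubuntu02
```

The differing install paths are a separate problem: hadoop-daemons.sh runs cd "$HADOOP_PREFIX" on every slave with the master's value, so keeping Hadoop at the same path on every node avoids the directory-not-found error that follows.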
Running start-dfs.sh reports: master: ERROR: JAVA_HOME is not set and could not be found.
I have already set the absolute path of JAVA_HOME in hadoop-env.sh, but it still errors out:
```
###
# Generic settings for HADOOP
###
# Technically, the only required environment variable is JAVA_HOME.
# All others are optional. However, the defaults are probably not
# preferred. Many sites configure these options outside of Hadoop,
# such as in /etc/profile.d
# The java implementation to use. By default, this environment
# variable is REQUIRED on ALL platforms except OS X!
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_211
```
My VM is VMware Workstation 14.0.0, the Linux is Ubuntu 12.04 desktop amd64, the JDK is jdk-1.8.0_211, and Hadoop is 3.1.2.
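
A few hedged checks for this symptom, using the paths quoted in the question: over ssh, start-dfs.sh runs in a non-interactive shell, so JAVA_HOME must come from the hadoop-env.sh that Hadoop actually reads, not from the login profile.

```
# Does the configured JDK path really exist?
ls /usr/lib/jvm/jdk1.8.0_211/bin/java

# Is the export line uncommented in the file Hadoop reads?
grep '^export JAVA_HOME' $HADOOP_HOME/etc/hadoop/hadoop-env.sh

# If HADOOP_CONF_DIR points elsewhere, that copy of hadoop-env.sh wins
echo $HADOOP_CONF_DIR
```

A leading # on the export line, or an edit made to a different copy of the file, are the usual culprits.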
Hadoop cluster: start-yarn.sh errors, probably JDK-related, please advise
**The JDK used to be installed at /usr/local/jdk1.7.0_80; later I created a folder jdk and moved jdk1.7.0_80 into it, so it is now /usr/local/jdk/jdk1.7.0_80.**
**JAVA_HOME in /etc/profile was updated and the file was sourced.** ![图片说明](https://img-ask.csdn.net/upload/201902/21/1550740813_416088.png)
**which java is also correct.** ![图片说明](https://img-ask.csdn.net/upload/201902/21/1550740890_610489.png)
**JAVA_HOME in hadoop-env.sh was updated as well.** ![图片说明](https://img-ask.csdn.net/upload/201902/21/1550741528_856329.png)
**I ran start-dfs.sh first, no problem.** ![图片说明](https://img-ask.csdn.net/upload/201902/21/1550741211_41850.jpg)
**Then start-yarn.sh fails.** ![图片说明](https://img-ask.csdn.net/upload/201902/21/1550741335_349468.png)
**Please advise.**
hadoop start-all.sh problem
Hadoop version: hadoop-2.6.5
Environment: ![图片说明](https://img-ask.csdn.net/upload/201711/08/1510127732_32215.png)
HADOOP_HOME also differs per machine: each is under its own user's home directory, e.g.:
/home/yann/hadoop
/home/ubuntu01/hadoop
/home/ubuntu02/hadoop
When I start with start-all.sh, it reports the following error:
yann@yann-laptop:~/develop/tool/hadoop-2.6.5/sbin$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [yann-laptop]
yann-laptop: namenode running as process 7041. Stop it first.
yann@ubuntu01-virtual-machine's password: The authenticity of host 'ubuntu02-virtual-machine (192.168.2.182)' can't be established.
ECDSA key fingerprint is 57:ee:92:a8:85:85:ef:16:26:a3:b7:1d:54:77:19:18.
Are you sure you want to continue connecting (yes/no)?
hadoop/etc/hadoop/slaves:
ubuntu01-virtual-machine
ubuntu02-virtual-machine
The SSH public keys are already in place:
ssh-rsa XXXXX......XXXXXX yann@yann-laptop
ssh-rsa XXXXX......XXXXXX ubuntu01@ubuntu01-virtual-machine
ssh-rsa XXXXX......XXXXXX ubuntu02@ubuntu02-virtual-machine
From the prompt, it is using yann to connect to ubuntu01-virtual-machine. How should this be configured, or is this setup discouraged?
(1) Different usernames on master and slaves. (When I changed the slaves file to ubuntu01@ubuntu01-virtual-machine, the problem above went away; what followed is that on ubuntu01 it looks for /home/yann/hadoop, which of course doesn't exist, so it reports directory not found. Which leads to the second point below.)
(2) Different Hadoop install directories. (It seems many people install Hadoop under /usr/ on both master and slaves.)
[Beginner help] start-dfs.sh
**My namenode and datanode start normally, but running the start-dfs.sh command produces errors: every line reports ..Could not resolve hostname...!**
The connection mode is host-only; hadoop 2.6.0.
```
[root@h1 ~]# cat /etc/hostname
192.168.1.101
[root@h1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.101 h1
192.168.1.102 h2
192.168.1.103 h3
192.168.1.104 h4
192.168.1.105 h5
192.168.1.106 h6
192.168.1.107 h7
[root@h1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=h1
```
This is my own simulated setup, please bear with me. Hoping someone can help!
```
[root@h1 ~]# **start-dfs.sh**
16/02/27 14:15:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Java HotSpot(TM) Client VM warning: You have loaded library /usr/local/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now. It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'. h1 h2]
sed: -e expression #1, char 6: unknown option to `s'
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Temporary failure in name resolution
warning:: ssh: Could not resolve hostname warning:: Temporary failure in name resolution
VM: ssh: Could not resolve hostname VM: Temporary failure in name resolution
You: ssh: Could not resolve hostname You: Temporary failure in name resolution
Java: ssh: Could not resolve hostname Java: Temporary failure in name resolution
have: ssh: Could not resolve hostname have: Temporary failure in name resolution
loaded: ssh: Could not resolve hostname loaded: Temporary failure in name resolution
which: ssh: Could not resolve hostname which: Temporary failure in name resolution
might: ssh: Could not resolve hostname might: Temporary failure in name resolution
disabled: ssh: Could not resolve hostname disabled: Temporary failure in name resolution
stack: ssh: Could not resolve hostname stack: Temporary failure in name resolution
library: ssh: Could not resolve hostname library: Temporary failure in name resolution
guard.: ssh: Could not resolve hostname guard.: Temporary failure in name resolution
The: ssh: Could not resolve hostname The: Temporary failure in name resolution
have: ssh: Could not resolve hostname have: Temporary failure in name resolution
VM: ssh: Could not resolve hostname VM: Temporary failure in name resolution
Client: ssh: Could not resolve hostname Client: Temporary failure in name resolution
fix: ssh: Could not resolve hostname fix: Temporary failure in name resolution
to: ssh: Could not resolve hostname to: Temporary failure in name resolution
try: ssh: Could not resolve hostname try: Temporary failure in name resolution
will: ssh: Could not resolve hostname will: Temporary failure in name resolution
guard: ssh: Could not resolve hostname guard: Temporary failure in name resolution
now.: ssh: Could not resolve hostname now.: Temporary failure in name resolution
the: ssh: Could not resolve hostname the: Temporary failure in name resolution
recommended: ssh: Could not resolve hostname recommended: Temporary failure in name resolution
stack: ssh: Could not resolve hostname stack: Temporary failure in name resolution
that: ssh: Could not resolve hostname that: Temporary failure in name resolution
It's: ssh: Could not resolve hostname It's: Temporary failure in name resolution
highly: ssh: Could not resolve hostname highly: Temporary failure in name resolution
with: ssh:
Could not resolve hostname with: Temporary failure in name resolution the: ssh: Could not resolve hostname the: Temporary failure in name resolution -c: Unknown cipher type 'cd' fix: ssh: Could not resolve hostname fix: Temporary failure in name resolution you: ssh: Could not resolve hostname you: Temporary failure in name resolution 'execstack: ssh: Could not resolve hostname 'execstack: Temporary failure in name resolution library: ssh: Could not resolve hostname library: Temporary failure in name resolution link: ssh: Could not resolve hostname link: Temporary failure in name resolution <libfile>',: ssh: Could not resolve hostname <libfile>',: Temporary failure in name resolution with: ssh: Could not resolve hostname with: Temporary failure in name resolution it: ssh: Could not resolve hostname it: Temporary failure in name resolution or: ssh: Could not resolve hostname or: Temporary failure in name resolution '-z: ssh: Could not resolve hostname '-z: Temporary failure in name resolution noexecstack'.: ssh: Could not resolve hostname noexecstack'.: Temporary failure in name resolution **h1: starting namenode**, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-namenode-h1.out **h2: starting namenode**, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-namenode-h2.out **192.168.1.105: starting datanode,** logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-h5.out **192.168.1.107: starting datanode**, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-h7.out **192.168.1.106: starting datanode**, logging to /usr/local/hadoop-2.6.0/logs/hadoop-root-datanode-h6.out Starting secondary namenodes [Java HotSpot(TM) Client VM warning: You have loaded library /usr/local/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now. It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.] 
sed: -e expression #1, char 6: unknown option to `s' HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Temporary failure in name resolution warning:: ssh: Could not resolve hostname warning:: Temporary failure in name resolution Client: ssh: Could not resolve hostname Client: Temporary failure in name resolution Java: ssh: Could not resolve hostname Java: Temporary failure in name resolution which: ssh: Could not resolve hostname which: Temporary failure in name resolution VM: ssh: Could not resolve hostname VM: Temporary failure in name resolution have: ssh: Could not resolve hostname have: Temporary failure in name resolution You: ssh: Could not resolve hostname You: Temporary failure in name resolution might: ssh: Could not resolve hostname might: Temporary failure in name resolution library: ssh: Could not resolve hostname library: Temporary failure in name resolution loaded: ssh: Could not resolve hostname loaded: Temporary failure in name resolution stack: ssh: Could not resolve hostname stack: Temporary failure in name resolution have: ssh: Could not resolve hostname have: Temporary failure in name resolution disabled: ssh: Could not resolve hostname disabled: Temporary failure in name resolution guard.: ssh: Could not resolve hostname guard.: Temporary failure in name resolution VM: ssh: Could not resolve hostname VM: Temporary failure in name resolution The: ssh: Could not resolve hostname The: Temporary failure in name resolution will: ssh: Could not resolve hostname will: Temporary failure in name resolution stack: ssh: Could not resolve hostname stack: Temporary failure in name resolution fix: ssh: Could not resolve hostname fix: Temporary failure in name resolution to: ssh: Could not resolve hostname to: Temporary failure in name resolution try: ssh: Could not resolve hostname try: Temporary failure in name resolution recommended: ssh: Could not resolve hostname recommended: Temporary failure in name resolution now.: ssh: Could not resolve hostname now.: Temporary failure in name resolution guard: ssh: Could not resolve hostname guard: Temporary failure in name resolution It's: ssh: Could not resolve hostname It's: Temporary failure in name resolution you: ssh: Could not resolve hostname you: Temporary failure in name resolution that: ssh: Could not resolve hostname that: Temporary failure in name resolution the: ssh: Could not resolve hostname the: Temporary failure in name resolution library: ssh: Could not resolve hostname library: Temporary failure in name resolution highly: ssh: Could not resolve hostname highly: Temporary failure in name resolution -c: Unknown cipher type 'cd' the: ssh: Could not resolve hostname the: Temporary failure in name resolution with: ssh: Could not resolve hostname with: Temporary failure in name resolution <libfile>',: ssh: Could not resolve hostname <libfile>',: Temporary failure in name resolution fix: ssh: Could not resolve hostname fix: Temporary failure in name resolution 'execstack: ssh: Could not resolve hostname 'execstack: Temporary failure in name resolution link: ssh: Could not resolve hostname link: Temporary failure in name resolution or: ssh: Could not resolve hostname or: Temporary failure in name resolution with: ssh: Could not resolve hostname with: Temporary failure in name resolution '-z: ssh: Could not resolve hostname '-z: Temporary failure in name resolution it: ssh: Could not resolve hostname it: Temporary failure in name resolution noexecstack'.: ssh: Could not resolve hostname noexecstack'.: Temporary failure 
in name resolution 16/02/27 14:16:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable ```
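
The bogus "hostnames" (HotSpot(TM):, warning:, VM:, ...) are the words of the JVM stack-guard warning being re-parsed as a host list by the slaves script. A commonly suggested mitigation, assuming this question's hadoop-2.6.0 layout, is to point the JVM at the native library directory in etc/hadoop/hadoop-env.sh so the warning is not emitted at all:

```
# In etc/hadoop/hadoop-env.sh
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
```

Separately, the /etc/hostname shown above contains an IP address (192.168.1.101) rather than the hostname h1, which is worth fixing as well.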
Hadoop cluster: start-all.sh reports the following error
![图片说明](https://img-ask.csdn.net/upload/201512/28/1451311731_549842.png) From the look of it, the problem should be in the start-dfs.sh script, but no absolute path is configured in it....
Problem starting DFS in Hadoop
I'm new to Hadoop. After formatting the namenode, I started Hadoop with sudo sbin/start-dfs.sh and got this error:
hadoop@qiaoyu-Lenovo-G460:/usr/local/hadoop-2.4.1$ sudo sbin/start-dfs.sh
[sudo] password for hadoop:
Starting namenodes on [localhost]
root@localhost's password: localhost: Permission denied, please try again.
I searched for a long time; the most common advice is to change the password with sudo passwd. I tried that, but the situation above is unchanged. I'm stuck and need to keep moving; please help.
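
A hedged reading of the prompt: running the script under sudo makes it ssh as root (hence root@localhost's password), which the hadoop user's keys do not cover. The usual approach is to run the script as the hadoop user itself, with passwordless ssh to localhost:

```
# As the hadoop user (no sudo)
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa        # key pair with empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost                                   # should log in without a password
sbin/start-dfs.sh
```

If the Hadoop directories are owned by root, chown them to the hadoop user rather than reaching for sudo.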
Hadoop 3.1.0 distributed environment setup
1. Environment: VMware, CentOS 6.5, JDK 1.8 (Oracle), Hadoop 3.1.0.
Running start-dfs.sh on the master node only starts the namenode and datanode on master, plus the secondarynamenode on slave1. The datanode processes on all the other slave nodes never start; I have to start each one manually on the node with hdfs --daemon start datanode. The namenode and datanode logs on master show nothing abnormal.
[root@master hadoop-3.1.0]# start-dfs.sh
Starting namenodes on [master]
Starting datanodes
Starting secondary namenodes [node1]
2019-07-14 11:04:46,521 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@master hadoop-3.1.0]# jps
29025 NameNode
29147 DataNode
29420 Jps
Question: why can't start-dfs.sh start the datanodes on the other slave nodes, while it can start the secondarynamenode on a slave? Please take a look.
![图片说明](https://img-ask.csdn.net/upload/201907/14/1563100513_35604.png)
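
A likely culprit to rule out (not confirmed by the post): in Hadoop 3.x the datanode host list moved from etc/hadoop/slaves to etc/hadoop/workers, while the secondarynamenode location comes from hdfs-site.xml; that split would explain exactly this behavior.

```
# Hadoop 3.x reads the datanode hosts from 'workers', not 'slaves'
cat $HADOOP_HOME/etc/hadoop/workers
# it should list every datanode host, one per line, e.g.
#   master
#   node1
#   node2
```

Passwordless ssh from master to every host in that file is also required for start-dfs.sh to reach them.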
A puzzling Hadoop startup script
**Tracing the start-dfs.sh startup script, I found something very puzzling:**
Hadoop version: Apache Hadoop 2.6.5
**1. start-dfs.sh**
start-dfs.sh contains this statement:
```
"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
 --config "$HADOOP_CONF_DIR" \
 --hostnames "$NAMENODES" \
 --script "$bin/hdfs" start namenode $nameStartOpt
```
The corresponding command is
```
hadoop-daemons.sh --config "$HADOOP_CONF_DIR" --hostnames "$NAMENODES" --script "$bin/hdfs" start namenode $nameStartOpt
```
Following into hadoop-daemons.sh:
**2. hadoop-daemons.sh**
```
exec "$bin/slaves.sh" --config $HADOOP_CONF_DIR cd "$HADOOP_PREFIX" \; "$bin/hadoop-daemon.sh" --config $HADOOP_CONF_DIR "$@"
```
Execution reaches here and finally runs
```
"$bin/hadoop-daemon.sh" --config $HADOOP_CONF_DIR "$@"
```
I understand the "$@" here to be the whole string of arguments passed in from [start-dfs.sh]:
```
--config "$HADOOP_CONF_DIR" --hostnames "$NAMENODES" --script "$bin/hdfs" start namenode $nameStartOpt
```
Continuing the trace, execution reaches the hadoop-daemon.sh script:
**3. hadoop-daemon.sh**
```
hadoopScript="$HADOOP_PREFIX"/bin/hadoop
if [ "--script" = "$1" ]
  then
    shift
    hadoopScript=$1
    shift
fi
startStop=$1
shift
command=$1
shift
```
No shift has happened before this point. So what does comparing
```
$1="--script"
```
mean here? Shouldn't the arguments passed from hadoop-daemons.sh into this script be
```
--config "$HADOOP_CONF_DIR" --config "$HADOOP_CONF_DIR" --hostnames "$NAMENODES" --script "$bin/hdfs" start namenode $nameStartOpt
```
these? Why does the handling above behave as if only
```
--script "$bin/hdfs" start namenode $nameStartOpt
```
was passed in? I really can't figure it out.
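
The missing piece is that each of these scripts sources libexec/hadoop-config.sh near the top, and a sourced file shares the caller's positional parameters: the shifts inside it consume --config and --hostnames (with their values) before the "$@" on the exec line is ever expanded. A toy reproduction of that shell behavior (an illustration, not the real Hadoop code):

```
cat > parseargs.sh <<'EOF'
# consume leading generic options, the way hadoop-config.sh does
while [ $# -gt 0 ]; do
  case "$1" in
    --config|--hostnames) shift 2 ;;   # drop the flag and its value
    *) break ;;
  esac
done
EOF

cat > main.sh <<'EOF'
. ./parseargs.sh            # sourcing shares and mutates this script's "$@"
echo "remaining: $@"
EOF

bash main.sh --config /tmp/conf --hostnames nn1 --script bin/hdfs start namenode
# prints: remaining: --script bin/hdfs start namenode
```

So by the time hadoop-daemon.sh reaches the if [ "--script" = "$1" ] test, the generic options have already been eaten by the sourced hadoop-config.sh.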
Hadoop cluster: hdfs dfs -ls / lists the wrong directory
I set up a Hadoop cluster. With the command hdfs dfs -ls /, what gets listed is the root of the local filesystem; only hdfs dfs -ls hdfs://servicename/ lists the directories actually on HDFS. What could be the cause? Directories created by Hive also end up on the local filesystem. The cluster configuration is as follows.

Cluster plan (hostname / IP / installed software / running processes):
hadoop01 192.168.175.129 jdk, hadoop: NameNode, DFSZKFailoverController (zkfc)
hadoop02 192.168.175.127 jdk, hadoop: NameNode, DFSZKFailoverController (zkfc)
hadoop03 192.168.175.126 jdk, hadoop: ResourceManager
hadoop04 192.168.175.125 jdk, hadoop: ResourceManager
hadoop05 192.168.175.124 jdk, hadoop, zookeeper: DataNode, NodeManager, JournalNode, QuorumPeerMain
hadoop06 192.168.175.123 jdk, hadoop, zookeeper: DataNode, NodeManager, JournalNode, QuorumPeerMain
hadoop07 192.168.175.122 jdk, hadoop, zookeeper: DataNode, NodeManager, JournalNode, QuorumPeerMain
windows: NLB; LINUX: LVS

1. After installing the Linux VMs, set the VM network to host-only mode, then assign IPs (hadoop01 as the example):
DEVICE="eth0"
BOOTPROTO="static" ###
HWADDR="00:0C:29:3C:BF:E7"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="ce22eeca-ecde-4536-8cc2-ef0dc36d4a8c"
IPADDR="192.168.175.129" ###
NETMASK="255.255.255.0" ###
GATEWAY="192.168.175.1" ###

2. Change the hostname: vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hadoop01 ###

3. Disable the firewall:
# check firewall status
service iptables status
# stop the firewall
service iptables stop
# check whether the firewall starts on boot
chkconfig iptables --list
# disable firewall start on boot
chkconfig iptables off

4. Passwordless login:
# generate an ssh key pair
# go to my home directory
cd ~/.ssh
ssh-keygen -t rsa (press Enter four times)
After this command runs, two files are produced: id_rsa (private key) and id_rsa.pub (public key).
Copy the public key to the machines you want to log into without a password:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Or, if ssh-copy-id reports "ERROR: No identities found" because it cannot find the public key path, add -i with the path:
$ ssh-copy-id -i ~/.ssh/id_rsa.pub user@remote_ip

5. Host/IP mappings (configure the full mapping in /etc/hosts on every machine):
192.168.175.129 hadoop01
192.168.175.127 hadoop02
192.168.175.126 hadoop03
192.168.175.125 hadoop04
192.168.175.124 hadoop05
192.168.175.123 hadoop06
192.168.175.122 hadoop07

6. Configure the Java environment variables in /etc/profile:
export JAVA_HOME=/lichangwu/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin
# reload profile
source /etc/profile
If a version error is reported, vi /etc/selinux/config, set SELINUX=disabled, then reboot the VM.

7. Install zookeeper:
1. Install and configure the zookeeper cluster (on hadoop05):
1.1 Unpack:
tar -zxvf zookeeper-3.4.6.tar.gz -C /lichangwu/
1.2 Edit the configuration:
cd /lichangwu/zookeeper-3.4.6/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
Change: dataDir=/lichangwu/zookeeper-3.4.6/tmp
Append at the end:
server.1=hadoop05:2888:3888
server.2=hadoop06:2888:3888
server.3=hadoop07:2888:3888
Save and exit. Then create a tmp folder:
mkdir /lichangwu/zookeeper-3.4.6/tmp
Then create an empty file:
touch /lichangwu/zookeeper-3.4.6/tmp/myid
Finally write the ID into that file:
echo 1 > /lichangwu/zookeeper-3.4.6/tmp/myid
1.3 Copy the configured zookeeper to the other nodes (first create a /lichangwu directory in the root of hadoop06 and hadoop07: mkdir /lichangwu):
scp -r /lichangwu/zookeeper-3.4.6/ hadoop06:/lichangwu/
scp -r /lichangwu/zookeeper-3.4.6/ hadoop07:/lichangwu/
Note: update the contents of /lichangwu/zookeeper-3.4.6/tmp/myid on hadoop06 and hadoop07 accordingly:
itcast06: echo 2 > /lichangwu/zookeeper-3.4.6/tmp/myid
itcast07: echo 3 > /lichangwu/zookeeper-3.4.6/tmp/myid

8. Install and configure the Hadoop cluster (operating on hadoop01):
2.1 Unpack:
tar -zxvf hadoop-2.4.1.tar.gz -C /lichangwu/
2.2 Configure HDFS (all hadoop 2.0 configuration files are under $HADOOP_HOME/etc/hadoop):
# add hadoop to the environment variables
vim /etc/profile
export JAVA_HOME=/lichangwu/jdk1.7.0_79
export HADOOP_HOME=/lichangwu/hadoop-2.4.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
# the hadoop 2.0 configuration files are all under $HADOOP_HOME/etc/hadoop
cd /lichangwu/hadoop-2.4.1/etc/hadoop
2.2.1 Edit hadoop-env.sh:
export JAVA_HOME=/lichangwu/jdk1.7.0_79
2.2.2 Edit core-site.xml:
<configuration>
    <!-- set the hdfs nameservice to ns1 -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <!-- hadoop temp directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/lichangwu/hadoop-2.4.1/tmp</value>
    </property>
    <!-- zookeeper quorum address -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
    </property>
</configuration>
2.2.3 Edit hdfs-site.xml:
<configuration>
    <!-- the hdfs nameservice ns1; must match core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <!-- ns1 has two NameNodes, nn1 and nn2 -->
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoop01:9000</value>
    </property>
    <!-- HTTP address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoop01:50070</value>
    </property>
    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoop02:9000</value>
    </property>
    <!-- HTTP address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoop02:50070</value>
    </property>
    <!-- where the NameNode metadata lives on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop05:8485;hadoop06:8485;hadoop07:8485/ns1</value>
    </property>
    <!-- where the JournalNode stores data on local disk -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/lichangwu/hadoop-2.4.1/journal</value>
    </property>
    <!-- enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- failover implementation -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- fencing methods, separated by newlines, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <!-- the sshfence mechanism needs passwordless ssh -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- sshfence timeout -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>
2.2.4 Edit mapred-site.xml:
<configuration>
    <!-- run MapReduce on yarn -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
2.2.5 Edit yarn-site.xml:
<configuration>
    <!-- enable RM high availability -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- the RM cluster id -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <!-- the RM names -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- the RM addresses -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop03</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop04</value>
    </property>
    <!-- the zk quorum address -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
2.2.6 Edit slaves (slaves specifies the worker nodes; HDFS is started on itcast01 and yarn on itcast03, so the slaves file on itcast01 lists the datanodes and the one on itcast03 lists the nodemanagers):
hadoop05
hadoop06
hadoop07
2.2.7 Configure passwordless login:
# first, passwordless login from hadoop01 to hadoop02, hadoop03, hadoop04, hadoop05, hadoop06, hadoop07
# generate a key pair on hadoop01
ssh-keygen -t rsa
# copy the public key to the other nodes, including itself
ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop03
ssh-copy-id hadoop04
ssh-copy-id hadoop05
ssh-copy-id hadoop06
ssh-copy-id hadoop07
# passwordless login from hadoop03 to hadoop04, hadoop05, hadoop06, hadoop07
# generate a key pair on hadoop03
ssh-keygen -t rsa
# copy the public key to the other nodes
ssh-copy-id hadoop04
ssh-copy-id hadoop05
ssh-copy-id hadoop06
ssh-copy-id hadoop07
# note: the two namenodes need passwordless ssh between them;
# don't forget passwordless login from hadoop02 to hadoop01
# generate a key pair on hadoop02
ssh-keygen -t rsa
ssh-copy-id -i hadoop01
2.4 Copy the configured hadoop to the other nodes:
scp -r hadoop-2.4.1/ hadoop02:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop03:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop04:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop05:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop06:/lichangwu/hadoop-2.4.1/
scp -r hadoop-2.4.1/ hadoop07:/lichangwu/hadoop-2.4.1/
### Note: follow the steps below strictly.
2.5 Start the zookeeper cluster (on hadoop05, hadoop06, hadoop07):
cd /lichangwu/zookeeper-3.4.6/bin/
./zkServer.sh start
# check status: one leader, two followers
./zkServer.sh status
2.6 Start the journalnodes (on hadoop05, hadoop06, hadoop07):
cd /lichangwu/hadoop-2.4.1
sbin/hadoop-daemon.sh start journalnode
# verify with jps: hadoop05, hadoop06, hadoop07 each gain a JournalNode process
2.7 Format HDFS:
# on hadoop01 run:
hdfs namenode -format
# formatting generates files under the hadoop.tmp.dir configured in core-site.xml,
# here /lichangwu/hadoop-2.4.1/tmp; then copy /lichangwu/hadoop-2.4.1/tmp to
# hadoop02's /lichangwu/hadoop-2.4.1/:
scp -r tmp/ hadoop02:/lichangwu/hadoop-2.4.1/
2.8 Format ZK (on hadoop01 only):
hdfs zkfc -formatZK
2.9 Start HDFS (on hadoop01):
sbin/start-dfs.sh
2.10 Start YARN (##### note #####: run start-yarn.sh on hadoop03; if hadoop04 doesn't come up, run start-yarn.sh once more on hadoop04. The namenode and resourcemanager are kept on separate machines for performance, since both consume a lot of resources):
sbin/start-yarn.sh
With that, hadoop-2.4.1 is configured. Check in a browser:
http://192.168.175.129:50070 NameNode 'hadoop01:9000' (active)
http://192.168.175.127:50070 NameNode 'hadoop02:9000' (standby)
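
Back to the actual question, a hedged check: when the client falls back to listing the local filesystem, it usually is not reading this core-site.xml at all (unset or wrong HADOOP_CONF_DIR, or a second Hadoop install on the PATH):

```
# Which default filesystem does the client actually resolve?
hdfs getconf -confKey fs.defaultFS
# 'file:///' here means core-site.xml is not being picked up; check which
# configuration directory and which hdfs binary are in use
echo $HADOOP_CONF_DIR
which hdfs
```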
HDFS startup warning: WARN util.NativeCodeLoader
Starting HDFS prints the following warning:
[hadoop@hadoop tmp]$ start-dfs.sh
15/02/02 20:39:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop]
hadoop: starting namenode, logging to /usr/hadoop-2.5.2/logs/hadoop-hadoop-namenode-hadoop.out
localhost: starting datanode, logging to /usr/hadoop-2.5.2/logs/hadoop-hadoop-datanode-hadoop.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/hadoop-2.5.2/logs/hadoop-hadoop-secondarynamenode-hadoop.out
15/02/02 20:40:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Online sources say it is a Linux version problem.
[hadoop@hadoop ~]$ uname -a
Linux hadoop 2.6.32-431.el6.i686 #1 SMP Fri Nov 22 00:26:36 UTC 2013 i686 i686 i386 GNU/Linux
JDK version:
[hadoop@hadoop ~]$ java -version
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) Client VM (build 19.1-b02, mixed mode, sharing)
Hadoop version:
[hadoop@hadoop ~]$ hadoop version
Hadoop 2.5.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc72e9b000545b86b75a61f4835eb86d57bfafc0
Compiled by jenkins on 2014-11-14T23:45Z
Compiled with protoc 2.5.0
From source with checksum df7537a4faa4658983d397abf4514320
This command was run using /usr/hadoop-2.5.2/share/hadoop/common/hadoop-common-2.5.2.jar
Could an expert take a look?
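
The warning itself is harmless (Hadoop falls back to the built-in Java classes). The likely mismatch here, not certain from the post, is that the bundled native library is 64-bit while this OS is i686 (32-bit). Two ways to confirm, assuming the 2.5.2 layout shown:

```
# Report which native libraries can be loaded on this platform
hadoop checknative -a
# Inspect the bundled library's architecture
file /usr/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0
```

If it is a 64-bit library on a 32-bit OS, the options are to rebuild the native libraries from source or simply live with the warning.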
hadoop 2.7.2 distributed setup: namenode fails to start after formatting
Step 1: run hadoop namenode -formate
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
STARTUP_MSG: java = 1.7.0_76
************************************************************/
16/08/02 04:26:16 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/08/02 04:26:16 INFO namenode.NameNode: createNameNode [-formate]
Usage: java NameNode [-backup] | [-checkpoint] | [-format [-clusterid cid ] [-force] [-nonInteractive] ] | [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] | [-rollback] | [-rollingUpgrade <rollback|downgrade|started> ] | [-finalize] | [-importCheckpoint] | [-initializeSharedEdits] | [-bootstrapStandby] | [-recover [ -force] ] | [-metadataVersion ] ]
16/08/02 04:26:16 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.234.100
Step 2: run start-all.sh. The result:
[root@master sbin]# sh start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/08/02 05:45:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-master.out
slave2: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave2.out
slave3: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave3.out
slave1: starting datanode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-slave1.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /usr/hadoop/hadoop-2.7.2/logs/hadoop-root-secondarynamenode-master.out
16/08/02 05:46:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-master.out
slave2: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave2.out
slave3: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave3.out
slave1: starting nodemanager, logging to /usr/hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-slave1.out
[root@master sbin]# jps
2613 ResourceManager
2467 SecondaryNameNode
2684 Jps
The namenode log:
2016-08-02 05:49:49,910 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
2016-08-02 05:49:49,928 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2016-08-02 05:49:49,928 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false. Rechecking.
2016-08-02 05:49:49,930 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false
2016-08-02 05:49:49,934 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2016-08-02 05:49:49,935 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2016-08-02 05:49:49,935 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2016-08-02 05:49:49,935 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
2016-08-02 05:49:49,949 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2016-08-02 05:49:49,961 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.234.100
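
The startup log points at the cause: createNameNode [-formate] followed by the usage text means the flag was misspelled (-formate instead of -format), so the format never ran, which matches the later "NameNode is not formatted" exception. The fix:

```
# Spell the flag correctly, then restart HDFS
hdfs namenode -format
start-dfs.sh
jps    # a NameNode process should now appear
```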
Error when testing WordCount on Hadoop
I started Hadoop 2.7.0 with sbin/start-dfs.sh and opened http://localhost:50070 in a browser, which shows: Non Heap Memory used 21.41MB of 22.62MB commited Non Heap Memory. Max Non Heap Memory is -1B. From searching, this looks like a non-heap memory overflow; how should it be solved?
Single-machine pseudo-distributed Hadoop: SecondaryNameNode won't start, please help
Environment: ubuntu-14.04.4-desktop-amd64 installed in VMware Fusion; Hadoop version 2.6.5.
After a single-machine install with pseudo-distributed configuration, running ./sbin/start-dfs.sh reports no error, but jps shows no SecondaryNameNode process.
core-site.xml:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/user/local/hadoop/tmp</value>
        <description>Abase for other temporary directories</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
</configuration>
Neither restarting start-dfs.sh nor reformatting the namenode makes any difference. Any insight would be appreciated.
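
Two hedged things to check: the daemon's own log usually states why it exited, and note that this core-site.xml writes file:/user/local/hadoop/tmp (user) while hdfs-site.xml uses /usr/local/hadoop/tmp (usr); a nonexistent temp path alone can stop a daemon.

```
# The SecondaryNameNode log normally says why it exited
tail -n 50 /usr/local/hadoop/logs/hadoop-*-secondarynamenode-*.log
# Confirm which of the two configured tmp paths actually exists
ls -ld /user/local/hadoop/tmp /usr/local/hadoop/tmp
```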
Hadoop installation under Cygwin on Windows
What causes the following?
```
Administrator@ZKCRJB84CNJ0ZTJ /cygdrive/c/hadoop/sbin
$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/12/31 20:04:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
]tarting namenodes on [172.29.66.77
: Name or service not knownstname 172.29.66.77
localhost: starting datanode, logging to /cygdrive/c/hadoop/logs/hadoop-Administrator-datanode-ZKCRJB84CNJ0ZTJ.out
]tarting secondary namenodes [0.0.0.0
: Name or service not knownstname 0.0.0.0
15/12/31 20:05:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /cygdrive/c/hadoop/logs/yarn-Administrator-resourcemanager-ZKCRJB84CNJ0ZTJ.out
localhost: starting nodemanager, logging to /cygdrive/c/hadoop/logs/yarn-Administrator-nodemanager-ZKCRJB84CNJ0ZTJ.out

Administrator@ZKCRJB84CNJ0ZTJ /cygdrive/c/hadoop/sbin
$ jps
1164 ResourceManager
9948 Jps
```
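
The mangled output (]tarting ..., Name or service not knownstname) is the classic signature of carriage returns (\r) in the files Hadoop reads: a CRLF-terminated hostname makes ssh fail, and the stray \r rewrites the start of the line. A hedged fix on Cygwin is to normalize line endings in the configuration (dos2unix ships as a Cygwin package):

```
# Convert the config files (and the slaves file) to Unix line endings
cd /cygdrive/c/hadoop/etc/hadoop
dos2unix *.sh *.xml slaves
```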
Error starting HDFS and YARN together in Hadoop
```
liuye@liuye-VirtualBox:~$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
/usr/local/hadoop/bin/hdfs: line 276: /usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
Starting namenodes on []
liuye@localhost's password:
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-liuye-namenode-liuye-VirtualBox.out
localhost: /usr/local/hadoop/bin/hdfs: line 276: /usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
liuye@localhost's password:
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-liuye-datanode-liuye-VirtualBox.out
localhost: /usr/local/hadoop/bin/hdfs: line 276: /usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
/usr/local/hadoop/bin/hdfs: line 276: /usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-liuye-resourcemanager-liuye-VirtualBox.out
/usr/local/hadoop/bin/yarn: line 284: /usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
liuye@localhost's password:
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-liuye-nodemanager-liuye-VirtualBox.out
localhost: /usr/local/hadoop/bin/yarn: line 284: /usr/lib/jvm/java-7-openjdk-amd64/bin/java: No such file or directory
liuye@liuye-VirtualBox:~$
```
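
The message is literal: the configured JVM path does not exist. A hedged fix is to locate an installed JDK and point JAVA_HOME at it in hadoop-env.sh (the java-8 path below is an assumption; use whatever the listing shows):

```
# Which JVMs are actually installed?
ls /usr/lib/jvm/
readlink -f $(which java)    # real path of the currently active java, if any
# Then, in /usr/local/hadoop/etc/hadoop/hadoop-env.sh, set for example:
#   export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
```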
Hadoop pseudo-distributed setup: master cannot connect to the slaves
Six machines, set up with Hadoop, JDK, and zookeeper:
01, 02: master
03: ResourceManager
04, 05, 06: datanode, nodemanager
The slaves file lists 04, 05, 06.
Running start-dfs.sh on 01 starts the namenode on 02 and the datanodes on 04, 05, 06. Running start-yarn.sh on 03 starts the ResourceManager on 03 and the nodemanagers on 04, 05, 06.
But the Hadoop web UI on 01 shows no datanodes, and uploading a file to HDFS fails with an error that effectively says no datanode is available.
Firewalls are disabled on all six VMs, and none of the fixes found online have worked. Please lend a hand.
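
When datanode processes run but never show up in the NameNode UI, the datanode's own log usually says why it cannot register; common causes are an unreachable or misresolved fs.defaultFS address and a clusterID mismatch after a reformat. A speculative first step on any of 04-06 (the namenode host and port below are placeholders):

```
# On a datanode host: the registration failure is spelled out in its log
tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
# Can this node resolve and reach the namenode's RPC port?
ping -c 1 <namenode-host>
telnet <namenode-host> 9000
```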
Hadoop configuration problem: jps shows no namenode
While configuring Hadoop, jps shows everything except the namenode. After I run start-dfs.sh, I get the output below:
![图片说明](https://img-ask.csdn.net/upload/201708/22/1503409611_645641.jpg)
Then after I run hadoop namnode -format, the following error appears:
![图片说明](https://img-ask.csdn.net/upload/201708/22/1503409642_287747.jpg)
How should this be resolved?
Hadoop experts, please take a look... help needed!
[root@hadoop1 sbin]# start-dfs.sh
17/12/01 22:08:31 WARN hdfs.DFSUtil: Namenode for null remains unresolved for ID null. Check your hdfs-site.xml file to ensure namenodes are configured properly.
Starting namenodes on [hadoop01]
hadoop01: ssh: Could not resolve hostname hadoop01: Temporary failure in name resolution
Hadoop keeps reporting this at startup and I don't know why. Any ideas?
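
A hedged reading: the shell prompt shows the machine is hadoop1, while the configuration references hadoop01, and that name does not resolve. Checking resolution, and adding an /etc/hosts mapping if it is missing (the IP below is a placeholder), usually clears both messages:

```
ping -c 1 hadoop01
grep hadoop01 /etc/hosts
# if missing, add a line with the real address, e.g.
#   192.168.1.10  hadoop01
```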