Liangqingooo · 2024-04-24 15:50

Hadoop pseudo-distributed configuration error in a VM

Hadoop was installed in a VMware virtual machine, and an error came up while setting up the pseudo-distributed configuration.

The correct steps are supposed to be the following:

[screenshot]

I'm not sure where this step is supposed to be carried out. I first tried opening the core-site.xml file on the virtual machine, and it looks like this:

[screenshot]

Opening core-site.xml on Windows looks like this:

[screenshot]


What should I change, or where did it go wrong? Any help would be appreciated, thanks a lot.

4 answers

  • 专家-赤兔[在线] · Quality creator in programming frameworks · 2024-04-24 15:50

    Quoted from the free WeChat mini-program 皆我百晓生:

    It looks like you're trying to set up Hadoop in pseudo-distributed mode and are unsure how to edit the core-site.xml file. Don't worry, I'll walk you through the process step by step.

    First, make sure you have correctly downloaded and unpacked the Hadoop binary package into your Linux virtual machine. Then you need to set the environment variables. On Linux this is usually done by adding the following lines to ~/.bashrc (or ~/.bash_profile):

    export HADOOP_HOME=/usr/local/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin
    

    After that, reload the environment variables by running:

    source ~/.bashrc
    
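    A quick sanity check (assuming the /usr/local/hadoop path above): re-export the variables and confirm the shell sees them. The hadoop command itself will only resolve once the binaries are actually unpacked under $HADOOP_HOME:

```shell
# Sanity check for the environment setup above.
# /usr/local/hadoop is an assumption; adjust to your actual install path.
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
echo "$HADOOP_HOME"              # should print /usr/local/hadoop
command -v hadoop || echo "hadoop not on PATH yet (install not finished?)"
```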

    Now let's edit the core-site.xml file. Since you have already opened it on the Linux virtual machine, you should be looking at it in a text editor (probably vi or gedit). Replace the file's contents with the following:

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!-- 
    Licensed under the Apache License, Version 2.0 (the "License"); 
    you may not use this file except in compliance with the License. 
    You may obtain a copy of the License at 
    http://www.apache.org/licenses/LICENSE-2.0 
    Unless required by applicable law or agreed to in writing, software 
    distributed under the License is distributed on an "AS IS" BASIS, 
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
    See the License for the specific language governing permissions and 
    limitations under the License. See accompanying LICENSE file. 
    -->
    <!-- Put site-specific property overrides in this file. -->
    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
      </property>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
        <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. e.g., "hdfs://namenode.example.com:9000"</description>
      </property>
    </configuration>
    

    Make sure the directory /usr/local/hadoop/tmp exists. If it doesn't, create it and give the user that runs Hadoop ownership of it (more precise than a blanket chmod 777):

    sudo mkdir -p /usr/local/hadoop/tmp
    sudo chown "$USER":"$USER" /usr/local/hadoop/tmp
    

    Next, we need to edit another configuration file, hdfs-site.xml. Find it (usually /usr/local/hadoop/etc/hadoop/hdfs-site.xml) and add the following content:

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!-- 
     Licensed under the Apache License, Version 2.0 (the "License");
     you may not use this file except in compliance with the License.
     You may obtain a copy of the License at
     
     http://www.apache.org/licenses/LICENSE-2.0
     
     Unless required by applicable law or agreed to in writing, software
     distributed under the License is distributed on an "AS IS" BASIS,
     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     See the License for the specific language governing permissions and
     limitations under the License. See accompanying LICENSE file.
    -->
    
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
        <description>Default block replication.
          The actual number of replications can be specified when the file is created.
          The default is used if replication is not specified in create time.
        </description>
      </property>
    </configuration>
    
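    Before starting anything, it's worth a quick check that the XML you just saved contains the key properties; a malformed or incomplete *-site.xml is a very common cause of startup failures. Below is a minimal, Hadoop-free sketch using plain grep. It demonstrates on a trimmed temporary copy; on your machine, point the path at the real /usr/local/hadoop/etc/hadoop/core-site.xml instead.

```shell
# Sanity check: does a Hadoop *-site.xml contain a given property name?
# Pure grep; nothing Hadoop-specific is required to run it.
has_property() {
  # $1 = path to the xml file, $2 = property name
  grep -q "<name>$2</name>" "$1"
}

# Demo on a trimmed temp copy; on a real install, point this at
# /usr/local/hadoop/etc/hadoop/core-site.xml instead.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
has_property "$CONF" fs.defaultFS && echo "ok: fs.defaultFS is set"
has_property "$CONF" dfs.replication || echo "missing: dfs.replication"
rm -f "$CONF"
```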

    After saving and closing both files, start the Hadoop cluster. If this is the very first startup, format the NameNode once beforehand (note: this erases any existing HDFS data), then start HDFS. Run these commands as the user that owns the Hadoop installation rather than with sudo:

    $HADOOP_HOME/bin/hdfs namenode -format
    $HADOOP_HOME/sbin/start-dfs.sh
    

    Next, start the YARN resource manager:

    $HADOOP_HOME/sbin/start-yarn.sh
    

    Finally, check that all the services are running:

    jps
    

    You should see processes such as NameNode, DataNode, ResourceManager, and NodeManager running. If you hit any errors, check the log files under /usr/local/hadoop/logs; they give detailed error messages.
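    If you want to script that check, here is a small sketch (the daemon names are just the ones listed above) that reports which expected daemons are absent from jps output:

```shell
# Report which expected Hadoop daemons are missing from `jps` output.
check_daemons() {
  # $1 = output of `jps`; prints one line per expected daemon not found
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    printf '%s\n' "$1" | grep -qw "$d" || echo "missing: $d"
  done
}

# On the real machine you would run: check_daemons "$(jps)"
# Demo with canned input; expected: three "missing:" lines
# (SecondaryNameNode, ResourceManager, NodeManager).
check_daemons "2101 NameNode
2350 DataNode"
```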

    If you are doing this on Windows, you can open core-site.xml with a text editor such as Notepad++ and make the same edits. The remaining steps are the same as on Linux; only the way you set environment variables and run commands differs slightly.

