xiao321yang 2017-10-10 13:03

Ceph deployment: osd create under /home/osd fails

Today, while deploying on a physical machine, I had to put the OSDs under /home because most of the disk space from the earlier partitioning is there. But ceph-deploy osd create osd1:/home/ceph/osd and the subsequent activation both fail with the error below. How can I fix this?

[ceph@admin my-cluster]# ceph-deploy osd activate mon3:/home/ceph/osd
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.38): /usr/bin/ceph-deploy osd activate mon3:/home/ceph/osd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : activate
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('mon3', '/home/ceph/osd', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks mon3:/home/ceph/osd:
[mon3][DEBUG ] connection detected need for sudo
[mon3][DEBUG ] connected to host: mon3
[mon3][DEBUG ] detect platform information from remote host
[mon3][DEBUG ] detect machine type
[mon3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] activating host mon3 disk /home/ceph/osd
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[mon3][DEBUG ] find the location of an executable
[mon3][INFO ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /home/ceph/osd
[mon3][WARNIN] main_activate: path = /home/ceph/osd
[mon3][WARNIN] activate: Cluster uuid is 20fa28ad-98e6-4d89-bc2a-771e94e0de43
[mon3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[mon3][WARNIN] activate: Cluster name is ceph
[mon3][WARNIN] activate: OSD uuid is f2243a79-0e54-475a-ab83-11a2c4811ddb
[mon3][WARNIN] allocate_osd_id: Allocating OSD id...
[mon3][WARNIN] command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise f2243a79-0e54-475a-ab83-11a2c4811ddb
[mon3][WARNIN] command: Running command: /sbin/restorecon -R /home/ceph/osd/whoami.4359.tmp
[mon3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/ceph/osd/whoami.4359.tmp
[mon3][WARNIN] activate: OSD id is 1
[mon3][WARNIN] activate: Initializing OSD...
[mon3][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /home/ceph/osd/activate.monmap
[mon3][WARNIN] got monmap epoch 1
[mon3][WARNIN] command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 1 --monmap /home/ceph/osd/activate.monmap --osd-data /home/ceph/osd --osd-journal /home/ceph/osd/journal --osd-uuid f2243a79-0e54-475a-ab83-11a2c4811ddb --keyring /home/ceph/osd/keyring --setuser ceph --setgroup ceph
[mon3][WARNIN] activate: Marking with init system systemd
[mon3][WARNIN] command: Running command: /sbin/restorecon -R /home/ceph/osd/systemd
[mon3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/ceph/osd/systemd
[mon3][WARNIN] activate: Authorizing OSD key...
[mon3][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.1 -i /home/ceph/osd/keyring osd allow * mon allow profile osd
[mon3][WARNIN] added key for osd.1
[mon3][WARNIN] command: Running command: /sbin/restorecon -R /home/ceph/osd/active.4359.tmp
[mon3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/ceph/osd/active.4359.tmp
[mon3][WARNIN] activate: ceph osd.1 data dir is ready at /home/ceph/osd
[mon3][WARNIN] activate_dir: Creating symlink /var/lib/ceph/osd/ceph-1 -> /home/ceph/osd
[mon3][WARNIN] start_daemon: Starting ceph osd.1...
[mon3][WARNIN] command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@1
[mon3][WARNIN] command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@1 --runtime
[mon3][WARNIN] command_check_call: Running command: /usr/bin/systemctl enable ceph-osd@1
[mon3][WARNIN] Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@1.service to /usr/lib/systemd/system/ceph-osd@.service.
[mon3][WARNIN] command_check_call: Running command: /usr/bin/systemctl start ceph-osd@1
[mon3][WARNIN] Job for ceph-osd@1.service failed because the control process exited with error code. See "systemctl status ceph-osd@1.service" and "journalctl -xe" for details.
[mon3][WARNIN] Traceback (most recent call last):
[mon3][WARNIN] File "/usr/sbin/ceph-disk", line 9, in <module>
[mon3][WARNIN] load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[mon3][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5371, in run
[mon3][WARNIN] main(sys.argv[1:])
[mon3][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5322, in main
[mon3][WARNIN] args.func(args)
[mon3][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3483, in main_activate
[mon3][WARNIN] osd_id=osd_id,
[mon3][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3060, in start_daemon
[mon3][WARNIN] raise Error('ceph osd start failed', e)
[mon3][WARNIN] ceph_disk.main.Error: Error: ceph osd start failed: Command '['/usr/bin/systemctl', 'start', 'ceph-osd@1']' returned non-zero exit status 1
[mon3][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /home/ceph/osd
But if I remount the disk at /osd and then run ceph-deploy osd create hostname:/osd/home/ceph/osd, everything works. Why?
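
The failure surfaces at the final systemctl start step, so the first thing worth checking on mon3 is the unit's own log, as the error message itself suggests (a sketch using standard systemd tools and the default Ceph log path):

    # On mon3: find out why ceph-osd@1 refused to start
    systemctl status ceph-osd@1.service
    journalctl -xe -u ceph-osd@1.service

    # The OSD's own log usually names the concrete error
    tail -n 50 /var/log/ceph/ceph-osd.1.log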


1 answer

  • 你知我知皆知 2024-08-07 00:23

    This answer was drafted with reference to material from free WeChat mini-programs such as 皆我百晓生 and 券券喵儿, and organized by me.

    Your problem lies mainly in how the OSD under /home/ceph/osd was created. A directory-backed OSD should be created through the ceph-deploy osd create command rather than by setting up the data directory on the host by hand. The correct form is:

    ceph-deploy osd create hostname:/osd/home/ceph/osd
    

    where hostname is the name (or IP address) of the host that will carry the OSD, and /osd/home/ceph/osd is the data directory it will use.

    As for why it works after changing the mount point: ceph-deploy osd create handles the mounting and initialization itself and expects the data directory to be writable by the ceph user. If you set up the target path by hand, you can hit ownership problems or a directory that is not mounted correctly; the restorecon calls in the log above suggest SELinux labeling under /home may also be involved.
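
    A quick way to check this on the OSD node (a sketch; the paths match the poster's layout):

    # On mon3, as root: verify the data directory is owned by the ceph user
    ls -ld /home/ceph/osd
    chown -R ceph:ceph /home/ceph/osd

    # If SELinux is enforcing, inspect the directory's security context
    # (note the restorecon calls in the log above)
    ls -Zd /home/ceph/osd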

    Here is a complete example script showing how to create OSDs with ceph-deploy osd create:

    #!/bin/bash

    # Hosts that will each receive a directory-backed OSD
    OSD_HOSTS="mon1 mon2 mon3"
    OSD_DIR=/osd/home/ceph/osd

    # Create (prepare + activate) one OSD per host
    for host in $OSD_HOSTS; do
        ceph-deploy osd create "$host:$OSD_DIR"
    done


    Note that the hostnames and directory paths in this example script should be adjusted to your actual environment. Also make sure you have sufficient permissions for these operations, especially for the mounting and initialization steps.
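
    Once the OSDs are created, a quick sanity check from the admin node (standard ceph CLI):

    ceph -s          # overall cluster health
    ceph osd tree    # the new OSDs should show as 'up'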

