
ssh-copy-id: set up passwordless login from the deploy host to the other three hosts
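The key fan-out can be done in one loop on the deploy host. A minimal sketch, assuming root login and that the ceph-1..3 names (or their IPs) are reachable:

```shell
# Run once on the deploy host (ceph-0); prompts for each node's root password.
for i in 1 2 3; do
  ssh-copy-id "root@ceph-$i"
done
```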

The disk layout of all machines is shown below; vdb will be used as the Ceph data disk

Download ceph-deploy

 pip install ceph-deploy

Passwordless login + set hostnames

hostnamectl --static set-hostname ceph-0   # run on each node with its own name, ceph-0 through ceph-3

Configure /etc/hosts

172.17.163.105 ceph-0
172.17.112.206 ceph-1
172.17.227.100 ceph-2
172.17.67.157 ceph-3
scp /etc/hosts root@ceph-1:/etc/hosts
scp /etc/hosts root@ceph-2:/etc/hosts
scp /etc/hosts root@ceph-3:/etc/hosts

Install the packages on this machine (ceph-0) first

 rpm -ivhU liboath/liboath-*

Build the install list, filtering out the packages that cannot be installed (debug, k8s, mgr-rook, mgr-ssh):

 find rpmbuild/RPMS/ | grep '\.rpm' | grep -v debug | grep -v k8s | grep -v mgr-rook | grep -v mgr-ssh | xargs -i echo "{} \"
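The grep chain can be sanity-checked against a few sample names (the file names here are just illustrations of the four filtered patterns):

```shell
# Feed sample paths through the same filter chain used above.
printf '%s\n' \
  rpmbuild/RPMS/aarch64/ceph-osd-14.2.10-0.oe1.bclinux.aarch64.rpm \
  rpmbuild/RPMS/aarch64/ceph-debuginfo-14.2.10-0.oe1.bclinux.aarch64.rpm \
  rpmbuild/RPMS/noarch/ceph-mgr-rook-14.2.10-0.oe1.bclinux.noarch.rpm \
  rpmbuild/RPMS/noarch/ceph-mgr-ssh-14.2.10-0.oe1.bclinux.noarch.rpm |
  grep '\.rpm' | grep -v debug | grep -v k8s | grep -v mgr-rook | grep -v mgr-ssh
# only the ceph-osd package survives the filter
```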

Create the ceph user and install the packages

useradd ceph
yum install -y rpmbuild/RPMS/noarch/ceph-mgr-dashboard-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-mgr-diskprediction-cloud-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-grafana-dashboards-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-mgr-diskprediction-local-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/aarch64/librgw-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librados-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-base-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-mirror-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-ceph-argparse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librbd-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-test-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rados-objclass-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mds-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-fuse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mgr-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librbd1-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-cephfs-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libcephfs2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libradospp-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-nbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libcephfs-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mon-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-radosgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-ceph-argparse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-cephfs-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rados-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-fuse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librgw2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-common-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librados2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-ceph-compat-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-osd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rados-14.2.10-0.oe1.bclinux.aarch64.rpm

Successful installation log (the screenshot is from a second reinstall)

Note: if the ceph user was not created beforehand, warnings are raised during installation

Distribute the compiled RPMs and the el8 liboath packages

On ceph-0:

rsync -avr -P liboath root@ceph-1:~/
rsync -avr -P liboath root@ceph-2:~/
rsync -avr -P liboath root@ceph-3:~/
rsync -avr -P rpmbuild/RPMS root@ceph-1:~/rpmbuild/
rsync -avr -P rpmbuild/RPMS root@ceph-2:~/rpmbuild/
rsync -avr -P rpmbuild/RPMS root@ceph-3:~/rpmbuild/
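The six rsync invocations above collapse into one loop over the target nodes (same paths as above):

```shell
# Fan out liboath and the built RPMs to ceph-1..3 in one pass.
for i in 1 2 3; do
  rsync -avr -P liboath       "root@ceph-$i:~/"
  rsync -avr -P rpmbuild/RPMS "root@ceph-$i:~/rpmbuild/"
done
```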

Log in to ceph-1, ceph-2 and ceph-3 in turn and run the following (this could later be wrapped with Ansible):

cd ~ 
rpm -ivhU liboath/liboath-*
useradd ceph
yum install -y rpmbuild/RPMS/noarch/ceph-mgr-dashboard-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-mgr-diskprediction-cloud-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-grafana-dashboards-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-mgr-diskprediction-local-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/aarch64/librgw-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librados-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-base-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-mirror-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-ceph-argparse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librbd-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-test-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rados-objclass-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mds-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-fuse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mgr-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librbd1-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-cephfs-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libcephfs2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libradospp-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-nbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libcephfs-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mon-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-radosgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-ceph-argparse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-cephfs-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rados-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-fuse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librgw2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-common-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librados2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-ceph-compat-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-osd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rados-14.2.10-0.oe1.bclinux.aarch64.rpm

Installation log

Time synchronization (ntpdate/ntpd)

On ceph-0:

yum install ntpdate

Edit /etc/ntp.conf:

driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noepeer noquery
restrict source nomodify notrap noepeer noquery
restrict 127.0.0.1 
restrict ::1
tos maxclock 5
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
server asia.pool.ntp.org

Start ntpd

systemctl enable ntpd --now

On ceph-1, ceph-2 and ceph-3:

yum install -y ntpdate

Edit /etc/ntp.conf:

driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noepeer noquery
restrict source nomodify notrap noepeer noquery
restrict 127.0.0.1 
restrict ::1
tos maxclock 5
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
server ceph-0

Start ntpd

systemctl enable ntpd --now

Deploy the mon nodes: generate ceph.conf

cd /etc/ceph/
ceph-deploy new ceph-0 ceph-1 ceph-2 ceph-3

This fails with the following error:

[ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported: bclinux 21.10U3 LTS 21.10U3

Edit /usr/lib/python2.7/site-packages/ceph_deploy/calamari.py and add bclinux to the supported distros. The diff:

[root@ceph-0 ceph_deploy]# diff calamari.py calamari.py.bak -Npr
*** calamari.py	2023-11-10 16:56:49.445013228 +0800
--- calamari.py.bak	2023-11-10 16:56:14.793013228 +0800
*************** def distro_is_supported(distro_name):
*** 13,19 ****
      An enforcer of supported distros that can differ from what ceph-deploy
      supports.
      """
!     supported = ['centos', 'redhat', 'ubuntu', 'debian', 'bclinux']
      if distro_name in supported:
          return True
      return False
--- 13,19 ----
      An enforcer of supported distros that can differ from what ceph-deploy
      supports.
      """
!     supported = ['centos', 'redhat', 'ubuntu', 'debian']
      if distro_name in supported:
          return True
      return False

Also edit /usr/lib/python2.7/site-packages/ceph_deploy/hosts/__init__.py, mapping bclinux to the centos handler:

[root@ceph-0 ceph_deploy]# diff -Npr hosts/__init__.py hosts/__init__.py.bak 
*** hosts/__init__.py	2023-11-10 17:06:27.585013228 +0800
--- hosts/__init__.py.bak	2023-11-10 17:05:48.697013228 +0800
*************** def _get_distro(distro, fallback=None, u
*** 101,107 ****
          'fedora': fedora,
          'suse': suse,
          'virtuozzo': centos,
-         'bclinux': centos,
          'arch': arch
      }
--- 101,106 ----

ceph-deploy new ceph-0 ceph-1 ceph-2 ceph-3

ceph.conf is generated successfully. The log:

[root@ceph-0 ceph]# ceph-deploy new ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffffb246c280>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-0', 'ceph-1', 'ceph-2', 'ceph-3']
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0xffffb236e9d0>
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph-0][DEBUG ] find the location of an executable
[ceph-0][INFO  ] Running command: /usr/sbin/ip link show
[ceph-0][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-0][DEBUG ] IP addresses found: [u'172.18.0.1', u'172.17.163.105']
[ceph_deploy.new][DEBUG ] Resolving host ceph-0
[ceph_deploy.new][DEBUG ] Monitor ceph-0 at 172.17.163.105
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-1][DEBUG ] connected to host: ceph-0 
[ceph-1][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-1
dhclient(1613) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
dhclient(1613) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] find the location of an executable
[ceph-1][INFO  ] Running command: /usr/sbin/ip link show
[ceph-1][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-1][DEBUG ] IP addresses found: [u'172.17.112.206']
[ceph_deploy.new][DEBUG ] Resolving host ceph-1
[ceph_deploy.new][DEBUG ] Monitor ceph-1 at 172.17.112.206
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-2][DEBUG ] connected to host: ceph-0 
[ceph-2][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-2
dhclient(1626) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
dhclient(1626) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph-2][DEBUG ] find the location of an executable
[ceph-2][INFO  ] Running command: /usr/sbin/ip link show
[ceph-2][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-2][DEBUG ] IP addresses found: [u'172.17.227.100']
[ceph_deploy.new][DEBUG ] Resolving host ceph-2
[ceph_deploy.new][DEBUG ] Monitor ceph-2 at 172.17.227.100
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-3][DEBUG ] connected to host: ceph-0 
[ceph-3][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-3
dhclient(1634) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
dhclient(1634) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph-3][DEBUG ] find the location of an executable
[ceph-3][INFO  ] Running command: /usr/sbin/ip link show
[ceph-3][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-3][DEBUG ] IP addresses found: [u'172.17.67.157']
[ceph_deploy.new][DEBUG ] Resolving host ceph-3
[ceph_deploy.new][DEBUG ] Monitor ceph-3 at 172.17.67.157
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-0', 'ceph-1', 'ceph-2', 'ceph-3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['172.17.163.105', '172.17.112.206', '172.17.227.100', '172.17.67.157']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

The auto-generated /etc/ceph/ceph.conf:

[global]
fsid = ff72b496-d036-4f1b-b2ad-55358f3c16cb
mon_initial_members = ceph-0, ceph-1, ceph-2, ceph-3
mon_host = 172.17.163.105,172.17.112.206,172.17.227.100,172.17.67.157
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Since this is only a test environment and no second network is attached, the public_network parameter is left unset for now.
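Were a dedicated public network in use, it would be declared in the [global] section before the mon deployment. A sketch; the /16 below merely covers the mon addresses of this cluster and is illustrative, not something configured in this walkthrough:

```shell
# Illustrative only: append public_network to the end of ceph.conf,
# which here still falls inside the [global] section.
cat >> /etc/ceph/ceph.conf <<'EOF'
public_network = 172.17.0.0/16
EOF
```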

Deploy the monitors

cd /etc/ceph
ceph-deploy mon create ceph-0 ceph-1 ceph-2 ceph-3

It fails:

[ceph-3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-3.asok mon_status
[ceph-3][ERROR ] Traceback (most recent call last):
[ceph-3][ERROR ]   File "/bin/ceph", line 151, in <module>
[ceph-3][ERROR ]     from ceph_daemon import admin_socket, DaemonWatcher, Termsize
[ceph-3][ERROR ]   File "/usr/lib/python2.7/site-packages/ceph_daemon.py", line 24, in <module>
[ceph-3][ERROR ]     from prettytable import PrettyTable, HEADER
[ceph-3][ERROR ] ImportError: No module named prettytable
[ceph-3][WARNIN] monitor: mon.ceph-3, might not be running yet
[ceph-3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-3.asok mon_status
[ceph-3][ERROR ] Traceback (most recent call last):
[ceph-3][ERROR ]   File "/bin/ceph", line 151, in <module>
[ceph-3][ERROR ]     from ceph_daemon import admin_socket, DaemonWatcher, Termsize
[ceph-3][ERROR ]   File "/usr/lib/python2.7/site-packages/ceph_daemon.py", line 24, in <module>
[ceph-3][ERROR ]     from prettytable import PrettyTable, HEADER
[ceph-3][ERROR ] ImportError: No module named prettytable

[ceph-3][WARNIN] monitor ceph-3 does not exist in monmap
[ceph-3][WARNIN] neither `public_addr` nor `public_network` keys are defined for monitors
[ceph-3][WARNIN] monitors may not be able to form quorum

No module named prettytable

pip install PrettyTable

Download the offline package

Distribute it to the other three machines for offline installation:

rsync -avr -P preetytable-python27 root@ceph-1:~/
rsync -avr -P preetytable-python27 root@ceph-2:~/
rsync -avr -P preetytable-python27 root@ceph-3:~/
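The offline round trip can be sketched as follows: fetch the distribution on any machine with internet access, then install from the local directory on each node. The directory name matches the rsync commands above; the exact pip invocations are an assumption, not copied from this deployment:

```shell
# On a connected machine (targeting Python 2.7 to match /usr/lib/python2.7):
pip download PrettyTable -d preetytable-python27

# On each node, after the rsync fan-out above:
pip install --no-index --find-links=preetytable-python27 PrettyTable
```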

Deploy the monitors again

cd /etc/ceph
ceph-deploy mon create ceph-0 ceph-1 ceph-2 ceph-3

Log:

[root@ceph-0 ~]# cd /etc/ceph
[root@ceph-0 ceph]# ceph-deploy mon create ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff992fb320>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-0', 'ceph-1', 'ceph-2', 'ceph-3']
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0xffff993967d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-0 ...
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph-0][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: bclinux 21.10U3 21.10U3 LTS
[ceph-0][DEBUG ] determining if provided host has same hostname in remote
[ceph-0][DEBUG ] get remote short hostname
[ceph-0][DEBUG ] deploying mon to ceph-0
[ceph-0][DEBUG ] get remote short hostname
[ceph-0][DEBUG ] remote hostname: ceph-0
[ceph-0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-0][DEBUG ] create the mon path if it does not exist
[ceph-0][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-0/done
[ceph-0][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-0][DEBUG ] create the init path if it does not exist
[ceph-0][INFO  ] Running command: systemctl enable ceph.target
[ceph-0][INFO  ] Running command: systemctl enable ceph-mon@ceph-0
[ceph-0][INFO  ] Running command: systemctl start ceph-mon@ceph-0
[ceph-0][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-0.asok mon_status
[ceph-0][DEBUG ] ********************************************************************************
[ceph-0][DEBUG ] status for monitor: mon.ceph-0
[ceph-0][DEBUG ] {
[ceph-0][DEBUG ]   "election_epoch": 8, 
[ceph-0][DEBUG ]   "extra_probe_peers": [
[ceph-0][DEBUG ]     {
[ceph-0][DEBUG ]       "addrvec": [
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.67.157:3300", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v2"
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.67.157:6789", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v1"
[ceph-0][DEBUG ]         }
[ceph-0][DEBUG ]       ]
[ceph-0][DEBUG ]     }, 
[ceph-0][DEBUG ]     {
[ceph-0][DEBUG ]       "addrvec": [
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.112.206:3300", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v2"
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.112.206:6789", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v1"
[ceph-0][DEBUG ]         }
[ceph-0][DEBUG ]       ]
[ceph-0][DEBUG ]     }, 
[ceph-0][DEBUG ]     {
[ceph-0][DEBUG ]       "addrvec": [
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.227.100:3300", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v2"
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.227.100:6789", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v1"
[ceph-0][DEBUG ]         }
[ceph-0][DEBUG ]       ]
[ceph-0][DEBUG ]     }
[ceph-0][DEBUG ]   ], 
[ceph-0][DEBUG ]   "feature_map": {
[ceph-0][DEBUG ]     "mon": [
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "features": "0x3ffddff8ffacffff", 
[ceph-0][DEBUG ]         "num": 1, 
[ceph-0][DEBUG ]         "release": "luminous"
[ceph-0][DEBUG ]       }
[ceph-0][DEBUG ]     ]
[ceph-0][DEBUG ]   }, 
[ceph-0][DEBUG ]   "features": {
[ceph-0][DEBUG ]     "quorum_con": "4611087854031667199", 
[ceph-0][DEBUG ]     "quorum_mon": [
[ceph-0][DEBUG ]       "kraken", 
[ceph-0][DEBUG ]       "luminous", 
[ceph-0][DEBUG ]       "mimic", 
[ceph-0][DEBUG ]       "osdmap-prune", 
[ceph-0][DEBUG ]       "nautilus"
[ceph-0][DEBUG ]     ], 
[ceph-0][DEBUG ]     "required_con": "2449958747315912708", 
[ceph-0][DEBUG ]     "required_mon": [
[ceph-0][DEBUG ]       "kraken", 
[ceph-0][DEBUG ]       "luminous", 
[ceph-0][DEBUG ]       "mimic", 
[ceph-0][DEBUG ]       "osdmap-prune", 
[ceph-0][DEBUG ]       "nautilus"
[ceph-0][DEBUG ]     ]
[ceph-0][DEBUG ]   }, 
[ceph-0][DEBUG ]   "monmap": {
[ceph-0][DEBUG ]     "created": "2023-11-11 09:54:05.372287", 
[ceph-0][DEBUG ]     "epoch": 1, 
[ceph-0][DEBUG ]     "features": {
[ceph-0][DEBUG ]       "optional": [], 
[ceph-0][DEBUG ]       "persistent": [
[ceph-0][DEBUG ]         "kraken", 
[ceph-0][DEBUG ]         "luminous", 
[ceph-0][DEBUG ]         "mimic", 
[ceph-0][DEBUG ]         "osdmap-prune", 
[ceph-0][DEBUG ]         "nautilus"
[ceph-0][DEBUG ]       ]
[ceph-0][DEBUG ]     }, 
[ceph-0][DEBUG ]     "fsid": "ff72b496-d036-4f1b-b2ad-55358f3c16cb", 
[ceph-0][DEBUG ]     "min_mon_release": 14, 
[ceph-0][DEBUG ]     "min_mon_release_name": "nautilus", 
[ceph-0][DEBUG ]     "modified": "2023-11-11 09:54:05.372287", 
[ceph-0][DEBUG ]     "mons": [
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "addr": "172.17.67.157:6789/0", 
[ceph-0][DEBUG ]         "name": "ceph-3", 
[ceph-0][DEBUG ]         "public_addr": "172.17.67.157:6789/0", 
[ceph-0][DEBUG ]         "public_addrs": {
[ceph-0][DEBUG ]           "addrvec": [
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.67.157:3300", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v2"
[ceph-0][DEBUG ]             }, 
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.67.157:6789", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v1"
[ceph-0][DEBUG ]             }
[ceph-0][DEBUG ]           ]
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         "rank": 0
[ceph-0][DEBUG ]       }, 
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "addr": "172.17.112.206:6789/0", 
[ceph-0][DEBUG ]         "name": "ceph-1", 
[ceph-0][DEBUG ]         "public_addr": "172.17.112.206:6789/0", 
[ceph-0][DEBUG ]         "public_addrs": {
[ceph-0][DEBUG ]           "addrvec": [
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.112.206:3300", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v2"
[ceph-0][DEBUG ]             }, 
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.112.206:6789", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v1"
[ceph-0][DEBUG ]             }
[ceph-0][DEBUG ]           ]
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         "rank": 1
[ceph-0][DEBUG ]       }, 
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "addr": "172.17.163.105:6789/0", 
[ceph-0][DEBUG ]         "name": "ceph-0", 
[ceph-0][DEBUG ]         "public_addr": "172.17.163.105:6789/0", 
[ceph-0][DEBUG ]         "public_addrs": {
[ceph-0][DEBUG ]           "addrvec": [
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.163.105:3300", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v2"
[ceph-0][DEBUG ]             }, 
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.163.105:6789", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v1"
[ceph-0][DEBUG ]             }
[ceph-0][DEBUG ]           ]
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         "rank": 2
[ceph-0][DEBUG ]       }, 
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "addr": "172.17.227.100:6789/0", 
[ceph-0][DEBUG ]         "name": "ceph-2", 
[ceph-0][DEBUG ]         "public_addr": "172.17.227.100:6789/0", 
[ceph-0][DEBUG ]         "public_addrs": {
[ceph-0][DEBUG ]           "addrvec": [
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.227.100:3300", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v2"
[ceph-0][DEBUG ]             }, 
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.227.100:6789", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v1"
[ceph-0][DEBUG ]             }
[ceph-0][DEBUG ]           ]
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         "rank": 3
[ceph-0][DEBUG ]       }
[ceph-0][DEBUG ]     ]
[ceph-0][DEBUG ]   }, 
[ceph-0][DEBUG ]   "name": "ceph-0", 
[ceph-0][DEBUG ]   "outside_quorum": [], 
[ceph-0][DEBUG ]   "quorum": [
[ceph-0][DEBUG ]     0, 
[ceph-0][DEBUG ]     1, 
[ceph-0][DEBUG ]     2, 
[ceph-0][DEBUG ]     3
[ceph-0][DEBUG ]   ], 
[ceph-0][DEBUG ]   "quorum_age": 917, 
[ceph-0][DEBUG ]   "rank": 2, 
[ceph-0][DEBUG ]   "state": "peon", 
[ceph-0][DEBUG ]   "sync_provider": []
[ceph-0][DEBUG ] }
[ceph-0][DEBUG ] ********************************************************************************
[ceph-0][INFO  ] monitor: mon.ceph-0 is running
[ceph-0][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-0.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-1 ...
dhclient(1613) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
dhclient(1613) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: bclinux 21.10U3 21.10U3 LTS
[ceph-1][DEBUG ] determining if provided host has same hostname in remote
[ceph-1][DEBUG ] get remote short hostname
[ceph-1][DEBUG ] deploying mon to ceph-1
[ceph-1][DEBUG ] get remote short hostname
[ceph-1][DEBUG ] remote hostname: ceph-1
[ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-1][DEBUG ] create the mon path if it does not exist
[ceph-1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-1/done
[ceph-1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-1][DEBUG ] create the init path if it does not exist
[ceph-1][INFO  ] Running command: systemctl enable ceph.target
[ceph-1][INFO  ] Running command: systemctl enable ceph-mon@ceph-1
[ceph-1][INFO  ] Running command: systemctl start ceph-mon@ceph-1
[ceph-1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-1.asok mon_status
[ceph-1][DEBUG ] ********************************************************************************
[ceph-1][DEBUG ] status for monitor: mon.ceph-1
[ceph-1][DEBUG ] {
[ceph-1][DEBUG ]   "election_epoch": 8, 
[ceph-1][DEBUG ]   "extra_probe_peers": [
[ceph-1][DEBUG ]     {
[ceph-1][DEBUG ]       "addrvec": [
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.67.157:3300", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v2"
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.67.157:6789", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v1"
[ceph-1][DEBUG ]         }
[ceph-1][DEBUG ]       ]
[ceph-1][DEBUG ]     }, 
[ceph-1][DEBUG ]     {
[ceph-1][DEBUG ]       "addrvec": [
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.163.105:3300", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v2"
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.163.105:6789", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v1"
[ceph-1][DEBUG ]         }
[ceph-1][DEBUG ]       ]
[ceph-1][DEBUG ]     }, 
[ceph-1][DEBUG ]     {
[ceph-1][DEBUG ]       "addrvec": [
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.227.100:3300", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v2"
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.227.100:6789", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v1"
[ceph-1][DEBUG ]         }
[ceph-1][DEBUG ]       ]
[ceph-1][DEBUG ]     }
[ceph-1][DEBUG ]   ], 
[ceph-1][DEBUG ]   "feature_map": {
[ceph-1][DEBUG ]     "mon": [
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "features": "0x3ffddff8ffacffff", 
[ceph-1][DEBUG ]         "num": 1, 
[ceph-1][DEBUG ]         "release": "luminous"
[ceph-1][DEBUG ]       }
[ceph-1][DEBUG ]     ]
[ceph-1][DEBUG ]   }, 
[ceph-1][DEBUG ]   "features": {
[ceph-1][DEBUG ]     "quorum_con": "4611087854031667199", 
[ceph-1][DEBUG ]     "quorum_mon": [
[ceph-1][DEBUG ]       "kraken", 
[ceph-1][DEBUG ]       "luminous", 
[ceph-1][DEBUG ]       "mimic", 
[ceph-1][DEBUG ]       "osdmap-prune", 
[ceph-1][DEBUG ]       "nautilus"
[ceph-1][DEBUG ]     ], 
[ceph-1][DEBUG ]     "required_con": "2449958747315912708", 
[ceph-1][DEBUG ]     "required_mon": [
[ceph-1][DEBUG ]       "kraken", 
[ceph-1][DEBUG ]       "luminous", 
[ceph-1][DEBUG ]       "mimic", 
[ceph-1][DEBUG ]       "osdmap-prune", 
[ceph-1][DEBUG ]       "nautilus"
[ceph-1][DEBUG ]     ]
[ceph-1][DEBUG ]   }, 
[ceph-1][DEBUG ]   "monmap": {
[ceph-1][DEBUG ]     "created": "2023-11-11 09:54:05.372287", 
[ceph-1][DEBUG ]     "epoch": 1, 
[ceph-1][DEBUG ]     "features": {
[ceph-1][DEBUG ]       "optional": [], 
[ceph-1][DEBUG ]       "persistent": [
[ceph-1][DEBUG ]         "kraken", 
[ceph-1][DEBUG ]         "luminous", 
[ceph-1][DEBUG ]         "mimic", 
[ceph-1][DEBUG ]         "osdmap-prune", 
[ceph-1][DEBUG ]         "nautilus"
[ceph-1][DEBUG ]       ]
[ceph-1][DEBUG ]     }, 
[ceph-1][DEBUG ]     "fsid": "ff72b496-d036-4f1b-b2ad-55358f3c16cb", 
[ceph-1][DEBUG ]     "min_mon_release": 14, 
[ceph-1][DEBUG ]     "min_mon_release_name": "nautilus", 
[ceph-1][DEBUG ]     "modified": "2023-11-11 09:54:05.372287", 
[ceph-1][DEBUG ]     "mons": [
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "addr": "172.17.67.157:6789/0", 
[ceph-1][DEBUG ]         "name": "ceph-3", 
[ceph-1][DEBUG ]         "public_addr": "172.17.67.157:6789/0", 
[ceph-1][DEBUG ]         "public_addrs": {
[ceph-1][DEBUG ]           "addrvec": [
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.67.157:3300", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v2"
[ceph-1][DEBUG ]             }, 
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.67.157:6789", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v1"
[ceph-1][DEBUG ]             }
[ceph-1][DEBUG ]           ]
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         "rank": 0
[ceph-1][DEBUG ]       }, 
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "addr": "172.17.112.206:6789/0", 
[ceph-1][DEBUG ]         "name": "ceph-1", 
[ceph-1][DEBUG ]         "public_addr": "172.17.112.206:6789/0", 
[ceph-1][DEBUG ]         "public_addrs": {
[ceph-1][DEBUG ]           "addrvec": [
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.112.206:3300", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v2"
[ceph-1][DEBUG ]             }, 
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.112.206:6789", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v1"
[ceph-1][DEBUG ]             }
[ceph-1][DEBUG ]           ]
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         "rank": 1
[ceph-1][DEBUG ]       }, 
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "addr": "172.17.163.105:6789/0", 
[ceph-1][DEBUG ]         "name": "ceph-0", 
[ceph-1][DEBUG ]         "public_addr": "172.17.163.105:6789/0", 
[ceph-1][DEBUG ]         "public_addrs": {
[ceph-1][DEBUG ]           "addrvec": [
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.163.105:3300", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v2"
[ceph-1][DEBUG ]             }, 
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.163.105:6789", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v1"
[ceph-1][DEBUG ]             }
[ceph-1][DEBUG ]           ]
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         "rank": 2
[ceph-1][DEBUG ]       }, 
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "addr": "172.17.227.100:6789/0", 
[ceph-1][DEBUG ]         "name": "ceph-2", 
[ceph-1][DEBUG ]         "public_addr": "172.17.227.100:6789/0", 
[ceph-1][DEBUG ]         "public_addrs": {
[ceph-1][DEBUG ]           "addrvec": [
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.227.100:3300", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v2"
[ceph-1][DEBUG ]             }, 
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.227.100:6789", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v1"
[ceph-1][DEBUG ]             }
[ceph-1][DEBUG ]           ]
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         "rank": 3
[ceph-1][DEBUG ]       }
[ceph-1][DEBUG ]     ]
[ceph-1][DEBUG ]   }, 
[ceph-1][DEBUG ]   "name": "ceph-1", 
[ceph-1][DEBUG ]   "outside_quorum": [], 
[ceph-1][DEBUG ]   "quorum": [
[ceph-1][DEBUG ]     0, 
[ceph-1][DEBUG ]     1, 
[ceph-1][DEBUG ]     2, 
[ceph-1][DEBUG ]     3
[ceph-1][DEBUG ]   ], 
[ceph-1][DEBUG ]   "quorum_age": 921, 
[ceph-1][DEBUG ]   "rank": 1, 
[ceph-1][DEBUG ]   "state": "peon", 
[ceph-1][DEBUG ]   "sync_provider": []
[ceph-1][DEBUG ] }
[ceph-1][DEBUG ] ********************************************************************************
[ceph-1][INFO  ] monitor: mon.ceph-1 is running
[ceph-1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-1.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-2 ...
dhclient(1626) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph-2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: bclinux 21.10U3 21.10U3 LTS
[ceph-2][DEBUG ] determining if provided host has same hostname in remote
[ceph-2][DEBUG ] get remote short hostname
[ceph-2][DEBUG ] deploying mon to ceph-2
[ceph-2][DEBUG ] get remote short hostname
[ceph-2][DEBUG ] remote hostname: ceph-2
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-2][DEBUG ] create the mon path if it does not exist
[ceph-2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-2/done
[ceph-2][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-2][DEBUG ] create the init path if it does not exist
[ceph-2][INFO  ] Running command: systemctl enable ceph.target
[ceph-2][INFO  ] Running command: systemctl enable ceph-mon@ceph-2
[ceph-2][INFO  ] Running command: systemctl start ceph-mon@ceph-2
[ceph-2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-2.asok mon_status
[ceph-2][DEBUG ] ********************************************************************************
[ceph-2][DEBUG ] status for monitor: mon.ceph-2
[ceph-2][DEBUG ] {
[ceph-2][DEBUG ]   "election_epoch": 8, 
[ceph-2][DEBUG ]   "extra_probe_peers": [
[ceph-2][DEBUG ]     {
[ceph-2][DEBUG ]       "addrvec": [
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.67.157:3300", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v2"
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.67.157:6789", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v1"
[ceph-2][DEBUG ]         }
[ceph-2][DEBUG ]       ]
[ceph-2][DEBUG ]     }, 
[ceph-2][DEBUG ]     {
[ceph-2][DEBUG ]       "addrvec": [
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.112.206:3300", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v2"
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.112.206:6789", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v1"
[ceph-2][DEBUG ]         }
[ceph-2][DEBUG ]       ]
[ceph-2][DEBUG ]     }, 
[ceph-2][DEBUG ]     {
[ceph-2][DEBUG ]       "addrvec": [
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.163.105:3300", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v2"
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.163.105:6789", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v1"
[ceph-2][DEBUG ]         }
[ceph-2][DEBUG ]       ]
[ceph-2][DEBUG ]     }
[ceph-2][DEBUG ]   ], 
[ceph-2][DEBUG ]   "feature_map": {
[ceph-2][DEBUG ]     "mon": [
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "features": "0x3ffddff8ffacffff", 
[ceph-2][DEBUG ]         "num": 1, 
[ceph-2][DEBUG ]         "release": "luminous"
[ceph-2][DEBUG ]       }
[ceph-2][DEBUG ]     ]
[ceph-2][DEBUG ]   }, 
[ceph-2][DEBUG ]   "features": {
[ceph-2][DEBUG ]     "quorum_con": "4611087854031667199", 
[ceph-2][DEBUG ]     "quorum_mon": [
[ceph-2][DEBUG ]       "kraken", 
[ceph-2][DEBUG ]       "luminous", 
[ceph-2][DEBUG ]       "mimic", 
[ceph-2][DEBUG ]       "osdmap-prune", 
[ceph-2][DEBUG ]       "nautilus"
[ceph-2][DEBUG ]     ], 
[ceph-2][DEBUG ]     "required_con": "2449958747315912708", 
[ceph-2][DEBUG ]     "required_mon": [
[ceph-2][DEBUG ]       "kraken", 
[ceph-2][DEBUG ]       "luminous", 
[ceph-2][DEBUG ]       "mimic", 
[ceph-2][DEBUG ]       "osdmap-prune", 
[ceph-2][DEBUG ]       "nautilus"
[ceph-2][DEBUG ]     ]
[ceph-2][DEBUG ]   }, 
[ceph-2][DEBUG ]   "monmap": {
[ceph-2][DEBUG ]     "created": "2023-11-11 09:54:05.372287", 
[ceph-2][DEBUG ]     "epoch": 1, 
[ceph-2][DEBUG ]     "features": {
[ceph-2][DEBUG ]       "optional": [], 
[ceph-2][DEBUG ]       "persistent": [
[ceph-2][DEBUG ]         "kraken", 
[ceph-2][DEBUG ]         "luminous", 
[ceph-2][DEBUG ]         "mimic", 
[ceph-2][DEBUG ]         "osdmap-prune", 
[ceph-2][DEBUG ]         "nautilus"
[ceph-2][DEBUG ]       ]
[ceph-2][DEBUG ]     }, 
[ceph-2][DEBUG ]     "fsid": "ff72b496-d036-4f1b-b2ad-55358f3c16cb", 
[ceph-2][DEBUG ]     "min_mon_release": 14, 
[ceph-2][DEBUG ]     "min_mon_release_name": "nautilus", 
[ceph-2][DEBUG ]     "modified": "2023-11-11 09:54:05.372287", 
[ceph-2][DEBUG ]     "mons": [
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "addr": "172.17.67.157:6789/0", 
[ceph-2][DEBUG ]         "name": "ceph-3", 
[ceph-2][DEBUG ]         "public_addr": "172.17.67.157:6789/0", 
[ceph-2][DEBUG ]         "public_addrs": {
[ceph-2][DEBUG ]           "addrvec": [
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.67.157:3300", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v2"
[ceph-2][DEBUG ]             }, 
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.67.157:6789", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v1"
[ceph-2][DEBUG ]             }
[ceph-2][DEBUG ]           ]
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         "rank": 0
[ceph-2][DEBUG ]       }, 
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "addr": "172.17.112.206:6789/0", 
[ceph-2][DEBUG ]         "name": "ceph-1", 
[ceph-2][DEBUG ]         "public_addr": "172.17.112.206:6789/0", 
[ceph-2][DEBUG ]         "public_addrs": {
[ceph-2][DEBUG ]           "addrvec": [
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.112.206:3300", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v2"
[ceph-2][DEBUG ]             }, 
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.112.206:6789", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v1"
[ceph-2][DEBUG ]             }
[ceph-2][DEBUG ]           ]
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         "rank": 1
[ceph-2][DEBUG ]       }, 
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "addr": "172.17.163.105:6789/0", 
[ceph-2][DEBUG ]         "name": "ceph-0", 
[ceph-2][DEBUG ]         "public_addr": "172.17.163.105:6789/0", 
[ceph-2][DEBUG ]         "public_addrs": {
[ceph-2][DEBUG ]           "addrvec": [
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.163.105:3300", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v2"
[ceph-2][DEBUG ]             }, 
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.163.105:6789", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v1"
[ceph-2][DEBUG ]             }
[ceph-2][DEBUG ]           ]
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         "rank": 2
[ceph-2][DEBUG ]       }, 
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "addr": "172.17.227.100:6789/0", 
[ceph-2][DEBUG ]         "name": "ceph-2", 
[ceph-2][DEBUG ]         "public_addr": "172.17.227.100:6789/0", 
[ceph-2][DEBUG ]         "public_addrs": {
[ceph-2][DEBUG ]           "addrvec": [
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.227.100:3300", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v2"
[ceph-2][DEBUG ]             }, 
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.227.100:6789", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v1"
[ceph-2][DEBUG ]             }
[ceph-2][DEBUG ]           ]
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         "rank": 3
[ceph-2][DEBUG ]       }
[ceph-2][DEBUG ]     ]
[ceph-2][DEBUG ]   }, 
[ceph-2][DEBUG ]   "name": "ceph-2", 
[ceph-2][DEBUG ]   "outside_quorum": [], 
[ceph-2][DEBUG ]   "quorum": [
[ceph-2][DEBUG ]     0, 
[ceph-2][DEBUG ]     1, 
[ceph-2][DEBUG ]     2, 
[ceph-2][DEBUG ]     3
[ceph-2][DEBUG ]   ], 
[ceph-2][DEBUG ]   "quorum_age": 926, 
[ceph-2][DEBUG ]   "rank": 3, 
[ceph-2][DEBUG ]   "state": "peon", 
[ceph-2][DEBUG ]   "sync_provider": []
[ceph-2][DEBUG ] }
[ceph-2][DEBUG ] ********************************************************************************
[ceph-2][INFO  ] monitor: mon.ceph-2 is running
[ceph-2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-2.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-3 ...
dhclient(1634) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph-3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: bclinux 21.10U3 21.10U3 LTS
[ceph-3][DEBUG ] determining if provided host has same hostname in remote
[ceph-3][DEBUG ] get remote short hostname
[ceph-3][DEBUG ] deploying mon to ceph-3
[ceph-3][DEBUG ] get remote short hostname
[ceph-3][DEBUG ] remote hostname: ceph-3
[ceph-3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-3][DEBUG ] create the mon path if it does not exist
[ceph-3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-3/done
[ceph-3][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-3][DEBUG ] create the init path if it does not exist
[ceph-3][INFO  ] Running command: systemctl enable ceph.target
[ceph-3][INFO  ] Running command: systemctl enable ceph-mon@ceph-3
[ceph-3][INFO  ] Running command: systemctl start ceph-mon@ceph-3
[ceph-3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-3.asok mon_status
[ceph-3][DEBUG ] ********************************************************************************
[ceph-3][DEBUG ] status for monitor: mon.ceph-3
[ceph-3][DEBUG ] {
[ceph-3][DEBUG ]   "election_epoch": 8, 
[ceph-3][DEBUG ]   "extra_probe_peers": [
[ceph-3][DEBUG ]     {
[ceph-3][DEBUG ]       "addrvec": [
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.112.206:3300", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v2"
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.112.206:6789", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v1"
[ceph-3][DEBUG ]         }
[ceph-3][DEBUG ]       ]
[ceph-3][DEBUG ]     }, 
[ceph-3][DEBUG ]     {
[ceph-3][DEBUG ]       "addrvec": [
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.163.105:3300", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v2"
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.163.105:6789", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v1"
[ceph-3][DEBUG ]         }
[ceph-3][DEBUG ]       ]
[ceph-3][DEBUG ]     }, 
[ceph-3][DEBUG ]     {
[ceph-3][DEBUG ]       "addrvec": [
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.227.100:3300", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v2"
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.227.100:6789", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v1"
[ceph-3][DEBUG ]         }
[ceph-3][DEBUG ]       ]
[ceph-3][DEBUG ]     }
[ceph-3][DEBUG ]   ], 
[ceph-3][DEBUG ]   "feature_map": {
[ceph-3][DEBUG ]     "mon": [
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "features": "0x3ffddff8ffacffff", 
[ceph-3][DEBUG ]         "num": 1, 
[ceph-3][DEBUG ]         "release": "luminous"
[ceph-3][DEBUG ]       }
[ceph-3][DEBUG ]     ]
[ceph-3][DEBUG ]   }, 
[ceph-3][DEBUG ]   "features": {
[ceph-3][DEBUG ]     "quorum_con": "4611087854031667199", 
[ceph-3][DEBUG ]     "quorum_mon": [
[ceph-3][DEBUG ]       "kraken", 
[ceph-3][DEBUG ]       "luminous", 
[ceph-3][DEBUG ]       "mimic", 
[ceph-3][DEBUG ]       "osdmap-prune", 
[ceph-3][DEBUG ]       "nautilus"
[ceph-3][DEBUG ]     ], 
[ceph-3][DEBUG ]     "required_con": "2449958747315912708", 
[ceph-3][DEBUG ]     "required_mon": [
[ceph-3][DEBUG ]       "kraken", 
[ceph-3][DEBUG ]       "luminous", 
[ceph-3][DEBUG ]       "mimic", 
[ceph-3][DEBUG ]       "osdmap-prune", 
[ceph-3][DEBUG ]       "nautilus"
[ceph-3][DEBUG ]     ]
[ceph-3][DEBUG ]   }, 
[ceph-3][DEBUG ]   "monmap": {
[ceph-3][DEBUG ]     "created": "2023-11-11 09:54:05.372287", 
[ceph-3][DEBUG ]     "epoch": 1, 
[ceph-3][DEBUG ]     "features": {
[ceph-3][DEBUG ]       "optional": [], 
[ceph-3][DEBUG ]       "persistent": [
[ceph-3][DEBUG ]         "kraken", 
[ceph-3][DEBUG ]         "luminous", 
[ceph-3][DEBUG ]         "mimic", 
[ceph-3][DEBUG ]         "osdmap-prune", 
[ceph-3][DEBUG ]         "nautilus"
[ceph-3][DEBUG ]       ]
[ceph-3][DEBUG ]     }, 
[ceph-3][DEBUG ]     "fsid": "ff72b496-d036-4f1b-b2ad-55358f3c16cb", 
[ceph-3][DEBUG ]     "min_mon_release": 14, 
[ceph-3][DEBUG ]     "min_mon_release_name": "nautilus", 
[ceph-3][DEBUG ]     "modified": "2023-11-11 09:54:05.372287", 
[ceph-3][DEBUG ]     "mons": [
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "addr": "172.17.67.157:6789/0", 
[ceph-3][DEBUG ]         "name": "ceph-3", 
[ceph-3][DEBUG ]         "public_addr": "172.17.67.157:6789/0", 
[ceph-3][DEBUG ]         "public_addrs": {
[ceph-3][DEBUG ]           "addrvec": [
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.67.157:3300", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v2"
[ceph-3][DEBUG ]             }, 
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.67.157:6789", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v1"
[ceph-3][DEBUG ]             }
[ceph-3][DEBUG ]           ]
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         "rank": 0
[ceph-3][DEBUG ]       }, 
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "addr": "172.17.112.206:6789/0", 
[ceph-3][DEBUG ]         "name": "ceph-1", 
[ceph-3][DEBUG ]         "public_addr": "172.17.112.206:6789/0", 
[ceph-3][DEBUG ]         "public_addrs": {
[ceph-3][DEBUG ]           "addrvec": [
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.112.206:3300", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v2"
[ceph-3][DEBUG ]             }, 
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.112.206:6789", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v1"
[ceph-3][DEBUG ]             }
[ceph-3][DEBUG ]           ]
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         "rank": 1
[ceph-3][DEBUG ]       }, 
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "addr": "172.17.163.105:6789/0", 
[ceph-3][DEBUG ]         "name": "ceph-0", 
[ceph-3][DEBUG ]         "public_addr": "172.17.163.105:6789/0", 
[ceph-3][DEBUG ]         "public_addrs": {
[ceph-3][DEBUG ]           "addrvec": [
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.163.105:3300", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v2"
[ceph-3][DEBUG ]             }, 
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.163.105:6789", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v1"
[ceph-3][DEBUG ]             }
[ceph-3][DEBUG ]           ]
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         "rank": 2
[ceph-3][DEBUG ]       }, 
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "addr": "172.17.227.100:6789/0", 
[ceph-3][DEBUG ]         "name": "ceph-2", 
[ceph-3][DEBUG ]         "public_addr": "172.17.227.100:6789/0", 
[ceph-3][DEBUG ]         "public_addrs": {
[ceph-3][DEBUG ]           "addrvec": [
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.227.100:3300", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v2"
[ceph-3][DEBUG ]             }, 
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.227.100:6789", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v1"
[ceph-3][DEBUG ]             }
[ceph-3][DEBUG ]           ]
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         "rank": 3
[ceph-3][DEBUG ]       }
[ceph-3][DEBUG ]     ]
[ceph-3][DEBUG ]   }, 
[ceph-3][DEBUG ]   "name": "ceph-3", 
[ceph-3][DEBUG ]   "outside_quorum": [], 
[ceph-3][DEBUG ]   "quorum": [
[ceph-3][DEBUG ]     0, 
[ceph-3][DEBUG ]     1, 
[ceph-3][DEBUG ]     2, 
[ceph-3][DEBUG ]     3
[ceph-3][DEBUG ]   ], 
[ceph-3][DEBUG ]   "quorum_age": 931, 
[ceph-3][DEBUG ]   "rank": 0, 
[ceph-3][DEBUG ]   "state": "leader", 
[ceph-3][DEBUG ]   "sync_provider": []
[ceph-3][DEBUG ] }
[ceph-3][DEBUG ] ********************************************************************************
[ceph-3][INFO  ] monitor: mon.ceph-3 is running
[ceph-3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-3.asok mon_status

[errno 2] error connecting to the cluster

No key yet? Move on to the next step and check again.
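The [errno 2] here usually means the ceph CLI cannot find ceph.conf or an admin keyring yet, which matches the guess above. A small check sketch (the paths are the standard ceph client defaults; the helper name is invented):

```shell
# Report whether the files the ceph CLI needs are present.
# With no arguments it checks the standard client paths.
check_ceph_client_files() {
    if [ $# -eq 0 ]; then
        set -- /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
    fi
    local f missing=0
    for f in "$@"; do
        if [ -e "$f" ]; then
            echo "found    $f"
        else
            echo "missing  $f"
            missing=1
        fi
    done
    return "$missing"
}
```

Run it on the node where the ceph command failed; anything reported missing explains the connection error.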

Gather the keys

ceph-deploy gatherkeys ceph-0 ceph-1 ceph-2 ceph-3

Log

ceph -s now shows the daemons running
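gatherkeys pulls the admin and bootstrap keyrings into the deploy directory; a quick sketch to confirm they all arrived (the filenames are the standard ceph-deploy outputs; the helper name is invented):

```shell
# List the keyrings ceph-deploy gatherkeys is expected to write
# into the deploy directory (default: current directory).
check_gathered_keyrings() {
    local dir=${1:-.} k missing=0
    for k in ceph.client.admin.keyring \
             ceph.mon.keyring \
             ceph.bootstrap-osd.keyring \
             ceph.bootstrap-mds.keyring \
             ceph.bootstrap-mgr.keyring \
             ceph.bootstrap-rgw.keyring; do
        if [ -f "$dir/$k" ]; then
            echo "OK       $k"
        else
            echo "MISSING  $k"
            missing=1
        fi
    done
    return "$missing"
}
```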

Deploy the admin node

ceph-deploy admin ceph-0 ceph-1 ceph-2 ceph-3
[root@ceph-0 ceph]# ceph-deploy admin ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff91add0f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-0', 'ceph-1', 'ceph-2', 'ceph-3']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0xffff91c777d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-0
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph-0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-1
dhclient(1613) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-2
dhclient(1626) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-3
dhclient(1634) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph-3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
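One step the log does not show: the admin keyring ceph-deploy pushes to /etc/ceph/ceph.client.admin.keyring is readable only by root, so ceph commands run as other users can fail with permission errors. A sketch of the usual fix (the helper name is invented; the path is the standard ceph-deploy target):

```shell
# Allow non-root users to read the admin keyring so 'ceph -s' works for them.
fix_admin_keyring_mode() {
    local keyring=${1:-/etc/ceph/ceph.client.admin.keyring}
    chmod a+r "$keyring"
}

# Run on each node:
# fix_admin_keyring_mode
```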

ceph -s

Deploy the OSDs

ceph-deploy osd create ceph-0 --data /dev/vdb
ceph-deploy osd create ceph-1 --data /dev/vdb
ceph-deploy osd create ceph-2 --data /dev/vdb
ceph-deploy osd create ceph-3 --data /dev/vdb
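The four invocations above can be collapsed into a loop; the hostnames and the /dev/vdb data device follow this deployment. The runner argument is only there so the loop can be dry-run with echo; pass ceph-deploy to perform the real creation:

```shell
# Create one bluestore OSD per host on its /dev/vdb data disk.
# $1 is the command to run (ceph-deploy for real, echo for a dry run);
# the remaining arguments are the target hosts.
create_osds() {
    local runner=$1 host
    shift
    for host in "$@"; do
        "$runner" osd create "$host" --data /dev/vdb
    done
}

# Real run:
# create_osds ceph-deploy ceph-0 ceph-1 ceph-2 ceph-3
```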

Logs for ceph-0 through ceph-3

[root@ceph-0 ceph]# ceph-deploy osd create ceph-0 --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-0 --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff9cc8cd20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-0
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff9cd1bed0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph-0][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-0
[ceph-0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-0][WARNIN] osd keyring does not exist yet, creating one
[ceph-0][DEBUG ] create a keyring file
[ceph-0][DEBUG ] find the location of an executable
[ceph-0][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-0][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-0][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c1870346-8e19-4788-b1dd-19bd75d6ec2f
[ceph-0][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-837353d8-91ff-4418-bc8f-a655d94049d4 /dev/vdb
[ceph-0][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-0][WARNIN]  stdout: Volume group "ceph-837353d8-91ff-4418-bc8f-a655d94049d4" successfully created
[ceph-0][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 25599 -n osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f ceph-837353d8-91ff-4418-bc8f-a655d94049d4
[ceph-0][WARNIN]  stdout: Logical volume "osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f" created.
[ceph-0][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-0][WARNIN] Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph-0][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-837353d8-91ff-4418-bc8f-a655d94049d4/osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
[ceph-0][WARNIN] Running command: /usr/bin/ln -s /dev/ceph-837353d8-91ff-4418-bc8f-a655d94049d4/osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f /var/lib/ceph/osd/ceph-0/block
[ceph-0][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph-0][WARNIN]  stderr: 2023-11-11 10:48:34.800 ffff843261e0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-0][WARNIN] 2023-11-11 10:48:34.800 ffff843261e0 -1 AuthRegistry(0xffff7c081d58) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-0][WARNIN]  stderr: got monmap epoch 1
[ceph-0][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQB/605l8419HxAAhIoXMxEJCV5J6qOB8AyHrw==
[ceph-0][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph-0][WARNIN] added entity osd.0 auth(key=AQB/605l8419HxAAhIoXMxEJCV5J6qOB8AyHrw==)
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph-0][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid c1870346-8e19-4788-b1dd-19bd75d6ec2f --setuser ceph --setgroup ceph
[ceph-0][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-0][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-837353d8-91ff-4418-bc8f-a655d94049d4/osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[ceph-0][WARNIN] Running command: /usr/bin/ln -snf /dev/ceph-837353d8-91ff-4418-bc8f-a655d94049d4/osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f /var/lib/ceph/osd/ceph-0/block
[ceph-0][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-0][WARNIN] Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-c1870346-8e19-4788-b1dd-19bd75d6ec2f
[ceph-0][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-c1870346-8e19-4788-b1dd-19bd75d6ec2f.service → /usr/lib/systemd/system/ceph-volume@.service.
[ceph-0][WARNIN] Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
[ceph-0][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
[ceph-0][WARNIN] Running command: /usr/bin/systemctl start ceph-osd@0
[ceph-0][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[ceph-0][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-0][INFO  ] checking OSD status...
[ceph-0][DEBUG ] find the location of an executable
[ceph-0][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-0 is now ready for osd use.
[root@ceph-0 ceph]# ceph-deploy osd create ceph-1 --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-1 --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff87d9ed20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-1
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff87e2ded0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
dhclient(1613) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-1
[ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-1][WARNIN] osd keyring does not exist yet, creating one
[ceph-1][DEBUG ] create a keyring file
[ceph-1][DEBUG ] find the location of an executable
[ceph-1][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-1][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-1][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4aa0152e-d817-4583-817b-81ada419624a
[ceph-1][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-89d26557-d392-4a46-8d3d-6904076cd4e0 /dev/vdb
[ceph-1][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-1][WARNIN]  stdout: Volume group "ceph-89d26557-d392-4a46-8d3d-6904076cd4e0" successfully created
[ceph-1][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 25599 -n osd-block-4aa0152e-d817-4583-817b-81ada419624a ceph-89d26557-d392-4a46-8d3d-6904076cd4e0
[ceph-1][WARNIN]  stdout: Logical volume "osd-block-4aa0152e-d817-4583-817b-81ada419624a" created.
[ceph-1][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-1][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
[ceph-1][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-89d26557-d392-4a46-8d3d-6904076cd4e0/osd-block-4aa0152e-d817-4583-817b-81ada419624a
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-1][WARNIN] Running command: /bin/ln -s /dev/ceph-89d26557-d392-4a46-8d3d-6904076cd4e0/osd-block-4aa0152e-d817-4583-817b-81ada419624a /var/lib/ceph/osd/ceph-1/block
[ceph-1][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
[ceph-1][WARNIN]  stderr: 2023-11-11 10:49:41.805 ffff89d6d1e0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-1][WARNIN] 2023-11-11 10:49:41.805 ffff89d6d1e0 -1 AuthRegistry(0xffff84081d58) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-1][WARNIN]  stderr: got monmap epoch 1
[ceph-1][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQDC605lWArnLhAAEYhGC+H+Jy224yAIJhL0gA==
[ceph-1][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-1/keyring
[ceph-1][WARNIN] added entity osd.1 auth(key=AQDC605lWArnLhAAEYhGC+H+Jy224yAIJhL0gA==)
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
[ceph-1][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 4aa0152e-d817-4583-817b-81ada419624a --setuser ceph --setgroup ceph
[ceph-1][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[ceph-1][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-89d26557-d392-4a46-8d3d-6904076cd4e0/osd-block-4aa0152e-d817-4583-817b-81ada419624a --path /var/lib/ceph/osd/ceph-1 --no-mon-config
[ceph-1][WARNIN] Running command: /bin/ln -snf /dev/ceph-89d26557-d392-4a46-8d3d-6904076cd4e0/osd-block-4aa0152e-d817-4583-817b-81ada419624a /var/lib/ceph/osd/ceph-1/block
[ceph-1][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[ceph-1][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-1-4aa0152e-d817-4583-817b-81ada419624a
[ceph-1][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-4aa0152e-d817-4583-817b-81ada419624a.service → /usr/lib/systemd/system/ceph-volume@.service.
[ceph-1][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@1
[ceph-1][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@1.service → /usr/lib/systemd/system/ceph-osd@.service.
[ceph-1][WARNIN] Running command: /bin/systemctl start ceph-osd@1
[ceph-1][WARNIN] --> ceph-volume lvm activate successful for osd ID: 1
[ceph-1][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-1][INFO  ] checking OSD status...
[ceph-1][DEBUG ] find the location of an executable
[ceph-1][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-1 is now ready for osd use.
[root@ceph-0 ceph]# ceph-deploy osd create ceph-2 --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-2 --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff9a808d20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-2
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff9a897ed0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph-2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-2
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-2][WARNIN] osd keyring does not exist yet, creating one
[ceph-2][DEBUG ] create a keyring file
[ceph-2][DEBUG ] find the location of an executable
[ceph-2][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-2][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-2][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new fe7a2030-94ac-4bbb-af27-7950509b0960
[ceph-2][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f /dev/vdb
[ceph-2][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-2][WARNIN]  stdout: Volume group "ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f" successfully created
[ceph-2][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 25599 -n osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960 ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f
[ceph-2][WARNIN]  stdout: Logical volume "osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960" created.
[ceph-2][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-2][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
[ceph-2][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f/osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-2][WARNIN] Running command: /bin/ln -s /dev/ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f/osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960 /var/lib/ceph/osd/ceph-2/block
[ceph-2][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
[ceph-2][WARNIN]  stderr: 2023-11-11 10:50:01.837 ffff947321e0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-2][WARNIN] 2023-11-11 10:50:01.837 ffff947321e0 -1 AuthRegistry(0xffff8c081d58) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-2][WARNIN]  stderr: got monmap epoch 1
[ceph-2][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQDW605lUIA0MhAAqOoCGrnDsVpfoIIKVtCXHg==
[ceph-2][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-2/keyring
[ceph-2][WARNIN] added entity osd.2 auth(key=AQDW605lUIA0MhAAqOoCGrnDsVpfoIIKVtCXHg==)
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
[ceph-2][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid fe7a2030-94ac-4bbb-af27-7950509b0960 --setuser ceph --setgroup ceph
[ceph-2][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[ceph-2][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f/osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
[ceph-2][WARNIN] Running command: /bin/ln -snf /dev/ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f/osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960 /var/lib/ceph/osd/ceph-2/block
[ceph-2][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[ceph-2][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-2-fe7a2030-94ac-4bbb-af27-7950509b0960
[ceph-2][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-fe7a2030-94ac-4bbb-af27-7950509b0960.service → /usr/lib/systemd/system/ceph-volume@.service.
[ceph-2][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@2
[ceph-2][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@2.service → /usr/lib/systemd/system/ceph-osd@.service.
[ceph-2][WARNIN] Running command: /bin/systemctl start ceph-osd@2
[ceph-2][WARNIN] --> ceph-volume lvm activate successful for osd ID: 2
[ceph-2][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-2][INFO  ] checking OSD status...
[ceph-2][DEBUG ] find the location of an executable
[ceph-2][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-2 is now ready for osd use.
[root@ceph-0 ceph]# ceph-deploy osd create ceph-3 --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-3 --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff7f600d20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-3
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff7f68fed0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph-3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-3
[ceph-3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-3][WARNIN] osd keyring does not exist yet, creating one
[ceph-3][DEBUG ] create a keyring file
[ceph-3][DEBUG ] find the location of an executable
[ceph-3][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-3][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-3][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 223dea89-7b5f-4584-b294-bbc0457cd250
[ceph-3][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-a75dd665-280f-4901-90db-d72aea971fd7 /dev/vdb
[ceph-3][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-3][WARNIN]  stdout: Volume group "ceph-a75dd665-280f-4901-90db-d72aea971fd7" successfully created
[ceph-3][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 25599 -n osd-block-223dea89-7b5f-4584-b294-bbc0457cd250 ceph-a75dd665-280f-4901-90db-d72aea971fd7
[ceph-3][WARNIN]  stdout: Logical volume "osd-block-223dea89-7b5f-4584-b294-bbc0457cd250" created.
[ceph-3][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-3][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
[ceph-3][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-a75dd665-280f-4901-90db-d72aea971fd7/osd-block-223dea89-7b5f-4584-b294-bbc0457cd250
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-3][WARNIN] Running command: /bin/ln -s /dev/ceph-a75dd665-280f-4901-90db-d72aea971fd7/osd-block-223dea89-7b5f-4584-b294-bbc0457cd250 /var/lib/ceph/osd/ceph-3/block
[ceph-3][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
[ceph-3][WARNIN]  stderr: 2023-11-11 10:50:22.197 ffffa80151e0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-3][WARNIN] 2023-11-11 10:50:22.197 ffffa80151e0 -1 AuthRegistry(0xffffa0081d58) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-3][WARNIN]  stderr: got monmap epoch 1
[ceph-3][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-3/keyring --create-keyring --name osd.3 --add-key AQDr605lrGtEDRAAvHq/3Wbxx0jH8NgtcKN/aA==
[ceph-3][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-3/keyring
[ceph-3][WARNIN] added entity osd.3 auth(key=AQDr605lrGtEDRAAvHq/3Wbxx0jH8NgtcKN/aA==)
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
[ceph-3][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid 223dea89-7b5f-4584-b294-bbc0457cd250 --setuser ceph --setgroup ceph
[ceph-3][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
[ceph-3][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-a75dd665-280f-4901-90db-d72aea971fd7/osd-block-223dea89-7b5f-4584-b294-bbc0457cd250 --path /var/lib/ceph/osd/ceph-3 --no-mon-config
[ceph-3][WARNIN] Running command: /bin/ln -snf /dev/ceph-a75dd665-280f-4901-90db-d72aea971fd7/osd-block-223dea89-7b5f-4584-b294-bbc0457cd250 /var/lib/ceph/osd/ceph-3/block
[ceph-3][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
[ceph-3][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-3-223dea89-7b5f-4584-b294-bbc0457cd250
[ceph-3][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-3-223dea89-7b5f-4584-b294-bbc0457cd250.service → /usr/lib/systemd/system/ceph-volume@.service.
[ceph-3][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@3
[ceph-3][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service → /usr/lib/systemd/system/ceph-osd@.service.
[ceph-3][WARNIN] Running command: /bin/systemctl start ceph-osd@3
[ceph-3][WARNIN] --> ceph-volume lvm activate successful for osd ID: 3
[ceph-3][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-3][INFO  ] checking OSD status...
[ceph-3][DEBUG ] find the location of an executable
[ceph-3][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-3 is now ready for osd use.

At this point `ceph -s` still shows the same status as before; the OSD storage capacity has not appeared yet.
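As the log above shows, ceph-deploy finishes each host by running `ceph --cluster=ceph osd stat --format=json`. Before moving on, it can be handy to check that output programmatically to confirm all four OSDs are up. A minimal sketch — the JSON key names (`num_osds`, `num_up_osds`) and the optional `osdmap` nesting are assumptions based on Nautilus-era output and should be verified against your release; the sample string is hypothetical:

```python
import json

def osds_all_up(stat_json: str) -> bool:
    """Return True if every OSD reported by `ceph osd stat --format=json`
    is up. The keys may sit at the top level or under an "osdmap" wrapper
    depending on the Ceph release, so both layouts are tolerated."""
    data = json.loads(stat_json)
    osdmap = data.get("osdmap", data)  # unwrap if nested, else use as-is
    return osdmap.get("num_osds", 0) > 0 and \
           osdmap.get("num_osds") == osdmap.get("num_up_osds")

# Hypothetical sample matching the 4-node cluster built above.
sample = '{"osdmap": {"epoch": 17, "num_osds": 4, "num_up_osds": 4, "num_in_osds": 4}}'
print(osds_all_up(sample))  # True
```

On a live node this would be fed from `subprocess.run(["ceph", "osd", "stat", "--format=json"], ...)`, or you can simply eyeball `ceph osd tree` instead.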

Deploy the mgr daemons

 ceph-deploy mgr create ceph-0 ceph-1 ceph-2 ceph-3

Log output:

[root@ceph-0 ceph]# ceph-deploy mgr create ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph-0', 'ceph-0'), ('ceph-1', 'ceph-1'), ('ceph-2', 'ceph-2'), ('ceph-3', 'ceph-3')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff94d07730>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0xffff94e71dd0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-0:ceph-0 ceph-1:ceph-1 ceph-2:ceph-2 ceph-3:ceph-3
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-0
[ceph-0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-0][WARNIN] mgr keyring does not exist yet, creating one
[ceph-0][DEBUG ] create a keyring file
[ceph-0][DEBUG ] create path recursively if it doesn't exist
[ceph-0][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-0 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-0/keyring
[ceph-0][INFO  ] Running command: systemctl enable ceph-mgr@ceph-0
[ceph-0][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-0.service → /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-0][INFO  ] Running command: systemctl start ceph-mgr@ceph-0
[ceph-0][INFO  ] Running command: systemctl enable ceph.target
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-1
[ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-1][WARNIN] mgr keyring does not exist yet, creating one
[ceph-1][DEBUG ] create a keyring file
[ceph-1][DEBUG ] create path recursively if it doesn't exist
[ceph-1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-1/keyring
[ceph-1][INFO  ] Running command: systemctl enable ceph-mgr@ceph-1
[ceph-1][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-1.service → /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-1][INFO  ] Running command: systemctl start ceph-mgr@ceph-1
[ceph-1][INFO  ] Running command: systemctl enable ceph.target
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-2
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-2][WARNIN] mgr keyring does not exist yet, creating one
[ceph-2][DEBUG ] create a keyring file
[ceph-2][DEBUG ] create path recursively if it doesn't exist
[ceph-2][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-2 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-2/keyring
[ceph-2][INFO  ] Running command: systemctl enable ceph-mgr@ceph-2
[ceph-2][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-2.service → /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-2][INFO  ] Running command: systemctl start ceph-mgr@ceph-2
[ceph-2][INFO  ] Running command: systemctl enable ceph.target
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-3
[ceph-3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-3][WARNIN] mgr keyring does not exist yet, creating one
[ceph-3][DEBUG ] create a keyring file
[ceph-3][DEBUG ] create path recursively if it doesn't exist
[ceph-3][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-3 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-3/keyring
[ceph-3][INFO  ] Running command: systemctl enable ceph-mgr@ceph-3
[ceph-3][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-3.service → /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-3][INFO  ] Running command: systemctl start ceph-mgr@ceph-3
[ceph-3][INFO  ] Running command: systemctl enable ceph.target

Run `ceph -s` to check the mgr status.

Note: it took about 15 minutes after OSD creation before `ceph -s` showed the cluster's data capacity.

Verifying Ceph block storage (RBD)

Create a storage pool. The two trailing numbers are pg_num and pgp_num, the placement-group counts for the pool; pgp_num should normally equal pg_num, and a power of two is recommended.

ceph osd pool create vdbench 250 250
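The PG count can be sized from the usual guideline in the Ceph docs: roughly 100 PGs per OSD, divided by the replica count, rounded up to a power of two. A minimal sketch (the 4-OSD and 3-replica figures are assumed values for this cluster):

```shell
# Rough pg_num sizing: (OSDs * 100) / replicas, rounded up to a power of two.
# 4 OSDs and 3 replicas are assumed values for this example cluster.
osds=4
replicas=3
raw=$(( osds * 100 / replicas ))   # 133
pg=1
while [ "$pg" -lt "$raw" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"                         # 256
```

With 4 OSDs and 3-way replication this yields 256, in the same range as the 250 used above; recent Ceph releases expect pg_num to be a power of two.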

Tag the pool for use by RBD (block storage):

ceph osd pool application enable vdbench rbd

Create a 20 GB image (compression not configured):

rbd create image1 --size 20G --pool vdbench --image-format 2 --image-feature layering

Map the image to a Linux block device:

rbd map vdbench/image1

As the log output shows, the device file /dev/rbd0 has been created.
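To actually use the mapped device, format and mount it. A hedged sketch follows: the xfs filesystem and /mnt/rbd mount point are arbitrary example choices, and the guard skips the destructive steps when no /dev/rbd0 exists.

```shell
# Format and mount the mapped RBD device. /mnt/rbd and xfs are arbitrary
# example choices; the guard skips the destructive steps when /dev/rbd0
# is absent (e.g. on a machine where the image was never mapped).
dev=/dev/rbd0
mnt=/mnt/rbd
if [ -b "$dev" ]; then
  mkfs.xfs "$dev"          # destroys any existing data on the image
  mkdir -p "$mnt"
  mount "$dev" "$mnt"
  df -h "$mnt"             # should show ~20G of capacity
else
  echo "no $dev present; map the image first with: rbd map vdbench/image1"
fi
```

On a node with the image mapped, `df -h` should report roughly 20G for the new mount.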

References:

ceph-deploy – Deploying Ceph with minimal infrastructure — ceph-deploy 2.1.0 documentation

ceph-deploy: deploying a specified Ceph version ("mons are allowing insecure global_id reclaim") – ggrong0213's blog, CSDN

Ceph usage: enabling the dashboard and Prometheus monitoring – cyh00001 – cnblogs.com

http://www.hkea.cn/news/684030/
