Problem description
I set up a Ceph cluster and want to use it to provide storage for Kubernetes. I created a StorageClass backed by the Ceph cluster, and then created a PVC.
Both the StorageClass and the PVC were created successfully, but the PVC stays in the Pending state and can never be bound by a Pod.
root@master:~# kubectl get storageclasses.storage.k8s.io
NAME           PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-storage   ceph.com/cephfs   Delete          Immediate           false                  20m
root@master:~# kubectl describe pvc myclaim |tail
               volume.kubernetes.io/storage-provisioner: ceph.com/cephfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       nginx-deployment-69d9bb7478-nzspt
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  ExternalProvisioning  75s (x62 over 16m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "ceph.com/cephfs" or manually created by system administrator
This is the YAML file I used to create the StorageClass (and its Secret):
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: default
data:
  keyring: |-
    QVFCTDlieGs1cjFZQ2hBQTE3a2lKdXFveUFmR0RIWU1xM0srN2c9PQ==
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage
provisioner: ceph.com/cephfs
parameters:
  monitors: monitor:6789
  pool: kubernetes
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  userId: admin
  userSecretName: ceph-secret
  fsName: ext4
  readOnly: "false"
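For reference, the keyring value in the Secret is just the base64 encoding of the Ceph client key, which I can decode back out to sanity-check it (a sketch assuming GNU coreutils `base64`):

```shell
# Decode the Secret's "keyring" field; a well-formed Ceph key is itself a
# base64-looking string starting with "AQ" and ending in "==".
printf '%s' 'QVFCTDlieGs1cjFZQ2hBQTE3a2lKdXFveUFmR0RIWU1xM0srN2c9PQ==' | base64 -d
# prints: AQBL9bxk5r1YChAA17kiJuqoyAfGDHYMq3K+7g==
```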
This is the YAML file for the PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ceph-storage
  resources:
    requests:
      storage: 10Mi
This is the status of the Ceph cluster:
root@monitor:~# ceph -s
  cluster:
    id:     49336b62-5502-4af7-97f5-67a289e244c7
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
            Module 'restful' has failed dependency: PyO3 modules may only be initialized once per interpreter process
            1 monitors have not enabled msgr2

  services:
    mon: 1 daemons, quorum monitor (age 45m)
    mgr: monitor(active, since 40m)
    mds: 1/1 daemons up
    osd: 2 osds: 2 up (since 45m), 2 in (since 2d)

  data:
    volumes: 1/1 healthy
    pools:   3 pools, 49 pgs
    objects: 26 objects, 8.8 KiB
    usage:   45 MiB used, 9.9 GiB / 10 GiB avail
    pgs:     49 active+clean
Then, starting from the official Kubernetes CephFS example, I slightly modified its YAML file as follows:
apiVersion: v1
kind: Pod
metadata:
  name: cephfs2
spec:
  containers:
    - name: cephfs-rw
      image: docker.io/library/debian:unstable-slim
      command: ["tail"]
      args: ["-f", "/etc/hosts"]
      volumeMounts:
        - mountPath: "/mnt/cephfs"
          name: cephfs
  volumes:
    - name: cephfs
      cephfs:
        monitors:
          - monitor:6789
        user: admin
        secretRef:
          name: ceph-secret
        readOnly: true
This Pod refers to the Ceph storage directly as an in-line volume; no PVC or PV is created this way, yet it mounts the storage successfully.