菜余 2025-02-27 10:39

Custom Kubernetes scheduler: test pod stays Pending

Reference: https://github.com/FLY-Open-K8s/sample-scheduler-framework.git
I have already updated the sample plugin code to a newer Kubernetes version.

Environment: a single-node Kubernetes cluster set up on an Ubuntu server.

The sample-scheduler.yaml manifest I apply (the scheduler pod itself is Running):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sample-scheduler-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - events
    verbs:
      - create
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - delete
      - get
      - list
      - watch
      - update
  - apiGroups:
      - ""
    resources:
      - bindings
      - pods/binding
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - pods/status
    verbs:
      - patch
      - update
  - apiGroups:
      - ""
    resources:
      - replicationcontrollers
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
      - extensions
    resources:
      - replicasets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - statefulsets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - policy
    resources:
      - poddisruptionbudgets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - persistentvolumeclaims
      - persistentvolumes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "storage.k8s.io"
    resources:
      - storageclasses
      - csinodes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "coordination.k8s.io"
    resources:
      - leases
    verbs:
      - create
      - get
      - list
      - update
  - apiGroups:
      - "events.k8s.io"
    resources:
      - events
    verbs:
      - create
      - patch
      - update
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sample-scheduler-sa
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sample-scheduler-clusterrolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sample-scheduler-clusterrole
subjects:
  - kind: ServiceAccount
    name: sample-scheduler-sa
    namespace: kube-system

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: scheduler-config
  namespace: kube-system
data:
  scheduler-config.yaml: |
    apiVersion: kubescheduler.config.k8s.io/v1
    kind: KubeSchedulerConfiguration
    leaderElection:
      leaderElect: true
    profiles:
      - schedulerName: sample-scheduler
        plugins:
          preFilter:
            enabled:
              - name: "sample-plugin"
          filter:
            enabled:
              - name: "sample-plugin"
          preBind:
            enabled:
              - name: "sample-plugin"
        pluginConfig:
          - name: "sample-plugin"
            args:
              favorite_color: "#326CE5"
              favorite_number: 7
              thanks_to: "thocqkin"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-scheduler
  namespace: kube-system
  labels:
    component: sample-scheduler
spec:
  replicas: 1
  selector:
    matchLabels:
      component: sample-scheduler
  template:
    metadata:
      labels:
        component: sample-scheduler
    spec:
      serviceAccount: sample-scheduler-sa
      priorityClassName: system-cluster-critical
      volumes:
        - name: scheduler-config
          configMap:
            name: scheduler-config
      containers:
        - name: scheduler-ctrl
          image: myscheduler1:v1.0
          imagePullPolicy: IfNotPresent
          args:
            - /bin/my-scheduler
            - --config=/etc/kubernetes/scheduler-config.yaml
            - --v=3
          resources:
            requests:
              cpu: "50m"
          volumeMounts:
            - name: scheduler-config
              mountPath: /etc/kubernetes

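For reference, the my-scheduler binary in the myscheduler1:v1.0 image registers the plugin with the scheduler framework roughly like the sketch below. This is only a minimal sketch based on the referenced repository: the plugins import path and the plugins.Name / plugins.New identifiers are placeholders for the actual package layout. The important detail is that the name passed to app.WithPlugin must match the "sample-plugin" name referenced in the KubeSchedulerConfiguration above.

package main

import (
    "os"

    "k8s.io/kubernetes/cmd/kube-scheduler/app"

    // Placeholder import path for the plugin package compiled into the image.
    "github.com/FLY-Open-K8s/sample-scheduler-framework/pkg/plugins"
)

func main() {
    // Register the out-of-tree plugin; plugins.Name must equal "sample-plugin"
    // so the profile in scheduler-config.yaml can enable it.
    command := app.NewSchedulerCommand(
        app.WithPlugin(plugins.Name, plugins.New),
    )
    if err := command.Execute(); err != nil {
        os.Exit(1)
    }
}
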
The test pod's YAML (stays Pending):


apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-scheduler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-scheduler
  template:
    metadata:
      labels:
        app: test-scheduler
    spec:
      # Explicitly set the schedulerName field to our custom scheduler, sample-scheduler
      schedulerName: sample-scheduler
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80

3 answers

  • 阿里嘎多学长 2025-02-27 10:40

    Custom Kubernetes scheduler: test pod stays Pending

    You are using a custom scheduler, but the test pod remains Pending. This is usually caused by a problem in the scheduler's configuration or implementation.

    First, check that the custom scheduler's configuration file is correct and that the scheduler can read and parse pod information properly.

    Second, check that the custom scheduler's implementation is correct and that it actually binds pods to nodes.

    If you have checked both and the problem persists, try the following steps:

    1. Run kubectl describe pod to inspect the pod's status and Events for error messages.
    2. Run kubectl get pod -o yaml to inspect the pod's full YAML for anything unexpected.
    3. Run kubectl logs to check the custom scheduler's log output for errors.
    4. Run kubectl get cs to check the status of the cluster components.

    If you still cannot solve the problem, please post more logs and configuration files so I can help you further.

    Code example

    Below is a simple example of a standalone custom scheduler that binds pods to nodes:

    package main

    import (
        "context"
        "log"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    // MyScheduler binds pending pods to a node chosen by custom logic.
    type MyScheduler struct {
        client kubernetes.Interface
    }

    // Schedule picks a node for the pod and binds it via the pods/binding subresource.
    func (s *MyScheduler) Schedule(ctx context.Context, pod *v1.Pod) error {
        // Your custom scheduling logic, e.g. choose a node by the pod's name or labels.
        nodeName := "node-1"
        binding := &v1.Binding{
            ObjectMeta: metav1.ObjectMeta{
                Name:      pod.Name,
                Namespace: pod.Namespace,
            },
            Target: v1.ObjectReference{
                Kind: "Node",
                Name: nodeName,
            },
        }
        return s.client.CoreV1().Pods(pod.Namespace).Bind(ctx, binding, metav1.CreateOptions{})
    }

    func main() {
        // Build an in-cluster client.
        config, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        scheduler := &MyScheduler{client: client}

        // Watch pods and try to bind the ones that have not been scheduled yet.
        factory := informers.NewSharedInformerFactory(client, 0)
        podInformer := factory.Core().V1().Pods().Informer()
        podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                pod := obj.(*v1.Pod)
                // Skip pods that already have a node; in practice you would also
                // check pod.Spec.SchedulerName so that only pods requesting this
                // scheduler are handled.
                if pod.Spec.NodeName != "" {
                    return
                }
                if err := scheduler.Schedule(context.Background(), pod); err != nil {
                    log.Printf("failed to bind pod %s/%s: %v", pod.Namespace, pod.Name, err)
                }
            },
        })

        // Start the informers and block until the process is stopped.
        stopCh := make(chan struct{})
        factory.Start(stopCh)
        factory.WaitForCacheSync(stopCh)
        <-stopCh
    }
    

    This example creates a simple standalone scheduler that watches for pods and binds them to a node.
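
    Note that this standalone binder bypasses the scheduling framework entirely, which is different from the framework-based sample-scheduler in your deployment. With your setup, the usual causes of a permanently Pending pod include a plugin name registered in the binary that does not match "sample-plugin" in the ConfigMap, a KubeSchedulerConfiguration apiVersion that the scheduler version you compiled against does not support, or the scheduler never acquiring its leader-election lease, so the sample-scheduler pod's logs (it runs with --v=3) are the fastest way to narrow this down.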
