The kubelet I compiled from source prints a lot of errors at startup when launched without any flags, but once I add the usual kubelet flags it starts without errors. So my question is: which kubelet flags are actually required?
The error output is below; the cgroup driver is systemd.
Any pointers appreciated. It looks like a cAdvisor / cgroup issue to me.
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.244049 337893 server.go:416] Version: v1.20.16-rc.0.3+4a89df5617b8e1-dirty
Mar 17 08:29:14 1.novalocal kubelet[337893]: W0317 08:29:14.245042 337893 server.go:558] standalone mode, no API client
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.245134 337893 server.go:611] CgroupRoot, CgroupPerQos, CgroupDriver
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.245163 337893 server.go:612]
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.245174 337893 server.go:613] true
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.245189 337893 server.go:614] systemd
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.245199 337893 server.go:615] /systemd/system.slice
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.245212 337893 server.go:618] /systemd/system.slice
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.245467 337893 container_manager_linux.go:916] map[blkio:/systemd/system.slice cpu:/systemd/system.slice cpuacct:/systemd/system.slice cpuset:/systemd/system.slice devices:/system.slice/docker.service freezer:/systemd/system.slice hugetlb:/systemd/system.slice memory:/systemd/system.slice name=systemd:/systemd/system.slice net_cls:/systemd/system.slice net_prio:/systemd/system.slice pids:/systemd/system.slice]
Mar 17 08:29:14 1.novalocal kubelet[337893]: W0317 08:29:14.450940 337893 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Mar 17 08:29:14 1.novalocal kubelet[337893]: W0317 08:29:14.451486 337893 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Mar 17 08:29:14 1.novalocal kubelet[337893]: W0317 08:29:14.515956 337893 server.go:473] No api server defined - no events will be sent to API server.
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.516083 337893 server.go:651] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.516985 337893 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.517060 337893 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.517470 337893 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.517511 337893 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.517525 337893 container_manager_linux.go:315] Creating device plugin manager: true
Mar 17 08:29:14 1.novalocal kubelet[337893]: W0317 08:29:14.518193 337893 kubelet.go:297] Using dockershim is deprecated, please consider using a full-fledged CRI implementation
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.518260 337893 client.go:77] Connecting to docker on unix:///var/run/docker.sock
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.518302 337893 client.go:94] Start docker client with request timeout=2m0s
Mar 17 08:29:14 1.novalocal kubelet[337893]: W0317 08:29:14.544370 337893 docker_service.go:568] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.544510 337893 docker_service.go:241] Hairpin mode set to "hairpin-veth"
Mar 17 08:29:14 1.novalocal kubelet[337893]: W0317 08:29:14.544898 337893 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.562233 337893 docker_service.go:256] Docker cri networking managed by kubernetes.io/no-op
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.587777 337893 docker_service.go:262] "Docker Info" dockerInfo=&{ID:QFOI:2N27:ICJF:MB4O:NYOI:AFIO:37X7:XRYN:24SQ:EF2C:NBYC:G2VJ Containers:14 ContainersRunning:0 ContainersPaused:0 ContainersStopped:14 Images:30 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:34 SystemTime:2022-03-17T08:29:14.564260435Z LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:4.19.190-2.lns8.loongarch64 OperatingSystem:Loongnix-Server Linux 8 OSType:linux Architecture:loongarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0006204d0 NCPU:4 MemTotal:8380203008 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:1.novalocal Labels:[] ExperimentalBuild:false ServerVersion:20.10.3 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[]} io.containerd.runtime.v1.linux:{Path:runc Args:[]} runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:[]}
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.587973 337893 docker_service.go:277] Setting cgroupDriver to systemd
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.652417 337893 container_manager_linux.go:916] map[blkio:/systemd/system.slice cpu:/systemd/system.slice cpuacct:/systemd/system.slice cpuset:/systemd/system.slice devices:/system.slice/docker.service freezer:/systemd/system.slice hugetlb:/systemd/system.slice memory:/systemd/system.slice name=systemd:/systemd/system.slice net_cls:/systemd/system.slice net_prio:/systemd/system.slice pids:/systemd/system.slice]
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.654469 337893 remote_runtime.go:62] parsed scheme: ""
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.654577 337893 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.654709 337893 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.654773 337893 clientconn.go:948] ClientConn switching balancer to "pick_first"
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.654942 337893 remote_image.go:50] parsed scheme: ""
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.654971 337893 remote_image.go:50] scheme "" not registered, fallback to default scheme
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.654996 337893 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.655012 337893 clientconn.go:948] ClientConn switching balancer to "pick_first"
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.655209 337893 kubelet.go:400] Kubelet is running in standalone mode, will skip API server sync
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.656155 337893 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
Mar 17 08:29:14 1.novalocal kubelet[337893]: I0317 08:29:14.709771 337893 kuberuntime_manager.go:216] Container runtime docker initialized, version: 20.10.3, apiVersion: 1.41.0
Mar 17 08:29:19 1.novalocal kubelet[337893]: E0317 08:29:19.310735 337893 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Mar 17 08:29:19 1.novalocal kubelet[337893]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Mar 17 08:29:19 1.novalocal kubelet[337893]: W0317 08:29:19.311482 337893 volume_host.go:75] kubeClient is nil. Skip initialization of CSIDriverLister
Mar 17 08:29:19 1.novalocal kubelet[337893]: W0317 08:29:19.312253 337893 csi_plugin.go:191] kubernetes.io/csi: kubeclient not set, assuming standalone kubelet
Mar 17 08:29:19 1.novalocal kubelet[337893]: W0317 08:29:19.312310 337893 csi_plugin.go:265] Skipping CSINode initialization, kubelet running in standalone mode
Mar 17 08:29:19 1.novalocal kubelet[337893]: E0317 08:29:19.313015 337893 kubelet.go:1291] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Mar 17 08:29:19 1.novalocal kubelet[337893]: W0317 08:29:19.313179 337893 kubelet.go:1396] No api server defined - no node status update will be sent.
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.313224 337893 server.go:148] Starting to listen on 0.0.0.0:10250
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.315755 337893 server.go:414] Adding debug handlers to kubelet server.
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.313158 337893 server.go:1182] Started kubelet
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.318260 337893 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.318522 337893 volume_manager.go:271] Starting Kubelet Volume Manager
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.318686 337893 desired_state_of_world_populator.go:142] Desired state populator starts to run
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.380072 337893 client.go:86] parsed scheme: "unix"
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.380132 337893 client.go:86] scheme "unix" not registered, fallback to default scheme
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.380311 337893 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.380341 337893 clientconn.go:948] ClientConn switching balancer to "pick_first"
Mar 17 08:29:19 1.novalocal kubelet[337893]: W0317 08:29:19.433746 337893 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/\""
Mar 17 08:29:19 1.novalocal kubelet[337893]: W0317 08:29:19.454776 337893 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods.slice/kubepods-besteffort.slice\""
Mar 17 08:29:19 1.novalocal kubelet[337893]: W0317 08:29:19.455856 337893 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods.slice/kubepods-burstable.slice\""
Mar 17 08:29:19 1.novalocal kubelet[337893]: W0317 08:29:19.457519 337893 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/systemd/system.slice\""
Mar 17 08:29:19 1.novalocal kubelet[337893]: W0317 08:29:19.458921 337893 manager.go:949] Error getting perf_event cgroup path: "could not find path for resource \"perf_event\" for container \"/kubepods.slice\""
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.466941 337893 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.467174 337893 status_manager.go:154] Kubernetes client is nil, not starting status manager.
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.467226 337893 kubelet.go:1828] Starting kubelet main sync loop.
Mar 17 08:29:19 1.novalocal kubelet[337893]: E0317 08:29:19.467428 337893 kubelet.go:1852] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.518963 337893 reconciler.go:157] Reconciler: start to sync state
Mar 17 08:29:19 1.novalocal kubelet[337893]: E0317 08:29:19.567707 337893 kubelet.go:1852] skipping pod synchronization - container runtime status check may not have completed yet
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.685938 337893 cpu_manager.go:193] [cpumanager] starting with none policy
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.685997 337893 cpu_manager.go:194] [cpumanager] reconciling every 10s
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.686069 337893 state_mem.go:36] [cpumanager] initializing new in-memory state store
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.691345 337893 policy_none.go:43] [cpumanager] none policy: Start
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.693713 337893 container_manager_linux.go:916] map[blkio:/system.slice/kubelet.service cpu:/system.slice/kubelet.service cpuacct:/system.slice/kubelet.service cpuset:/ devices:/system.slice/kubelet.service freezer:/ hugetlb:/ memory:/system.slice/kubelet.service name=systemd:/system.slice/kubelet.service net_cls:/ net_prio:/ pids:/system.slice/kubelet.service]
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.694219 337893 container_manager_linux.go:916] map[blkio:/systemd/system.slice cpu:/systemd/system.slice cpuacct:/systemd/system.slice cpuset:/systemd/system.slice devices:/system.slice/docker.service freezer:/systemd/system.slice hugetlb:/systemd/system.slice memory:/systemd/system.slice name=systemd:/systemd/system.slice net_cls:/systemd/system.slice net_prio:/systemd/system.slice pids:/systemd/system.slice]
Mar 17 08:29:19 1.novalocal kubelet[337893]: W0317 08:29:19.694790 337893 manager.go:595] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Mar 17 08:29:19 1.novalocal kubelet[337893]: I0317 08:29:19.695740 337893 plugin_manager.go:114] Starting Kubelet Plugin Manager
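For what it's worth, most of the noise above comes from the kubelet running in standalone mode ("standalone mode, no API client"), not from a real crash: without a kubeconfig it skips the API server, CSI, and node status sync, and warns about each one. A minimal sketch of a v1.20-era invocation that matches the systemd driver shown in this log (the file paths here are illustrative assumptions, not values from the log):

```shell
# Sketch of a minimal kubelet command line for a v1.20 node using
# dockershim and the systemd cgroup driver. Paths are placeholders.
#
# --kubeconfig:     API server credentials; omitting it is what triggers
#                   "standalone mode, no API client" in the log above.
# --config:         KubeletConfiguration file (can set cgroupDriver: systemd
#                   there instead of on the command line).
# --cgroup-driver:  must match Docker's reported "CgroupDriver:systemd".
# --network-plugin / --cni-conf-dir: address the
#                   "no networks found in /etc/cni/net.d" warning.
kubelet \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --config=/var/lib/kubelet/config.yaml \
  --cgroup-driver=systemd \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d
```

Strictly speaking none of these flags is mandatory for the binary to start, which is why your run "works" either way; what they change is whether the kubelet joins a cluster and whether its cgroup driver agrees with Docker's.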