weixin_39531761
2020-12-02 22:56

unable to set config file path with containerd

Description of problem

I'm following containerd docs, and managed to make kubelet+containerd+io.containerd.kata.v2 work.

However, I have to use /etc/kata-containers/configuration.toml and cannot set the ConfigPath as described in the docs (or config_path, as seen in the code).

Any attempt to remove /etc/kata-containers/configuration.toml and set ConfigPath (or config_path) results in the following error:


Cannot find usable config file (config file "/etc/kata-containers/configuration.toml" unresolvable: file /etc/kata-containers/configuration.toml does not exist, config file "/usr/share/defaults/kata-containers/configuration.toml" unresolvable: file /usr/share/defaults/kata-containers/configuration.toml does not exist): not found
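
For reference, this is roughly the stanza I'm trying, following the docs (a sketch; the handler name and path are from my setup):

```toml
[plugins.cri.containerd.runtimes.kata-qemu]
  runtime_type = "io.containerd.kata.v2"
  [plugins.cri.containerd.runtimes.kata-qemu.options]
    ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
```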

I'm also not able to make io.containerd.katafc.v2 work. If I create a symlink at /opt/kata/bin/containerd-shim-katafc-v2 pointing to /opt/kata/bin/containerd-shim-kata-v2, and another at /etc/kata-containers/configuration.toml pointing to /opt/kata/share/defaults/kata-containers/configuration-fc.toml, and then run `sudo ctr run --runtime io.containerd.katafc.v2 -t --rm docker.io/library/busybox:latest hello sh`, I get the following error:


ctr: rootfs (/run/kata-containers/shared/containers/hello/rootfs) does not exist: unknown

To be honest, the docs are a little confusing, and the linked containerd config docs reveal a new version of the config file format that is very different from what I'm seeing in the Kata docs.
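
As far as I can tell, the difference is the containerd config schema version. A rough sketch of the same runtime registration in both formats (handler name from my setup; the second form is my understanding of the newer docs, not something I have verified):

```toml
# containerd 1.2.x style (what my /etc/containerd/config.toml below uses):
[plugins.cri.containerd.runtimes.kata-qemu]
  runtime_type = "io.containerd.kata.v2"

# containerd 1.3+ "version 2" style (what the linked containerd docs show):
# version = 2
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu]
#   runtime_type = "io.containerd.kata.v2"
```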

--

Show kata-collect-data.sh details

# Meta details

Running `kata-collect-data.sh` version `1.10.0 (commit ebe9677f23b574c5defacf57456d221d8ce901f2)` at `2020-03-26.10:56:18.682366435+0000`.

---

Runtime is `/opt/kata/bin/kata-runtime`.

# `kata-env`

Output of "`/opt/kata/bin/kata-runtime kata-env`":

```toml
[Meta]
  Version = "1.0.23"

[Runtime]
  Debug = false
  Trace = false
  DisableGuestSeccomp = true
  DisableNewNetNs = false
  SandboxCgroupOnly = false
  Path = "/opt/kata/bin/kata-runtime"
  [Runtime.Version]
    Semver = "1.10.0"
    Commit = "ebe9677f23b574c5defacf57456d221d8ce901f2"
    OCI = "1.0.1-dev"
  [Runtime.Config]
    Path = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"

[Hypervisor]
  MachineType = "pc"
  Version = "QEMU emulator version 4.1.0 (kata-static)\nCopyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers"
  Path = "/opt/kata/bin/qemu-system-x86_64"
  BlockDeviceDriver = "virtio-scsi"
  EntropySource = "/dev/urandom"
  Msize9p = 8192
  MemorySlots = 10
  Debug = false
  UseVSock = false
  SharedFS = "virtio-9p"

[Image]
  Path = "/opt/kata/share/kata-containers/kata-containers-image_clearlinux_1.10.0_agent_a8007c2969.img"

[Kernel]
  Path = "/opt/kata/share/kata-containers/vmlinuz-4.19.86-60"
  Parameters = "systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket"

[Initrd]
  Path = ""

[Proxy]
  Type = "kataProxy"
  Version = "kata-proxy version 1.10.0-bbc3f73a3b003f57c20f4dad4bffb22580864926"
  Path = "/opt/kata/libexec/kata-containers/kata-proxy"
  Debug = false

[Shim]
  Type = "kataShim"
  Version = "kata-shim version 1.10.0-6db555216f68a14b68ce64713083c5f5b4684aea"
  Path = "/opt/kata/libexec/kata-containers/kata-shim"
  Debug = false

[Agent]
  Type = "kata"
  Debug = false
  Trace = false
  TraceMode = ""
  TraceType = ""

[Host]
  Kernel = "4.14.171-136.231.amzn2.x86_64"
  Architecture = "amd64"
  VMContainerCapable = true
  SupportVSocks = true
  [Host.Distro]
    Name = "Amazon Linux"
    Version = "2"
  [Host.CPU]
    Vendor = "GenuineIntel"
    Model = "Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz"

[Netmon]
  Version = "kata-netmon version 1.10.0"
  Path = "/opt/kata/libexec/kata-containers/kata-netmon"
  Debug = false
  Enable = false
```

---

# Runtime config files

## Runtime default config files

/etc/kata-containers/configuration.toml
/opt/kata/share/defaults/kata-containers/configuration.toml
## Runtime config file contents

Output of "`cat "/etc/kata-containers/configuration.toml"`":
```toml
# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-qemu.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

[hypervisor.qemu]
path = "/opt/kata/bin/qemu-system-x86_64"
kernel = "/opt/kata/share/kata-containers/vmlinuz.container"
image = "/opt/kata/share/kata-containers/kata-containers.img"
machine_type = "pc"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""

# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per SB/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 1

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0             --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending on the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# hotplug functionality. For example, `default_maxvcpus = 240` specifies that up to 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what you are doing.
default_maxvcpus = 0

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only PCI bridges are supported
# * Up to 30 devices per bridge can be hot plugged.
# * Up to 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set to 2048 MiB.
default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set to 10.
# This determines how many times memory can be hot-added to the sandbox/VM.
#memory_slots = 10

# This size in MiB will be added to the hypervisor's maximum memory.
# It is the memory address space for the NVDIMM device.
# If the block storage driver (block_device_driver) is set to "nvdimm",
# memory_offset should be set to the size of the block device.
# Default 0
#memory_offset = 0

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's 
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons. 
# This flag prevents the block device from being passed to the hypervisor, 
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Shared file system type:
#   - virtio-9p (default)
#   - virtio-fs
shared_fs = "virtio-9p"

# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/opt/kata/bin/virtiofsd"

# Default size of DAX cache in MiB
virtio_fs_cache_size = 1024

# Extra args for virtiofsd daemon
#
# Format example:
#   ["-o", "arg1=xxx,arg2", "-o", "hello world", "--arg3=yyy"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = []

# Cache mode:
#
#  - none
#    Metadata, data, and pathname lookup are not cached in guest. They are
#    always fetched from host and any changes are immediately pushed to host.
#
#  - auto
#    Metadata and pathname lookup cache expires after a configured amount of
#    time (default is 1 second). Data is cached while the file is open (close
#    to open consistency).
#
#  - always
#    Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "always"

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"

# Specifies whether cache-related options will be set on block devices.
# Default false
#block_device_cache_set = true

# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true

# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically 
# result in memory pre allocation
#enable_hugepages = true

# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
# 
# Default false
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top of a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
# 
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes 
# used for 9p packet payload.
#msize_9p = 8192

# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true

# VFIO devices are hotplugged on a bridge by default. 
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on 
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true

# If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs ring0) for network I/O performance. 
#disable_vhost_net = true

#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy.  If the host
# runs out of entropy, the VM's boot time will increase, possibly leading to
# startup timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"

# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# but this will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"

[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speeding up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true

# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"

# The number of caches of VMCache:
# unspecified or == 0   --> VMCache is disabled
# > 0                   --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before using it.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through Unix socket.  The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert the VM to gRPC format and transport it when it gets
# a request from clients.
# Factory grpccache is the VMCache client.  It will request gRPC format
# VM and convert it back to a VM.  If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0

# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"

[proxy.kata]
path = "/opt/kata/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[shim.kata]
path = "/opt/kata/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true

# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true

[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true

# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicitly with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
#   setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
#   will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
#   full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"

# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters
#  - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered as the module name and the rest as its parameters.
# Container will not be started when:
#  * A kernel module is specified and the modprobe command is not installed in the guest
#    or it fails loading the module.
#  * The module is not available in the guest or it doesn't meet the guest kernel
#    requirements, like architecture and version.
#
kernel_modules=[]


[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of additional
# networks being added to the existing network namespace after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true

# Specify the path to the netmon binary.
path = "/opt/kata/libexec/kata-containers/kata-netmon"

# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
#
#   - none
#     Used with a customized network. Only creates a tap device. No veth pair.
#
#   - tcfilter
#     Uses tc filter rules to redirect traffic from the network interface
#     provided by plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"

# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true

# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true

# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true

# if enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The sandbox cgroup is not constrained by the runtime
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=false

# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production;
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# 1. "newstore": new persist storage driver which breaks backward compatibility,
#               expected to move out of experimental in 2.0.0.
# (default: [])
experimental=[]
```

Output of "`cat "/opt/kata/share/defaults/kata-containers/configuration.toml"`":

(Contents are identical to `/etc/kata-containers/configuration.toml` shown above, so the duplicate is omitted.)
Config file `/usr/share/defaults/kata-containers/configuration.toml` not found

---

# KSM throttler

## version

Output of "` --version`":

/opt/kata/bin/kata-collect-data.sh: line 178: --version: command not found
## systemd service

# Image details
```yaml
---
osbuilder:
  url: "https://github.com/kata-containers/osbuilder"
  version: "unknown"
rootfs-creation-time: "2020-01-16T01:56:37.905020975+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
  name: "Clear"
  version: "32100"
  packages:
    default:
      - "chrony"
      - "iptables-bin"
      - "kmod-bin"
      - "libudev0-shim"
      - "systemd"
      - "util-linux-bin"
    extra:

agent:
  url: "https://github.com/kata-containers/agent"
  name: "kata-agent"
  version: "1.10.0-a8007c2969e839b584627d1a7db4cac13af908a6"
  agent-is-init-daemon: "no"
```

---

# Initrd details

No initrd

---

# Logfiles

## Runtime logs

No recent runtime problems found in system journal.

## Proxy logs

No recent proxy problems found in system journal.

## Shim logs

No recent shim problems found in system journal.

## Throttler logs

No recent throttler problems found in system journal.

---

# Container manager details

Have `docker`

## Docker

Output of "`docker version`":

Client:
 Version:           18.09.9-ce
 API version:       1.39
 Go version:        go1.10.3
 Git commit:        039a7df
 Built:             Fri Nov  1 19:26:49 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.9-ce
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       039a7df
  Built:            Fri Nov  1 19:28:24 2019
  OS/Arch:          linux/amd64
  Experimental:     false
Output of "`docker info`":

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.09.9-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.14.171-136.231.amzn2.x86_64
Operating System: Amazon Linux 2
OSType: linux
Architecture: x86_64
CPUs: 72
Total Memory: 503.8GiB
Name: ip-192-168-8-182.eu-west-1.compute.internal
ID: PWU2:K74B:UEJL:7RI2:VF47:HBEC:EZ2U:N4CW:UWNJ:5BZY:4BK3:QY72
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: true
Output of "`systemctl show docker`":

Type=notify
Restart=always
NotifyAccess=main
RestartUSec=2s
TimeoutStartUSec=0
TimeoutStopUSec=0
WatchdogUSec=0
WatchdogTimestamp=Thu 2020-03-26 10:36:32 UTC
WatchdogTimestampMonotonic=2207659086
StartLimitInterval=60000000
StartLimitBurst=3
StartLimitAction=none
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=4576
ControlPID=0
FileDescriptorStoreMax=0
StatusErrno=0
Result=success
ExecMainStartTimestamp=Thu 2020-03-26 10:36:31 UTC
ExecMainStartTimestampMonotonic=2207430526
ExecMainExitTimestampMonotonic=0
ExecMainPID=4576
ExecMainCode=0
ExecMainStatus=0
ExecStartPre={ path=/bin/mkdir ; argv[]=/bin/mkdir -p /run/docker ; ignore_errors=no ; start_time=[Thu 2020-03-26 10:36:31 UTC] ; stop_time=[Thu 2020-03-26 10:36:31 UTC] ; pid=4554 ; code=exited ; status=0 }
ExecStartPre={ path=/usr/libexec/docker/docker-setup-runtimes.sh ; argv[]=/usr/libexec/docker/docker-setup-runtimes.sh ; ignore_errors=no ; start_time=[Thu 2020-03-26 10:36:31 UTC] ; stop_time=[Thu 2020-03-26 10:36:31 UTC] ; pid=4566 ; code=exited ; status=0 }
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_ADD_RUNTIMES ; ignore_errors=no ; start_time=[Thu 2020-03-26 10:36:31 UTC] ; stop_time=[n/a] ; pid=4576 ; code=(null) ; status=0/0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/docker.service
MemoryCurrent=46870528
TasksCurrent=56
Delegate=yes
CPUAccounting=no
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615
DevicePolicy=auto
TasksAccounting=no
TasksMax=18446744073709551615
EnvironmentFile=/etc/sysconfig/docker (ignore_errors=yes)
EnvironmentFile=/etc/sysconfig/docker-storage (ignore_errors=yes)
EnvironmentFile=/run/docker/runtimes.env (ignore_errors=yes)
UMask=0022
LimitCPU=18446744073709551615
LimitFSIZE=18446744073709551615
LimitDATA=18446744073709551615
LimitSTACK=18446744073709551615
LimitCORE=18446744073709551615
LimitRSS=18446744073709551615
LimitNOFILE=18446744073709551615
LimitAS=18446744073709551615
LimitNPROC=18446744073709551615
LimitMEMLOCK=65536
LimitLOCKS=18446744073709551615
LimitSIGPENDING=2063368
LimitMSGQUEUE=819200
LimitNICE=0
LimitRTPRIO=0
LimitRTTIME=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SecureBits=0
CapabilityBoundingSet=18446744073709551615
AmbientCapabilities=0
MountFlags=0
PrivateTmp=no
PrivateNetwork=no
PrivateDevices=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=docker.service
Names=docker.service
Requires=docker.socket basic.target
Wants=system.slice network-online.target
BindsTo=containerd.service
RequiredBy=kubelet.service
WantedBy=multi-user.target
ConsistsOf=docker.socket
Conflicts=shutdown.target
Before=iptables-restore.service kubelet.service shutdown.target multi-user.target
After=firewalld.service containerd.service basic.target system.slice docker.socket systemd-journald.socket network-online.target
TriggeredBy=docker.socket
Documentation=https://docs.docker.com
Description=Docker Application Container Engine
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/usr/lib/systemd/system/docker.service
UnitFileState=enabled
UnitFilePreset=disabled
InactiveExitTimestamp=Thu 2020-03-26 10:36:31 UTC
InactiveExitTimestampMonotonic=2207425373
ActiveEnterTimestamp=Thu 2020-03-26 10:36:32 UTC
ActiveEnterTimestampMonotonic=2207659117
ActiveExitTimestamp=Thu 2020-03-26 10:36:31 UTC
ActiveExitTimestampMonotonic=2207409334
InactiveEnterTimestamp=Thu 2020-03-26 10:36:31 UTC
InactiveEnterTimestampMonotonic=2207415089
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
IgnoreOnSnapshot=no
NeedDaemonReload=no
JobTimeoutUSec=0
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Thu 2020-03-26 10:36:31 UTC
ConditionTimestampMonotonic=2207424366
AssertTimestamp=Thu 2020-03-26 10:36:31 UTC
AssertTimestampMonotonic=2207424366
Transient=no
No `kubectl`
No `crio`
Have `containerd`

## containerd

Output of "`containerd --version`":

containerd github.com/containerd/containerd 1.2.6 894b81a4b802e4eb2a91d1ce216b8817763c29fb
Output of "`systemctl show containerd`":

Type=simple
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
WatchdogUSec=0
WatchdogTimestamp=Thu 2020-03-26 10:36:31 UTC
WatchdogTimestampMonotonic=2207423222
StartLimitInterval=10000000
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=4553
ControlPID=0
FileDescriptorStoreMax=0
StatusErrno=0
Result=success
ExecMainStartTimestamp=Thu 2020-03-26 10:36:31 UTC
ExecMainStartTimestampMonotonic=2207423172
ExecMainExitTimestampMonotonic=0
ExecMainPID=4553
ExecMainCode=0
ExecMainStatus=0
ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=yes ; start_time=[Thu 2020-03-26 10:36:31 UTC] ; stop_time=[Thu 2020-03-26 10:36:31 UTC] ; pid=4551 ; code=exited ; status=0 }
ExecStart={ path=/usr/bin/containerd ; argv[]=/usr/bin/containerd ; ignore_errors=no ; start_time=[Thu 2020-03-26 10:36:31 UTC] ; stop_time=[n/a] ; pid=4553 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/containerd.service
MemoryCurrent=1120763904
TasksCurrent=190
Delegate=yes
CPUAccounting=no
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615
DevicePolicy=auto
TasksAccounting=no
TasksMax=18446744073709551615
Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/kata/bin
UMask=0022
LimitCPU=18446744073709551615
LimitFSIZE=18446744073709551615
LimitDATA=18446744073709551615
LimitSTACK=18446744073709551615
LimitCORE=18446744073709551615
LimitRSS=18446744073709551615
LimitNOFILE=18446744073709551615
LimitAS=18446744073709551615
LimitNPROC=18446744073709551615
LimitMEMLOCK=65536
LimitLOCKS=18446744073709551615
LimitSIGPENDING=2063368
LimitMSGQUEUE=819200
LimitNICE=0
LimitRTPRIO=0
LimitRTTIME=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SecureBits=0
CapabilityBoundingSet=18446744073709551615
AmbientCapabilities=0
MountFlags=0
PrivateTmp=no
PrivateNetwork=no
PrivateDevices=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=containerd.service
Names=containerd.service
Requires=basic.target
Wants=system.slice
BoundBy=docker.service
Conflicts=shutdown.target
Before=docker.service shutdown.target
After=systemd-journald.socket system.slice basic.target network.target
Documentation=https://containerd.io
Description=containerd container runtime
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/usr/lib/systemd/system/containerd.service
DropInPaths=/etc/systemd/system/containerd.service.d/11-kata.conf
UnitFileState=disabled
UnitFilePreset=disabled
InactiveExitTimestamp=Thu 2020-03-26 10:36:31 UTC
InactiveExitTimestampMonotonic=2207421210
ActiveEnterTimestamp=Thu 2020-03-26 10:36:31 UTC
ActiveEnterTimestampMonotonic=2207423264
ActiveExitTimestamp=Thu 2020-03-26 10:36:31 UTC
ActiveExitTimestampMonotonic=2207416561
InactiveEnterTimestamp=Thu 2020-03-26 10:36:31 UTC
InactiveEnterTimestampMonotonic=2207420450
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
IgnoreOnSnapshot=no
NeedDaemonReload=no
JobTimeoutUSec=0
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Thu 2020-03-26 10:36:31 UTC
ConditionTimestampMonotonic=2207420877
AssertTimestamp=Thu 2020-03-26 10:36:31 UTC
AssertTimestampMonotonic=2207420877
Transient=no
Output of "`cat /etc/containerd/config.toml`":

root = "/var/lib/containerd"
state = "/run/containerd"
oom_score = 0
[grpc]
  address = "/run/containerd/containerd.sock"
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
[debug]
  address = ""
  uid = 0
  gid = 0
  level = ""
[metrics]
  address = ""
  grpc_histogram = false
[cgroup]
  path = ""
[plugins]
  [plugins.cgroups]
    no_prometheus = false
  [plugins.cri]
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    enable_selinux = false
    sandbox_image = "k8s.gcr.io/pause:3.1"
    stats_collect_period = 10
    systemd_cgroup = true
    enable_tls_streaming = false
    max_container_log_line_size = 16384
    [plugins.cri.containerd]
      snapshotter = "overlayfs"
      no_pivot = false
      [plugins.cri.containerd.default_runtime]
        runtime_type = "io.containerd.runtime.v1.linux"
        runtime_engine = ""
        runtime_root = ""
      [plugins.cri.containerd.untrusted_workload_runtime]
        runtime_type = ""
        runtime_engine = ""
        runtime_root = ""
      [plugins.cri.containerd.runtimes.kata-qemu]
        runtime_type = "io.containerd.kata.v2"
        [plugins.cri.containerd.runtimes.kata-qemu.options]
          ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
      #[plugins.cri.containerd.runtimes.katafc]
      #  runtime_type = "io.containerd.katafc.v2"
    [plugins.cri.cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
    [plugins.cri.registry]
      [plugins.cri.registry.mirrors]
        [plugins.cri.registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
    [plugins.cri.x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""
  [plugins.diff-service]
    default = ["walking"]
  [plugins.linux]
    shim = "containerd-shim"
    runtime = "runc"
    runtime_root = ""
    no_shim = false
    shim_debug = false
  [plugins.opt]
    path = "/opt/containerd"
  [plugins.restart]
    interval = "10s"
  [plugins.scheduler]
    pause_threshold = 0.02
    deletion_threshold = 0
    mutation_threshold = 100
    schedule_delay = "0s"
    startup_delay = "100ms"
---

# Packages

No `dpkg`
Have `rpm`

Output of "`rpm -qa|egrep "(cc-oci-runtime|cc-runtime|runv|kata-proxy|kata-runtime|kata-shim|kata-ksm-throttler|kata-containers-image|linux-container|qemu-)"`":


---

This question comes from the open-source project: kata-containers/runtime


21 replies

  • weixin_39531761 · 5 months ago

    One thing I just realised is that config_path is the same as ConfigPath: ConfigPath is the name of the struct field, while config_path is the name the serialiser is meant to accept; the way it can work in Go is that both field names get recognised.
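
    So presumably either spelling should be accepted in the options table; a sketch of what I mean (untested, with the path from my own setup):

    ```toml
    [plugins.cri.containerd.runtimes.kata-qemu.options]
      # Go struct field name:
      ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
      # Serialised name that should map onto the same field:
      # config_path = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
    ```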

  • weixin_39531761 · 5 months ago

    I added a link to docs in the issue description, thanks for the reminder!

  • weixin_39531761 · 5 months ago

    I have added the shims the way kata-deploy.sh does, and now I can use both runtimes - kata-qemu and kata-fc - from my pods, which is a departure from what I had; it looks like KATA_CONF_FILE really does need to be set. I was also able to ditch /etc/kata-containers/configuration.toml, which is good.

    However, kata-fc is still not functioning and I get the rootfs error, but now I can test this with Kubernetes and don't need to use ctr on the node.

    See here: https://github.com/errordeveloper/kata-fc-eks/commit/e40291075a4a69bbc270442a1ce1fa65265795d4

  • weixin_39873177 · 5 months ago

    I spun up a minikube and ran kata-deploy on it - here is what kata-deploy appended to the end of the /etc/containerd/config.toml file:

    ```toml
    [plugins.cri.containerd.runtimes.kata]
      runtime_type = "io.containerd.kata.v2"
      [plugins.cri.containerd.runtimes.kata.options]
        ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration.toml"
    [plugins.cri.containerd.runtimes.kata-fc]
      runtime_type = "io.containerd.kata-fc.v2"
      [plugins.cri.containerd.runtimes.kata-fc.options]
        ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-fc.toml"
    [plugins.cri.containerd.runtimes.kata-qemu]
      runtime_type = "io.containerd.kata-qemu.v2"
      [plugins.cri.containerd.runtimes.kata-qemu.options]
        ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
    [plugins.cri.containerd.runtimes.kata-qemu-virtiofs]
      runtime_type = "io.containerd.kata-qemu-virtiofs.v2"
      [plugins.cri.containerd.runtimes.kata-qemu-virtiofs.options]
        ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu-virtiofs.toml"
    [plugins.cri.containerd.runtimes.kata-clh]
      runtime_type = "io.containerd.kata-clh.v2"
      [plugins.cri.containerd.runtimes.kata-clh.options]
        ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-clh.toml"
    ```

    afaik, this works (I used minikube/kata heavily a few weeks ago). Does this help at all? If not, we'll track down some of the kata shimv2 owners and ask them for their thoughts.

    And, yeah, we can always do with improving our k8s kata documentation - I believe it's better than it used to be, but it needs regular 'cleaning' as things progress.... PRs are always welcome btw ;-)

  • weixin_39531761 · 5 months ago

    afaik, this works

    did you try running a pod with runtimeClass: kata-fc?

  • weixin_39873177 · 5 months ago

    I did not - I don't think I can under minikube, as it probably doesn't have block storage backing. But maybe this is the clue - maybe kata-fc is 'different' under shimv2. Do you know - would you expect 'fc' to act the same with ConfigPath? I'm guessing 'yes'...

  • weixin_39693971 · 5 months ago

    Hi

    Actually the config_path specification is not related to the hypervisor; both qemu and fc should work the same way. The shimv2 gets the config file in one of the following four ways:

    1) the config file specified by the k8s annotation key "io.katacontainers.config_path";
    2) the config file passed via the runtime class options, just as you did with 'ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration.toml"';
    3) if you didn't pass config_path via the runtime class options, shimv2 will try to get it from the env KATA_CONF_FILE;
    4) if the env wasn't set either, it will try the default paths: /etc/kata-containers/configuration.toml, share/defaults/kata-containers/configuration.toml, etc.

    Config override ordering is as below (high to low):

    1. pod sandbox annotation
    2. shimv2 create task option
    3. environment
    4. default path

    From your fc error, it seems there is something wrong on the containerd snapshotter side: fc only works with devicemapper, and it cannot work with overlay. From your config, I saw you used overlay as the snapshotter (see the devmapper sketch below): https://github.com/errordeveloper/kata-fc-eks/blob/e40291075a4a69bbc270442a1ce1fa65265795d4/cluster.yaml#L86
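
    To illustrate the snapshotter point, a minimal containerd fragment that switches CRI to the devmapper snapshotter; this assumes containerd >= 1.3, the pool name and size are placeholders, and the devicemapper thin-pool must be created beforehand:

    toml
    # Fragment for /etc/containerd/config.toml; values are placeholders.
    [plugins.devmapper]
      pool_name = "containerd-pool"   # a pre-created devicemapper thin-pool
      base_image_size = "8192MB"      # size of the block device backing each image
    [plugins.cri.containerd]
      snapshotter = "devmapper"       # fc needs a block-backed rootfs; overlayfs won't work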

  • weixin_39531761 weixin_39531761 5 months ago

    thanks, I'll do some more testing with this info in mind.

    It's already very evident that the documentation needs a lot of improvement, and that's not something an external contributor could tackle properly, so one of the core contributors should take a good look at it!

  • weixin_39531761 weixin_39531761 5 months ago

    So, just to clarify, you are saying:

    pass the config file via the runtime class options, just as you did

    Well, that didn't work; it certainly looks like a bug to me.

  • weixin_39693971 weixin_39693971 5 months ago

    So, just to clarify, you are saying:

    pass the config file via the runtime class options, just as you did

    Well, that didn't work; it certainly looks like a bug to me.

    You mean you couldn't pass the config file using the runtime class options? For both qemu and fc?

  • weixin_39531761 weixin_39531761 5 months ago

    I set the config_path in my containerd config and it had no effect; I still got the error that it couldn't find configuration.toml, which is exactly the starting point of my issue.

    To be more specific, I was trying this in /etc/containerd/config.toml:

    
                  [plugins.cri.containerd.runtimes.kata-qemu]
                    runtime_type = "io.containerd.kata.v2"
                    [plugins.cri.containerd.runtimes.kata-qemu.options]
                      ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
    
  • weixin_39531761 weixin_39531761 5 months ago

    And now I have this:

    
          [plugins.cri.containerd.runtimes.kata-qemu]
            runtime_type = "io.containerd.kata-qemu.v2"
            pod_annotations = ["io.katacontainers.*"]
    

    And, per your suggestion, I have set the pod annotation like this:

    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: centos
      labels:
        app: centos
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: centos
      template:
        metadata:
          labels:
            app: centos
          annotations:
            io.katacontainers.config_path: /opt/kata/share/defaults/kata-containers/configuration-qemu.toml
        spec:
          runtimeClassName: kata-qemu
          containers:
          - name: console
            image: quay.io/footloose/centos7
            command: [/sbin/init]
            tty: true
            securityContext:
              privileged: true
    

    And I'm still getting the same error:

    
      Warning  FailedCreatePodSandBox  40s (x26 over 6m24s)  kubelet, ip-192-168-52-103.eu-west-1.compute.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container: failed to create containerd task: Cannot find usable config file (config file "/etc/kata-containers/configuration.toml" unresolvable: file /etc/kata-containers/configuration.toml does not exist, config file "/usr/share/defaults/kata-containers/configuration.toml" unresolvable: file /usr/share/defaults/kata-containers/configuration.toml does not exist): not found
    
  • weixin_39693971 weixin_39693971 5 months ago

    I set the config_path in my containerd config and it had no effect; I still got the error that it couldn't find configuration.toml, which is exactly the starting point of my issue.

    To be more specific, I was trying this in /etc/containerd/config.toml:

    
                  [plugins.cri.containerd.runtimes.kata-qemu]
                    runtime_type = "io.containerd.kata.v2"
                    [plugins.cri.containerd.runtimes.kata-qemu.options]
                      ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
    

    Hi, what's the kata version you used? I tried it with the master branch, and it does work on my side.

  • weixin_39531761 weixin_39531761 5 months ago

    I have installed 1.10.1. I wanted to install 1.10.2, but the tarball with static binaries is missing for 1.10.2. Also, my containerd version is 1.2.6.

  • weixin_39531761 weixin_39531761 5 months ago

    ❗️Looks like containerd 1.2.6 doesn't actually pass the annotations, but 1.3.3 does.

    I created a shim debug wrapper like this:

    
    [ec2-user-192-168-52-103 tmp]$ cat /opt/kata/bin/containerd-shim-kata-qemu-v2
    #!/bin/bash
    # Record the arguments, environment, and OCI spec the shim was invoked with,
    # then exec the real shim binary.
    echo "$@" > /tmp/kata-$$.args
    env > /tmp/kata-$$.env
    cp "$PWD/config.json" /tmp/kata-$$.json
    exec /opt/kata/bin/containerd-shim-kata-v2 "$@"
    [ec2-user-192-168-52-103 tmp]$
    

    And here is what I see with containerd 1.2.6:

    
    [ec2-user-192-168-52-103 tmp]$ cat /tmp/kata-65146.json | jq .annotations
    {
      "io.kubernetes.cri.container-type": "sandbox",
      "io.kubernetes.cri.sandbox-id": "acd75797065f3c268b12c0674f486a60aacca8f6d30cb664ec158dda0c4b8831",
      "io.kubernetes.cri.sandbox-log-directory": "/var/log/pods/default_centos-58ffc54b94-tzx7g_2ce73eae-eb52-45df-a0d2-da44c73c4926"
    }
    

    And after upgrading to containerd 1.3.3, I start seeing kata annotations:

    
    [ec2-user-192-168-52-103 tmp]$ cat /tmp/kata-62377.json | jq .annotations
    {
      "io.katacontainers.config_path": "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml",
      "io.kubernetes.cri.container-type": "sandbox",
      "io.kubernetes.cri.sandbox-id": "9a5101ce68d5567f3eff7bb435581151d65261bf4f1a8de9d8242c94921a9222",
      "io.kubernetes.cri.sandbox-log-directory": "/var/log/pods/default_centos-58ffc54b94-k9vzk_d288b321-cf2c-4dac-bc57-312fc9ee2cb3"
    }
    [ec2-user-192-168-52-103 tmp]$
    

    However, with kata 1.10.1, I still get the same error...

  • weixin_39531761 weixin_39531761 5 months ago

    So I just discovered that the config_path annotation specifically is not supported in 1.10; support was added (1c11fe20ba3497bd8cf3082169a09d8ff21cbae7) after 1.10 was out.

  • weixin_39531761 weixin_39531761 5 months ago

    I can also confirm that the following containerd configuration works:

    
                  [plugins.cri.containerd.runtimes.kata-qemu]
                    runtime_type = "io.containerd.kata-qemu.v2"
                    pod_annotations = ["io.katacontainers.*"]
                    [plugins.cri.containerd.runtimes.kata-qemu.options]
                      ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
    

    And the key name is certainly ConfigPath; I tested config_path and it didn't work. That is a rather inconsistent configuration style, I have to say.

  • weixin_39693971 weixin_39693971 5 months ago

    I can also confirm that the following containerd configuration works:

    
                  [plugins.cri.containerd.runtimes.kata-qemu]
                    runtime_type = "io.containerd.kata-qemu.v2"
                    pod_annotations = ["io.katacontainers.*"]
                    [plugins.cri.containerd.runtimes.kata-qemu.options]
                      ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
    

    And the key name is certainly ConfigPath; I tested config_path and it didn't work. That is a rather inconsistent configuration style, I have to say.

    Yeah, it's weird that "config_path" does not work. I'll have a look at this issue.

  • weixin_39693971 weixin_39693971 5 months ago

    Hi
    Now I know why "config_path" doesn't work - you can see the runtime options definition here: https://github.com/containerd/cri/blob/master/pkg/api/runtimeoptions/v1/api.pb.go#L56
    The struct only defines json-style field renames for deserialization and carries no "toml" tags, even though containerd's config file is TOML; that's why "config_path" doesn't work while ConfigPath does.
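
    Abridged from the linked api.pb.go (trimmed for illustration): the struct carries only protobuf/json tags, so containerd's TOML decoding can match the exported field name ConfigPath, but nothing maps the snake_case "config_path":

    go
    // Options passed to a shim v2 runtime, per containerd/cri's
    // runtimeoptions v1 API. Note the json tags and the absence of toml tags.
    type Options struct {
        // TypeUrl specifies the type of the config file.
        TypeUrl string `protobuf:"bytes,1,opt,name=type_url,proto3" json:"type_url,omitempty"`
        // ConfigPath specifies the filesystem location of the config file.
        ConfigPath string `protobuf:"bytes,2,opt,name=config_path,proto3" json:"config_path,omitempty"`
    }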

  • weixin_39531761 weixin_39531761 5 months ago

    Here is my eksctl config file; it should be fully reproducible.

    https://github.com/errordeveloper/kata-fc-eks/blob/e6ee6fc523228fd07a5770868db39928c9a5d4fc/cluster.yaml

    Happy to explain if anyone has more questions about the config file.

  • weixin_39873177 weixin_39873177 5 months ago

    Can you give us a link to the exact docs you were following as well pls, just for completeness :-) thx
