weixin_39885412
2020-12-02 13:28

Unable to mount block device due to race between agent and device

# Description of problem

We are occasionally seeing failures when mounting devicemapper block devices for the container rootfs.

The following error is displayed:


Aug 12 02:34:07 ip-10-151-109-213.ec2.internal containerd[2565]: time="2019-08-12T02:34:07.764618542Z" level=error msg="StartContainer for \"7a539cd26f8272a7b31c08473bb82ce54bc0eea1d8c3225a818749a04f059ab1\" failed" error="failed to create containerd task: Could not resolve symlink for source /dev/sde: unknown"

The error originates here: https://github.com/kata-containers/agent/blob/3f0a86575ede395afe11b6918c8e1c7b2b8b8869/mount.go#L116-L119

I believe this is due to a race between when the block device is hotplugged into the guest and when the agent tries to mount the /dev/sdX device. In some cases, /dev/sdX is not yet visible in the guest when the agent tries to mount it. The device becomes visible shortly afterwards, but by then it is too late: the pod has already failed.
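As a sketch of the direction a fix could take (illustrative only: `waitForDevice` and the polling approach are assumptions, not the agent's actual code, which relies on a udev listener), the agent could wait for the device node to appear, bounded by a timeout, before resolving the symlink and mounting:

```go
package main

import (
    "fmt"
    "os"
    "time"
)

// waitForDevice polls until the device node exists or the timeout expires.
// This is a stand-alone illustration, not the agent's actual code path.
func waitForDevice(path string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for {
        if _, err := os.Stat(path); err == nil {
            return nil // node is visible; safe to resolve the symlink and mount
        }
        if time.Now().After(deadline) {
            return fmt.Errorf("device %s did not appear within %v", path, timeout)
        }
        time.Sleep(10 * time.Millisecond)
    }
}

func main() {
    if err := waitForDevice("/dev/sde", 3*time.Second); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("device ready")
}
```

Polling is just the simplest way to express the idea; hooking into the agent's existing uevent handling would avoid busy-waiting.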

I've captured the following logs (more are available if needed):


vmconsole="time=\"2019-08-12T02:34:07.693618516Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/7a539cd26f8272a7b31c08473bb82ce54bc0eea1d8c3225a818749a04f059ab1 mount-flags=0 mount-fstype=ext4 mount-options= mount-source=/dev/sde name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
vmconsole="time=\"2019-08-12T02:34:07.732122764Z\" level=info msg=\"Received add uevent\" debug_console=false name=kata-agent pid=90 source=agent subsystem=udevlistener uevent-action=add uevent-devname=sde uevent-devpath=\"/devices/pci0000:00/0000:00:03.0/virtio1/host0/target0:0:0/0:0:0:4/block/sde\" uevent-seqnum=1034 uevent-subsystem=block"
time="2019-08-12T02:34:07.764618542Z" level=error msg="StartContainer for \"7a539cd26f8272a7b31c08473bb82ce54bc0eea1d8c3225a818749a04f059ab1\" failed" error="failed to create containerd task: Could not resolve symlink for source /dev/sde: unknown"

Essentially, the mount of the /dev/sde device is attempted before the device is actually visible in the guest; the udev add event for it arrives only afterwards.

# Expected result

For the Kata container to start up without issues.

# Actual result

We see occasional failures when starting Kata containers due to issues mounting the rootfs for the container.

We see this occurring for about 0.5% of the total pods we run, so a very small percentage. I've found it very difficult to reproduce.

kata-collect-data.sh details

# Meta details

Running `kata-collect-data.sh` version `1.8.0 (commit 31b8cb3fbc7b1de9c94201aed7c54ddb73e97587)` at `2019-08-13.06:37:57.741490733+0000`.

---

Runtime is `/opt/kata/bin/kata-runtime`.

# `kata-env`

Output of "`/opt/kata/bin/kata-runtime kata-env`":

```toml
[Meta]
  Version = "1.0.23"

[Runtime]
  Debug = false
  Trace = false
  DisableGuestSeccomp = true
  DisableNewNetNs = false
  Path = "/opt/kata/bin/kata-runtime"
  [Runtime.Version]
    Semver = "1.8.0"
    Commit = "31b8cb3fbc7b1de9c94201aed7c54ddb73e97587"
    OCI = "1.0.1-dev"
  [Runtime.Config]
    Path = "/etc/kata-containers/configuration.toml"

[Hypervisor]
  MachineType = "virt"
  Version = "NEMU (like QEMU) version 4.0.0 (kata-static)\nCopyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers"
  Path = "/opt/kata/bin/nemu-system-x86_64"
  BlockDeviceDriver = "virtio-scsi"
  EntropySource = "/dev/urandom"
  Msize9p = 8192
  MemorySlots = 10
  Debug = false
  UseVSock = false
  SharedFS = "virtio-9p"

[Image]
  Path = "/opt/kata/share/kata-containers/kata-containers-ubuntu-console-1.8.0-9e0a061.img"

[Kernel]
  Path = "/opt/kata/share/kata-containers/vmlinuz-4.19.52-45"
  Parameters = "init=/usr/lib/systemd/systemd systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket"

[Initrd]
  Path = ""

[Proxy]
  Type = "kataProxy"
  Version = "kata-proxy version 1.8.0-7cd620094df16486c19b2c20300f9847a94c23af"
  Path = "/opt/kata/libexec/kata-containers/kata-proxy"
  Debug = false

[Shim]
  Type = "kataShim"
  Version = "kata-shim version 1.8.0-bc65ea8385a09611054d95b5dbc8f858f87c3d7a"
  Path = "/opt/kata/libexec/kata-containers/kata-shim"
  Debug = false

[Agent]
  Type = "kata"
  Debug = false
  Trace = false
  TraceMode = ""
  TraceType = ""

[Host]
  Kernel = "4.19.50-coreos-r1"
  Architecture = "amd64"
  VMContainerCapable = true
  SupportVSocks = true
  [Host.Distro]
    Name = "Container Linux by CoreOS"
    Version = "2135.4.0"
  [Host.CPU]
    Vendor = "GenuineIntel"
    Model = "Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz"

[Netmon]
  Version = "kata-netmon version 1.8.0"
  Path = "/opt/kata/libexec/kata-containers/kata-netmon"
  Debug = false
  Enable = false
```

---

# Runtime config files

## Runtime default config files

/etc/kata-containers/configuration.toml
/opt/kata/share/defaults/kata-containers/configuration.toml
## Runtime config file contents

Output of "`cat "/etc/kata-containers/configuration.toml"`":
```toml
# Copyright (c) 2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-nemu.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

# nemu utilizes the 'qemu' hypervisor template type, since it closely matches qemu
[hypervisor.qemu]
path = "/opt/kata/bin/nemu-system-x86_64"
kernel = "/opt/kata/share/kata-containers/vmlinuz.container"
image = "/opt/kata/share/kata-containers/kata-containers.img"
machine_type = "virt"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""

# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = "/opt/kata/share/kata-nemu/OVMF.fd"

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per SB/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 1

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0             --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what are you doing.
default_maxvcpus = 0

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set 10.
# This is will determine the times that memory will be hotadded to sandbox/VM.
#memory_slots = 10

# The size in MiB will be plused to max memory of hypervisor.
# It is the memory address space for the NVDIMM devie.
# If set block storage driver (block_device_driver) to "nvdimm",
# should set memory_offset to the size of block device.
# Default 0
#memory_offset = 0

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's 
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons. 
# This flag prevents the block device from being passed to the hypervisor, 
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Shared file system type:
#   - virtio-fs (default)
#   - virtio-9p
shared_fs = "virtio-9p"

# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/opt/kata/bin/virtiofsd-x86_64"

# Default size of DAX cache in MiB
virtio_fs_cache_size = 1024

# Cache mode:
#
#  - none
#    Metadata, data, and pathname lookup are not cached in guest. They are
#    always fetched from host and any changes are immediately pushed to host.
#
#  - auto
#    Metadata and pathname lookup cache expires after a configured amount of
#    time (default is 1 second). Data is cached while the file is open (close
#    to open consistency).
#
#  - always
#    Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "always"

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"

# Specifies cache-related options will be set to block devices or not.
# Default false
#block_device_cache_set = true

# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true

# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically 
# result in memory pre allocation
enable_hugepages = false

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
# 
# Default false
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
# 
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes 
# used for 9p packet payload.
#msize_9p = 8192

# If true and vsocks are supported, use vsocks to communicate directly
# with the agent (no proxy is started).
# Default true
#use_vsock = true

# VFIO devices are hotplugged on a bridge by default. 
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on 
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true

# If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics.
# Default false
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy.  If the host
# runs out of entropy, the VMs boot time will increase leading to get startup
# timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"

# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,postart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered will scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"

[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speeding up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true

# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"

# The number of caches of VMCache:
# unspecified or == 0   --> VMCache is disabled
# > 0                   --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before using it.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through Unix socket.  The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert the VM to gRPC format and transport it when gets
# requestion from clients.
# Factory grpccache is the VMCache client.  It will request gRPC format
# VM and convert it back to a VM.  If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0

# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"

[proxy.kata]
path = "/opt/kata/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[shim.kata]
path = "/opt/kata/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true

# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true

[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true

# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicity with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
#   setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
#   will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
#   full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"

[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional
# network being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true

# Specify the path to the netmon binary.
path = "/opt/kata/libexec/kata-containers/kata-netmon"

# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
#   - bridged
#     Uses a linux bridge to interconnect the container interface to
#     the VM. Works for most cases except macvlan and ipvlan.
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
#
#   - none
#     Used when customize network. Only creates a tap device. No veth pair.
#
#   - tcfilter
#     Uses tc filter rules to redirect traffic from the network interface
#     provided by plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"

# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true

# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true

# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true

# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# They may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# 1. "newstore": new persist storage driver which breaks backward compatibility,
#               expected to move out of experimental in 2.0.0.
# (default: [])
experimental=[]
```

Output of "`cat "/opt/kata/share/defaults/kata-containers/configuration.toml"`":
```toml
# Copyright (c) 2017-2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration-qemu.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

[hypervisor.qemu]
path = "/opt/kata/bin/qemu-system-x86_64"
kernel = "/opt/kata/share/kata-containers/vmlinuz.container"
image = "/opt/kata/share/kata-containers/kata-containers.img"
machine_type = "pc"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""

# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per SB/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 1

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0             --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what are you doing.
default_maxvcpus = 0

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
default_memory = 2048
#
# Default memory slots per SB/VM.
# If unspecified then it will be set 10.
# This is will determine the times that memory will be hotadded to sandbox/VM.
#memory_slots = 10

# The size in MiB will be plused to max memory of hypervisor.
# It is the memory address space for the NVDIMM devie.
# If set block storage driver (block_device_driver) to "nvdimm",
# should set memory_offset to the size of block device.
# Default 0
#memory_offset = 0

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's 
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons. 
# This flag prevents the block device from being passed to the hypervisor, 
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Shared file system type:
#   - virtio-9p (default)
#   - virtio-fs
shared_fs = "virtio-9p"

# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/opt/kata/bin/virtiofsd-x86_64"

# Default size of DAX cache in MiB
virtio_fs_cache_size = 1024

# Cache mode:
#
#  - none
#    Metadata, data, and pathname lookup are not cached in guest. They are
#    always fetched from host and any changes are immediately pushed to host.
#
#  - auto
#    Metadata and pathname lookup cache expires after a configured amount of
#    time (default is 1 second). Data is cached while the file is open (close
#    to open consistency).
#
#  - always
#    Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "always"

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "virtio-scsi"

# Specifies cache-related options will be set to block devices or not.
# Default false
#block_device_cache_set = true

# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true

# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically 
# result in memory pre allocation
#enable_hugepages = true

# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = ""

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
# 
# Default false
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
# 
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes 
# used for 9p packet payload.
#msize_9p = 8192

# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true

# VFIO devices are hotplugged on a bridge by default. 
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on 
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true

# If host doesn't support vhost_net, set to true. Thus we won't create vhost fds for nics.
# Default false
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy.  If the host
# runs out of entropy, the VMs boot time will increase leading to get startup
# timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "/dev/urandom"

# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,postart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered will scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"

[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speeding up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true

# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"

# The number of caches of VMCache:
# unspecified or == 0   --> VMCache is disabled
# > 0                   --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before using it.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through Unix socket.  The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert the VM to gRPC format and transport it when gets
# requestion from clients.
# Factory grpccache is the VMCache client.  It will request gRPC format
# VM and convert it back to a VM.  If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0

# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"

[proxy.kata]
path = "/opt/kata/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[shim.kata]
path = "/opt/kata/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true

# If enabled, the shim will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
#
# Note: By default, the shim runs in a separate network namespace. Therefore,
# to allow it to send trace details to the Jaeger agent running on the host,
# it is necessary to set 'disable_new_netns=true' so that it runs in the host
# network namespace.
#
# (default: disabled)
#enable_tracing = true

[agent.kata]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true

# Enable agent tracing.
#
# If enabled, the default trace mode is "dynamic" and the
# default trace type is "isolated". The trace mode and type are set
# explicity with the `trace_type=` and `trace_mode=` options.
#
# Notes:
#
# - Tracing is ONLY enabled when `enable_tracing` is set: explicitly
#   setting `trace_mode=` and/or `trace_type=` without setting `enable_tracing`
#   will NOT activate agent tracing.
#
# - See https://github.com/kata-containers/agent/blob/master/TRACING.md for
#   full details.
#
# (default: disabled)
#enable_tracing = true
#
#trace_mode = "dynamic"
#trace_type = "isolated"

[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional
# network being added to the existing network namespace, after the
# sandbox has been created.
# (default: disabled)
#enable_netmon = true

# Specify the path to the netmon binary.
path = "/opt/kata/libexec/kata-containers/kata-netmon"

# If enabled, netmon messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
#   - bridged
#     Uses a linux bridge to interconnect the container interface to
#     the VM. Works for most cases except macvlan and ipvlan.
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
#
#   - none
#     Used when customize network. Only creates a tap device. No veth pair.
#
#   - tcfilter
#     Uses tc filter rules to redirect traffic from the network interface
#     provided by plugin to a tap interface connected to the VM.
#
internetworking_model="tcfilter"

# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=true

# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true

# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impacts to your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `enable_netmon`
# `disable_new_netns` conflicts with `internetworking_model=bridged` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# If you are using docker, `disable_new_netns` only works with `docker run --net=none`
# (default: false)
#disable_new_netns = true

# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# They may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# 1. "newstore": new persist storage driver which breaks backward compatibility,
#               expected to move out of experimental in 2.0.0.
# (default: [])
experimental=[]
```

Config file `/usr/share/defaults/kata-containers/configuration.toml` not found

---

# KSM throttler

## version

Output of "` --version`":

./kata-collect-data.sh: line 178: --version: command not found
## systemd service

# Image details
```yaml
---
osbuilder:
  url: "https://github.com/kata-containers/osbuilder"
  version: "unknown"
rootfs-creation-time: "2019-07-22T00:55:25.248493513+0000Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
  name: "bionic"
  version: "18.04"
  packages:
    default:
      - "systemd,iptables,init,chrony,fuse,bash"
    extra:
      - "bash"
      - "fuse"
agent:
  url: "https://github.com/kata-containers/agent"
  name: "kata-agent"
  version: "1.8.0-3195c3e48c50dc835ab84d6b4ced397659369fb8"
  agent-is-init-daemon: "no"
```

---

# Initrd details

No initrd

---

# Logfiles

## Runtime logs

No recent runtime problems found in system journal.

## Proxy logs

No recent proxy problems found in system journal.

## Shim logs

No recent shim problems found in system journal.

## Throttler logs

No recent throttler problems found in system journal.

---

# Container manager details

Have `docker`

## Docker

Output of "`docker version`":

Client: Docker Engine - Community
 Version:           18.09.7
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        2d0083d
 Built:             Thu Jun 27 17:54:15 2019
 OS/Arch:           linux/amd64
 Experimental:      false
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Output of "`docker info`":

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Output of "`systemctl show docker`":

Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestampMonotonic=0
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=0
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestampMonotonic=0
ExecMainExitTimestampMonotonic=0
ExecMainPID=0
ExecMainCode=0
ExecMainStatus=0
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=[not set]
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=no
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=yes
MemoryMin=0
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=73727
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=1073741816
LimitNOFILESoft=1073741816
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=2063281
LimitNPROCSoft=2063281
LimitMEMLOCK=67108864
LimitMEMLOCKSoft=67108864
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=2063281
LimitSIGPENDINGSoft=2063281
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=inherit
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
LogRateLimitIntervalUSec=0
LogRateLimitBurst=0
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
PrivateMounts=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=control-group
KillSignal=15
FinalKillSignal=9
SendSIGKILL=yes
SendSIGHUP=no
WatchdogSignal=6
Id=docker.service
Names=docker.service
WantedBy=kitt-init.service
ConsistsOf=docker.socket
Before=kitt-init.service
After=docker.socket
TriggeredBy=docker.socket
Description=docker.service
LoadState=masked
ActiveState=inactive
SubState=dead
FragmentPath=/dev/null
UnitFileState=masked
StateChangeTimestampMonotonic=0
InactiveExitTimestampMonotonic=0
ActiveEnterTimestampMonotonic=0
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=no
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=no
AssertResult=no
ConditionTimestampMonotonic=0
AssertTimestampMonotonic=0
LoadError=org.freedesktop.systemd1.UnitMasked "Unit docker.service is masked."
Transient=no
Perpetual=no
StartLimitIntervalUSec=10s
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
FailureActionExitStatus=-1
SuccessAction=none
SuccessActionExitStatus=-1
CollectMode=inactive
No `kubectl`

No `crio`

Have `containerd`

## containerd

Output of "`containerd --version`":

containerd github.com/containerd/containerd v1.2.0-872-ga1c88e19.m a1c88e195aa98e5fcd9bf42294caf0bbf7a543ae.m
Output of "`systemctl show containerd`":

Type=simple
Restart=always
NotifyAccess=none
RestartUSec=5s
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestampMonotonic=0
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=2865
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestamp=Tue 2019-08-13 06:14:35 UTC
ExecMainStartTimestampMonotonic=64298646
ExecMainExitTimestampMonotonic=0
ExecMainPID=2865
ExecMainCode=0
ExecMainStatus=0
ExecStartPre={ path=/sbin/modprobe ; argv[]=/sbin/modprobe overlay ; ignore_errors=no ; start_time=[Tue 2019-08-13 06:14:33 UTC] ; stop_time=[Tue 2019-08-13 06:14:33 UTC] ; pid=2612 ; code=exited ; status=0 }
ExecStartPre={ path=/opt/bin/containerd-init.sh ; argv[]=/opt/bin/containerd-init.sh ; ignore_errors=no ; start_time=[Tue 2019-08-13 06:14:33 UTC] ; stop_time=[Tue 2019-08-13 06:14:35 UTC] ; pid=2614 ; code=exited ; status=0 }
ExecStart={ path=/opt/containerd/bin/containerd ; argv[]=/opt/containerd/bin/containerd --log-level=debug --config=/etc/containerd/config.toml ; ignore_errors=no ; start_time=[Tue 2019-08-13 06:14:35 UTC] ; stop_time=[n/a] ; pid=2865 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/containerd.service
MemoryCurrent=1023807488
CPUUsageNSec=[not set]
TasksCurrent=280
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=yes
DelegateControllers=cpu cpuacct io blkio memory devices pids bpf-firewall bpf-devices
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=yes
MemoryMin=0
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=73727
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=infinity
LimitNPROCSoft=infinity
LimitMEMLOCK=65536
LimitMEMLOCKSoft=65536
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=2063281
LimitSIGPENDINGSoft=2063281
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=-999
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
LogRateLimitIntervalUSec=0
LogRateLimitBurst=0
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
PrivateMounts=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=process
KillSignal=15
FinalKillSignal=9
SendSIGKILL=yes
SendSIGHUP=no
WatchdogSignal=6
Id=containerd.service
Names=containerd.service
Requires=system.slice sysinit.target
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=multi-user.target shutdown.target
After=basic.target kata-init.service containerd-devicemapper-init.service systemd-journald.socket sysinit.target system.slice
Documentation=https://containerd.io
Description=containerd container runtime
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/etc/systemd/system/containerd.service
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Tue 2019-08-13 06:14:35 UTC
StateChangeTimestampMonotonic=64298707
InactiveExitTimestamp=Tue 2019-08-13 06:14:33 UTC
InactiveExitTimestampMonotonic=62336899
ActiveEnterTimestamp=Tue 2019-08-13 06:14:35 UTC
ActiveEnterTimestampMonotonic=64298707
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Tue 2019-08-13 06:14:33 UTC
ConditionTimestampMonotonic=62335711
AssertTimestamp=Tue 2019-08-13 06:14:33 UTC
AssertTimestampMonotonic=62335711
Transient=no
Perpetual=no
StartLimitIntervalUSec=10s
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
FailureActionExitStatus=-1
SuccessAction=none
SuccessActionExitStatus=-1
InvocationID=10a9a1a0b4a241899d38c067f5ca0196
CollectMode=inactive
Output of "`cat /etc/containerd/config.toml`":

[grpc]
address = "/run/containerd/containerd.sock"
uid = 0
gid = 0

[plugins]
  [plugins.devmapper]
    pool_name = "containerd-thinpool"
    base_image_size = "32GB"
  [plugins.cri.containerd]
    snapshotter = "devmapper"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/bin/runc"
      runtime_root = ""
    [plugins.cri.containerd.untrusted_workload_runtime]
      runtime_type = "io.containerd.kata.v2"
      privileged_without_host_devices = true

[plugins.cri]
  max_container_log_line_size = 262144

[plugins.linux]
  shim = "/opt/containerd/bin/containerd-shim"
  runtime = "runc"

[plugins.cri.registry]
  [plugins.cri.registry.mirrors]
    [plugins.cri.registry.mirrors."docker.io"]
       endpoint = []
---

# Packages

No `dpkg`

No `rpm`

---

This question comes from the open-source project: kata-containers/runtime


12 replies

  • weixin_39885412 · 5 months ago

    /cc

  • weixin_39693971 · 5 months ago

    Hi, there is a real issue with how we wait on the device with the virtio-scsi driver; I think there is no issue for virtio-blk mode. I'll fix it later.

  • weixin_39693971 · 5 months ago

    Can you paste the full log, since it's not easy to reproduce from my side?

  • weixin_39646970 · 5 months ago

    Yes, can you provide the complete logs that you get from the agent? Also, can you let us know if you have a way to easily reproduce this with the pod yaml that you are using?

  • weixin_39646970 · 5 months ago

    I tried to see if this can be reproduced as well, but wasn't able to do so. I am wondering if the timeout that we have today for detecting hotplugged devices is too small in your case. We currently have it set to 3 seconds: https://github.com/kata-containers/agent/blob/master/device.go#L50

    In case you are using a slow disk, we may need to increase that timeout.
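
    A minimal sketch of the wait-with-timeout pattern under discussion (illustrative only; `waitHotplug` and the ready channel are stand-ins, not the agent's API):

    ```go
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitHotplug blocks until the udev listener signals the "add" uevent for
    // the device, or gives up after the timeout (3 seconds in device.go at the
    // time of writing). The channel is an illustrative stand-in for the agent's
    // internal notification mechanism.
    func waitHotplug(devPath string, ready <-chan struct{}, timeout time.Duration) error {
        select {
        case <-ready:
            return nil // uevent seen; the device node should now exist
        case <-time.After(timeout):
            return errors.New("timed out waiting for device " + devPath)
        }
    }

    func main() {
        ready := make(chan struct{})
        go func() {
            time.Sleep(time.Second) // simulate the uevent arriving after 1s
            close(ready)
        }()
        fmt.Println(waitHotplug("/dev/sde", ready, 3*time.Second))
    }
    ```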

  • weixin_39885412 · 5 months ago

    Unfortunately I wasn't able to reproduce this, but I can provide the logs.

    Is that timeout used for virtio-scsi? Looking at the code it seems like it's only used when dealing with virtio-blk devices.

    Looking at the code for https://github.com/kata-containers/agent/blob/master/device.go#L450-L475 am I right to believe that it will wait for the device to be available before mounting?

    Regarding the slow disk - it is possible that there is a large amount of CPU load on the node when this occurs, which may impact it.
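
    For context on the virtio-scsi lookup being asked about, here is a rough, purely illustrative sketch of mapping a SCSI address to its /dev/sdX name through sysfs (the exact sysfs path and the hardcoded host/channel prefix are assumptions, not the agent's code):

    ```go
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // scsiAddrToDevName scans the standard Linux sysfs layout to find the
    // block-device name behind a SCSI target:lun address. Hypothetical helper,
    // shown only to illustrate the kind of lookup device.go performs.
    func scsiAddrToDevName(scsiAddr string) (string, error) {
        // e.g. /sys/class/scsi_disk/0:0:0:4/device/block/sde
        pattern := filepath.Join("/sys/class/scsi_disk", "0:0:"+scsiAddr, "device", "block", "*")
        matches, err := filepath.Glob(pattern)
        if err != nil || len(matches) == 0 {
            return "", fmt.Errorf("no block device found for SCSI address %s", scsiAddr)
        }
        return "/dev/" + filepath.Base(matches[0]), nil
    }

    func main() {
        name, err := scsiAddrToDevName("0:4") // target:lun, as in the logs above
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(name)
    }
    ```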

  • weixin_39693971 · 5 months ago

    > Unfortunately I wasn't able to reproduce this, but I can provide the logs.
    >
    > Is that timeout used for virtio-scsi? Looking at the code it seems like it's only used when dealing with virtio-blk devices.
    >
    > Looking at the code for https://github.com/kata-containers/agent/blob/master/device.go#L450-L475 am I right to believe that it will wait for the device to be available before mounting?

    Yes, you are right, it wouldn't do the mount until the device appeared.

    > Regarding the slow disk - it is possible that there is a large amount of CPU load on the node when this occurs, which may impact it.

    It's weird: if the system load is heavy and the wait on the device times out, then it should return directly and not perform the following mount action. Can you upload the full log here?
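
    To make that expected control flow concrete, a hedged sketch (`mountIfReady` and `mountDevice` are hypothetical stand-ins): a timed-out wait returns an error immediately, so the mount is never attempted:

    ```go
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // mountIfReady only proceeds to the mount if the device appears in time;
    // on timeout the error propagates and the mount is skipped entirely.
    func mountIfReady(devPath string, ready <-chan struct{}, timeout time.Duration) error {
        select {
        case <-ready:
            // Device appeared in time; only now is the mount attempted.
        case <-time.After(timeout):
            return errors.New("wait for " + devPath + " timed out; mount skipped")
        }
        return mountDevice(devPath)
    }

    func mountDevice(devPath string) error {
        fmt.Println("mounting", devPath)
        return nil
    }

    func main() {
        ready := make(chan struct{})
        close(ready) // simulate the device already being present
        fmt.Println(mountIfReady("/dev/sde", ready, 3*time.Second))
    }
    ```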

  • weixin_39885412 · 5 months ago

    Here are the full logs from when the issue occurred: https://gist.githubusercontent.com/awprice/de44132a9893a92e8b06a7a5f59d9365/raw/3b242bfe1015f152cd8f6bb66ed39fb6ee2681d6/kata.log

    The sandbox/VM ID you will want to look at is 1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0

    The container ID inside the guest is 7a539cd26f8272a7b31c08473bb82ce54bc0eea1d8c3225a818749a04f059ab1

    Some relevant logs I've pulled out from the full logs:

    
    time="2019-08-12T02:34:05.718397643Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:05.708327227Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/ mount-flags=4 mount-fstype=9p mount-options=\"trans=virtio,version=9p2000.L,cache=mmap,msize=8192\" mount-source=kataShared name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:05.718397643Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:05.708327227Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/ mount-flags=4 mount-fstype=9p mount-options=\"trans=virtio,version=9p2000.L,cache=mmap,msize=8192\" mount-source=kataShared name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:05.720907224Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:05.710860912Z\" level=debug debug_console=false mount-destination=/run/kata-containers/sandbox/shm mount-flags=14 mount-fstype=tmpfs mount-options=\"mode=1777,size=67108864\" mount-source=shm name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:05.720907224Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:05.710860912Z\" level=debug debug_console=false mount-destination=/run/kata-containers/sandbox/shm mount-flags=14 mount-fstype=tmpfs mount-options=\"mode=1777,size=67108864\" mount-source=shm name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:05.734923807Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:05.724883068Z\" level=info msg=\"Device already hotplugged, quit listening\" debug_console=false device=\"0:0\" name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:05.735042039Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:05.725056334Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 mount-flags=0 mount-fstype=ext4 mount-options= mount-source=/dev/sda name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:05.734923807Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:05.724883068Z\" level=info msg=\"Device already hotplugged, quit listening\" debug_console=false device=\"0:0\" name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:05.735042039Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:05.725056334Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 mount-flags=0 mount-fstype=ext4 mount-options= mount-source=/dev/sda name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:06.21582267Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:06.205805744Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/0ea79ff45118086b50bdf44195c200cb807d9b66cc62f8f81c4a66bfe5c64ef3 mount-flags=0 mount-fstype=ext4 mount-options= mount-source=/dev/sdb name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:06.215822670Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:06.205805744Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/0ea79ff45118086b50bdf44195c200cb807d9b66cc62f8f81c4a66bfe5c64ef3 mount-flags=0 mount-fstype=ext4 mount-options= mount-source=/dev/sdb name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:06.658164703Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:06.648181969Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/f5c865a15426bbc70d5e16375691398a0bb1b588bc8c649019fd58b2ad3dd60e mount-flags=0 mount-fstype=ext4 mount-options= mount-source=/dev/sdc name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:06.658058657Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:06.648038969Z\" level=info msg=\"Device already hotplugged, quit listening\" debug_console=false device=\"0:2\" name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:06.658164703Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:06.648181969Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/f5c865a15426bbc70d5e16375691398a0bb1b588bc8c649019fd58b2ad3dd60e mount-flags=0 mount-fstype=ext4 mount-options= mount-source=/dev/sdc name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:06.658058657Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:06.648038969Z\" level=info msg=\"Device already hotplugged, quit listening\" debug_console=false device=\"0:2\" name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:07.198868215Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:07.188903919Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/92c69e13a7e2aa60c925d3a9c8ed04b95fc7ed42c450879fea19cc59c8b15af9 mount-flags=0 mount-fstype=ext4 mount-options= mount-source=/dev/sdd name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:07.198766316Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:07.188791393Z\" level=info msg=\"Device already hotplugged, quit listening\" debug_console=false device=\"0:3\" name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:07.198868215Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:07.188903919Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/92c69e13a7e2aa60c925d3a9c8ed04b95fc7ed42c450879fea19cc59c8b15af9 mount-flags=0 mount-fstype=ext4 mount-options= mount-source=/dev/sdd name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:07.198766316Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:07.188791393Z\" level=info msg=\"Device already hotplugged, quit listening\" debug_console=false device=\"0:3\" name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:07.202862522Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:07.192868685Z\" level=debug debug_console=false mount-destination=/run/kata-containers/sandbox/ephemeral/shared-memory mount-flags=0 mount-fstype=tmpfs mount-options= mount-source=tmpfs name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:07.202862522Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:07.192868685Z\" level=debug debug_console=false mount-destination=/run/kata-containers/sandbox/ephemeral/shared-memory mount-flags=0 mount-fstype=tmpfs mount-options= mount-source=tmpfs name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:07.703578326Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:07.693618516Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/7a539cd26f8272a7b31c08473bb82ce54bc0eea1d8c3225a818749a04f059ab1 mount-flags=0 mount-fstype=ext4 mount-options= mount-source=/dev/sde name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:07.703464800Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:07.693454791Z\" level=info msg=\"Device already hotplugged, quit listening\" debug_console=false device=\"0:4\" name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:07.703578326Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:07.693618516Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/7a539cd26f8272a7b31c08473bb82ce54bc0eea1d8c3225a818749a04f059ab1 mount-flags=0 mount-fstype=ext4 mount-options= mount-source=/dev/sde name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:07.7034648Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:07.693454791Z\" level=info msg=\"Device already hotplugged, quit listening\" debug_console=false device=\"0:4\" name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:07.742066254Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:07.732122764Z\" level=info msg=\"Received add uevent\" debug_console=false name=kata-agent pid=90 source=agent subsystem=udevlistener uevent-action=add uevent-devname=sde uevent-devpath=\"/devices/pci0000:00/0000:00:03.0/virtio1/host0/target0:0:0/0:0:0:4/block/sde\" uevent-seqnum=1034 uevent-subsystem=block"
    time="2019-08-12T02:34:07.742066254Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:07.732122764Z\" level=info msg=\"Received add uevent\" debug_console=false name=kata-agent pid=90 source=agent subsystem=udevlistener uevent-action=add uevent-devname=sde uevent-devpath=\"/devices/pci0000:00/0000:00:03.0/virtio1/host0/target0:0:0/0:0:0:4/block/sde\" uevent-seqnum=1034 uevent-subsystem=block"
    time="2019-08-12T02:34:07.764618542Z" level=error msg="StartContainer for \"7a539cd26f8272a7b31c08473bb82ce54bc0eea1d8c3225a818749a04f059ab1\" failed" error="failed to create containerd task: Could not resolve symlink for source /dev/sde: unknown"
    time="2019-08-12T02:34:08.193518414Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:08.182942177Z\" level=info msg=\"Received add uevent\" debug_console=false name=kata-agent pid=90 source=agent subsystem=udevlistener uevent-action=add uevent-devname=sde uevent-devpath=\"/devices/pci0000:00/0000:00:03.0/virtio1/host0/target0:0:0/0:0:0:5/block/sde\" uevent-seqnum=1057 uevent-subsystem=block"
    time="2019-08-12T02:34:08.193518414Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:08.182942177Z\" level=info msg=\"Received add uevent\" debug_console=false name=kata-agent pid=90 source=agent subsystem=udevlistener uevent-action=add uevent-devname=sde uevent-devpath=\"/devices/pci0000:00/0000:00:03.0/virtio1/host0/target0:0:0/0:0:0:5/block/sde\" uevent-seqnum=1057 uevent-subsystem=block"
    time="2019-08-12T02:34:08.197751892Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:08.187822021Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/50bdd83b89cef97165aff622092a6d34e5f7400c30230a73e54ec8a73d05a682 mount-flags=0 mount-fstype=ext4 mount-options= mount-source=/dev/sde name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:08.197657705Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:08.187675572Z\" level=info msg=\"Device already hotplugged, quit listening\" debug_console=false device=\"0:5\" name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:08.197751892Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:08.187822021Z\" level=debug debug_console=false mount-destination=/run/kata-containers/shared/containers/50bdd83b89cef97165aff622092a6d34e5f7400c30230a73e54ec8a73d05a682 mount-flags=0 mount-fstype=ext4 mount-options= mount-source=/dev/sde name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    time="2019-08-12T02:34:08.197657705Z" level=debug msg="reading guest console" ID=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=virtcontainers subsystem=kata_agent vmconsole="time=\"2019-08-12T02:34:08.187675572Z\" level=info msg=\"Device already hotplugged, quit listening\" debug_console=false device=\"0:5\" name=kata-agent pid=90 sandbox=1a9d25bd5ef071fb51da99b6a5174b4efa4b60368f34d567b44d198d3b1d2fc0 source=agent"
    
  • weixin_39885412 weixin_39885412 5 months ago

    I've done some more digging; please ignore my comment above. It looks like the devices do get waited on here: https://github.com/kata-containers/agent/blob/stable-1.8/device.go#L412

  • weixin_39693971 weixin_39693971 5 months ago

    Hi , after some digging I found the root cause. Although the agent does wait on https://github.com/kata-containers/agent/blob/stable-1.8/device.go#L412, there is a flaw in that wait: it first checks whether the device's sysfs entry exists under /sys, and if it does, it returns immediately. The kernel registers a new device in this order: 1) create the sysfs entry in /sys; 2) mknod in devtmpfs; 3) send the uevent to userspace. The problem is that the wait can observe the sysfs entry and return while the kernel has not yet created the device node in devtmpfs, so the subsequent mount fails because it cannot resolve the device path. Since the gap between creating the sysfs entry and the mknod is very small, the issue is hard to reproduce.
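    To make the race concrete, here is a minimal Go sketch (hypothetical code, not the agent's actual implementation; `waitForDevice` is an illustrative name). A wait that checks only the sysfs entry can return inside the window before the devtmpfs node exists; also stat-ing the /dev node closes that window:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// waitForDevice polls until both the sysfs entry and the devtmpfs node for
// a block device exist. Checking only sysPath would reproduce the bug
// described above: the sysfs entry is created before the devtmpfs node, so
// a mount issued in that window cannot resolve /dev/<name>.
func waitForDevice(devName string, timeout time.Duration) error {
	sysPath := filepath.Join("/sys/block", devName)
	devPath := filepath.Join("/dev", devName)

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		_, sysErr := os.Stat(sysPath)
		_, devErr := os.Stat(devPath)
		if sysErr == nil && devErr == nil {
			return nil // both registration steps are visible; safe to mount
		}
		time.Sleep(10 * time.Millisecond)
	}
	return fmt.Errorf("device %s not ready after %s", devName, timeout)
}

func main() {
	if err := waitForDevice("sda", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
```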

    I tried to reproduce this issue by hacking the guest kernel to delay for a while between creating the sysfs entry and the mknod in devtmpfs, using the following patch:

```diff
diff --git a/drivers/base/core.c b/drivers/base/core.c
index 92e2c32c2..b78dfa805 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -25,6 +25,7 @@
 #include <linux/…>
 #include <linux/…>
 #include <linux/…>
+#include <linux/delay.h>
 
 #include "base.h"
 #include "power/power.h"
@@ -1918,7 +1919,8 @@ int device_add(struct device *dev)
 		error = device_create_sys_dev_entry(dev);
 		if (error)
 			goto SysEntryError;
-
+
+		mdelay(90);
 		devtmpfs_create_node(dev);
 	}
```

    Then create a sandbox with more than 2 vCPUs; with the 90 ms delay widening what is normally a microsecond-sized window, it's easy to hit the following error:

```
$ sudo crictl start $c
FATA[0000] Starting the container "df2bf03b3e71b0da152a8076a24a28987607e92d1598c5fa26e4e6bf8215a33b" failed: rpc error: code = Unknown desc = failed to create containerd task: Could not resolve symlink for source /dev/sdb: unknown
```

  • weixin_39885412 weixin_39885412 5 months ago

    Awesome find! Thanks for spending the time debugging/reproducing this.

  • weixin_39885412 weixin_39885412 5 months ago

    I'm going to close this issue out, as the fix for https://github.com/kata-containers/agent/issues/628 has solved this for us: we aren't seeing any issues mounting block devices since applying it. Thanks !
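    For anyone hitting the same symptom before picking up that fix, the general shape of a uevent-driven wait looks like the sketch below (hypothetical; `ueventWaiter` and its methods are illustrative names, not the agent's actual API). The mount path blocks on a per-device channel that the udev listener closes when the "add" event arrives, so the mount can never run before the kernel has finished registering the device:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// ueventWaiter lets a mount path block until the udev "add" event for a
// device has been received.
type ueventWaiter struct {
	mu    sync.Mutex
	seen  map[string]bool
	chans map[string]chan struct{}
}

func newUeventWaiter() *ueventWaiter {
	return &ueventWaiter{
		seen:  make(map[string]bool),
		chans: make(map[string]chan struct{}),
	}
}

// notifyAdd is called from the uevent listener goroutine for each add event.
func (w *ueventWaiter) notifyAdd(dev string) {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.seen[dev] = true
	if ch, ok := w.chans[dev]; ok {
		close(ch) // wake every waiter blocked on this device
		delete(w.chans, dev)
	}
}

// waitFor blocks until the add uevent for dev has been seen, or times out.
func (w *ueventWaiter) waitFor(dev string, timeout time.Duration) error {
	w.mu.Lock()
	if w.seen[dev] { // event already arrived; no need to wait
		w.mu.Unlock()
		return nil
	}
	ch, ok := w.chans[dev]
	if !ok {
		ch = make(chan struct{})
		w.chans[dev] = ch
	}
	w.mu.Unlock()

	select {
	case <-ch:
		return nil
	case <-time.After(timeout):
		return fmt.Errorf("timed out waiting for add uevent for %s", dev)
	}
}

func main() {
	w := newUeventWaiter()
	go func() {
		time.Sleep(50 * time.Millisecond)
		w.notifyAdd("sde") // simulate the listener seeing the add event
	}()
	fmt.Println(w.waitFor("sde", time.Second))
}
```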

