weixin_39949506
2021-01-07 08:23

Error: Failed container creation: failed to prepare loop device: bad file descriptor

Required information

  • Distribution: Arch Linux
  • The output of "lxc info":

config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIFWDCCA0CgAwIBAgIQXPI8/ep7lUrY4Ph3SeY4yTANBgkqhkiG9w0BAQsFADA2
    MRwwGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMRYwFAYDVQQDDA1yb290QGxl
    b25hcmRvMB4XDTE4MDYxMDE3NTgzNloXDTI4MDYwNzE3NTgzNlowNjEcMBoGA1UE
    ChMTbGludXhjb250YWluZXJzLm9yZzEWMBQGA1UEAwwNcm9vdEBsZW9uYXJkbzCC
    AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALLcDGM/ngVTddw84kvmPW4D
    l/nNJ1uPwdUEh+xW6CU/fy5VDosuSHTm3/WYRT8rkVM/gGzJ+KtkcrNHfjsMxeQH
    hgzLS1JhtX4uLlbIQZfGOHCFRxWCvejnnq9D+0h1930/f/Z34mCqJJyh+RjxRszr
    XsqKjGULTeNIeF89MWzQ6GTxXW+2dYtNnsMn0pL0UKB8PrL7XIz45IA5+RKkdwqv
    pGhfC/rUT/xUOshqHe/IkFn0eSG6ynexmognCbnH0BY19j6Ss4eX5Y0zRuq46dNv
    jjE4fQEVkB0wjKaf/xBpHF5Lz082EvD6Gaj+ISpQ8oB9Upaf2q2gDcOhnqhZciUg
    7lqAB5Khux0nXaDd6muINHfeupOKgV2Fm3prErPVGOLQQw4NdXunppIy+QxgRv4P
    zikr6HMrksSqRztvZKmHINxQbqg3ghQWQlpcTcD4Ab0oxURc95kpuMmiUPgTtjfN
    QC1lTYLYDEIo/fRtiVqZRCx8UTH+C9URX5Wn4fGA47H+AF8UIyFSnRdO68HI7Hr7
    JMPLVuqcAh0fLkPcXEd+pWY172T+icV2mb2cgqNxQWIcQmeLhDnuN5L9XbVEF1cP
    LVvGIyK+t1k9a4R4Zn2t4/UAvgmXsKr6qIVCcOQMzL9yU/fY0F4AUTXcjBctgIjI
    gA2h73csfu2tzmJTK975AgMBAAGjYjBgMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUE
    DDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMCsGA1UdEQQkMCKCCGxlb25hcmRv
    hwTAqAEYhxAqAcsIAYlOAHohSOKHf+rVMA0GCSqGSIb3DQEBCwUAA4ICAQBEk8e9
    m9npNR+S4rPlDTEdtU4D6ZnJOBVBbWqGQ67FlB7ztpQGSABa/wm7IiaR0JS2s2d4
    nlsArceXlsknq9PjPd9x+SnnlQmIEzw0G1armK+Ho11U+WtoaEDpBoEE6Iibwuh3
    IM0yMaHWqlU3DVgLc27GxNSXQJA3TDtIAQtqPnpEarWHjT2/tlC6CVk0d8P+aKO7
    /7ihKcr91QubfSOt2cRnNocBDlpTG2EFM/A3CQ9ivRVJU+nDAzKCO4bc+sqwOeJS
    LHin2Pq90V0GKNUEZ6z5YdBh/9itn5xvSguTsclQScl+8bBgrNdE/UQhFf9ifWWd
    rKLJtK2xenN7VrkXTqoOFWnv+RywcPQk2Ur2DtpqhrxENAHB5ldwVYJW9S8anvSd
    3+EkhhjqoAp+gsv44i05blOmcdF18+Bpb4plTx+MzqsglnaWA/uPwUiYF6mORBeG
    EAb7vDVR88zQDzAceAE7YrAhGmr3AVQBZoDCExnqN/xriL7Uzt4NBz/ErPsAyMS8
    iKfEdyomsa9nlMt5ZsGNmoXT9TzliSvtNtkTvECchpgn9dt/0xd2Ja2A+ehE72ON
    9+AV5fFN3ZLRmIyfjb3PyPKcyxVY2RSOXpluPHdapnk1wH5baD3pcCATlrse8cuV
    02tpP1GjNgadCMuS21p8+dqzZMMwZ9OofKzOYg==
    -----END CERTIFICATE-----
  certificate_fingerprint: a4c24221d1d887f6faf475223a8a28834da57fe8cff1f9a8119155ddcfacca19
  driver: lxc
  driver_version: 3.0.1
  kernel: Linux
  kernel_architecture: x86_64
  kernel_version: 4.17.4-1-ARCH
  server: lxd
  server_pid: 26602
  server_version: "3.1"
  storage: btrfs
  storage_version: 4.16.1
  server_clustered: false
  server_name: leonardo

Issue description

I'm unable to create a new container; this used to work until a few days ago. It now fails with the message "Error: Failed container creation: failed to prepare loop device: bad file descriptor", and on the next attempt with "Error: Failed container creation: UNIQUE constraint failed: storage_volumes.storage_pool_id, storage_volumes.node_id, storage_volumes.name, storage_volumes.type". No clue where to look. Thanks in advance for your insights! :)

E.g.:


$ lxc launch ubuntu:18.04 tilery -c security.privileged=true
Creating tilery
Error: Failed container creation: failed to prepare loop device: bad file descriptor
$ lxc launch ubuntu:18.04 tilery -c security.privileged=true
Creating tilery
Error: Failed container creation: UNIQUE constraint failed: storage_volumes.storage_pool_id, storage_volumes.node_id, storage_volumes.name, storage_volumes.type
$ lxc launch ubuntu:18.04 tilery -c security.privileged=true
Creating tilery
Error: Failed container creation: failed to prepare loop device: bad file descriptor
$ lxc launch ubuntu:18.04 tilery -c security.privileged=true
Creating tilery
Error: Failed container creation: UNIQUE constraint failed: storage_volumes.storage_pool_id, storage_volumes.node_id, storage_volumes.name, storage_volumes.type

Steps to reproduce

Trying to create a container:

lxc launch ubuntu:18.04 tilery -c security.privileged=true

Note: I've never configured my laptop to be able to run unprivileged containers.

Information to attach

  • [ ] Any relevant kernel output (dmesg)

Nothing related as far as I can see.

  • [ ] Container log (lxc info NAME --show-log)

Container not created, so output not available.

  • [ ] Container configuration (lxc config show NAME --expanded)

Container not created, so output not available.

  • [ ] Main daemon log (at /var/log/lxd/lxd.log or /var/snap/lxd/common/lxd/logs/lxd.log)

Nothing in the logs:


$ sudo ls -lisah /var/log/lxd/
total 8,0K
51945759 4,0K drwx------  2 root root 4,0K 11 juil. 12:54 .
51904519 4,0K drwxr-xr-x 10 root root 4,0K  5 juil. 22:39 ..
  • [x] Output of the client with --debug

DBUG[07-11|12:58:26] Connecting to a local LXD over a Unix socket 
DBUG[07-11|12:58:26] Sending request to LXD                   etag= method=GET url=http://unix.socket/1.0
DBUG[07-11|12:58:26] Got response struct from LXD 
DBUG[07-11|12:58:26] 
    {
        "config": {},
        "api_extensions": [
            "storage_zfs_remove_snapshots",
            "container_host_shutdown_timeout",
            "container_stop_priority",
            "container_syscall_filtering",
            "auth_pki",
            "container_last_used_at",
            "etag",
            "patch",
            "usb_devices",
            "https_allowed_credentials",
            "image_compression_algorithm",
            "directory_manipulation",
            "container_cpu_time",
            "storage_zfs_use_refquota",
            "storage_lvm_mount_options",
            "network",
            "profile_usedby",
            "container_push",
            "container_exec_recording",
            "certificate_update",
            "container_exec_signal_handling",
            "gpu_devices",
            "container_image_properties",
            "migration_progress",
            "id_map",
            "network_firewall_filtering",
            "network_routes",
            "storage",
            "file_delete",
            "file_append",
            "network_dhcp_expiry",
            "storage_lvm_vg_rename",
            "storage_lvm_thinpool_rename",
            "network_vlan",
            "image_create_aliases",
            "container_stateless_copy",
            "container_only_migration",
            "storage_zfs_clone_copy",
            "unix_device_rename",
            "storage_lvm_use_thinpool",
            "storage_rsync_bwlimit",
            "network_vxlan_interface",
            "storage_btrfs_mount_options",
            "entity_description",
            "image_force_refresh",
            "storage_lvm_lv_resizing",
            "id_map_base",
            "file_symlinks",
            "container_push_target",
            "network_vlan_physical",
            "storage_images_delete",
            "container_edit_metadata",
            "container_snapshot_stateful_migration",
            "storage_driver_ceph",
            "storage_ceph_user_name",
            "resource_limits",
            "storage_volatile_initial_source",
            "storage_ceph_force_osd_reuse",
            "storage_block_filesystem_btrfs",
            "resources",
            "kernel_limits",
            "storage_api_volume_rename",
            "macaroon_authentication",
            "network_sriov",
            "console",
            "restrict_devlxd",
            "migration_pre_copy",
            "infiniband",
            "maas_network",
            "devlxd_events",
            "proxy",
            "network_dhcp_gateway",
            "file_get_symlink",
            "network_leases",
            "unix_device_hotplug",
            "storage_api_local_volume_handling",
            "operation_description",
            "clustering",
            "event_lifecycle",
            "storage_api_remote_volume_handling",
            "nvidia_runtime",
            "container_mount_propagation",
            "container_backup",
            "devlxd_images",
            "container_local_cross_pool_handling"
        ],
        "api_status": "stable",
        "api_version": "1.0",
        "auth": "trusted",
        "public": false,
        "auth_methods": [
            "tls"
        ],
        "environment": {
            "addresses": [],
            "architectures": [
                "x86_64",
                "i686"
            ],
            "certificate": "-----BEGIN CERTIFICATE-----\nMIIFWDCCA0CgAwIBAgIQXPI8/ep7lUrY4Ph3SeY4yTANBgkqhkiG9w0BAQsFADA2\nMRwwGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMRYwFAYDVQQDDA1yb290QGxl\nb25hcmRvMB4XDTE4MDYxMDE3NTgzNloXDTI4MDYwNzE3NTgzNlowNjEcMBoGA1UE\nChMTbGludXhjb250YWluZXJzLm9yZzEWMBQGA1UEAwwNcm9vdEBsZW9uYXJkbzCC\nAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALLcDGM/ngVTddw84kvmPW4D\nl/nNJ1uPwdUEh+xW6CU/fy5VDosuSHTm3/WYRT8rkVM/gGzJ+KtkcrNHfjsMxeQH\nhgzLS1JhtX4uLlbIQZfGOHCFRxWCvejnnq9D+0h1930/f/Z34mCqJJyh+RjxRszr\nXsqKjGULTeNIeF89MWzQ6GTxXW+2dYtNnsMn0pL0UKB8PrL7XIz45IA5+RKkdwqv\npGhfC/rUT/xUOshqHe/IkFn0eSG6ynexmognCbnH0BY19j6Ss4eX5Y0zRuq46dNv\njjE4fQEVkB0wjKaf/xBpHF5Lz082EvD6Gaj+ISpQ8oB9Upaf2q2gDcOhnqhZciUg\n7lqAB5Khux0nXaDd6muINHfeupOKgV2Fm3prErPVGOLQQw4NdXunppIy+QxgRv4P\nzikr6HMrksSqRztvZKmHINxQbqg3ghQWQlpcTcD4Ab0oxURc95kpuMmiUPgTtjfN\nQC1lTYLYDEIo/fRtiVqZRCx8UTH+C9URX5Wn4fGA47H+AF8UIyFSnRdO68HI7Hr7\nJMPLVuqcAh0fLkPcXEd+pWY172T+icV2mb2cgqNxQWIcQmeLhDnuN5L9XbVEF1cP\nLVvGIyK+t1k9a4R4Zn2t4/UAvgmXsKr6qIVCcOQMzL9yU/fY0F4AUTXcjBctgIjI\ngA2h73csfu2tzmJTK975AgMBAAGjYjBgMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUE\nDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMCsGA1UdEQQkMCKCCGxlb25hcmRv\nhwTAqAEYhxAqAcsIAYlOAHohSOKHf+rVMA0GCSqGSIb3DQEBCwUAA4ICAQBEk8e9\nm9npNR+S4rPlDTEdtU4D6ZnJOBVBbWqGQ67FlB7ztpQGSABa/wm7IiaR0JS2s2d4\nnlsArceXlsknq9PjPd9x+SnnlQmIEzw0G1armK+Ho11U+WtoaEDpBoEE6Iibwuh3\nIM0yMaHWqlU3DVgLc27GxNSXQJA3TDtIAQtqPnpEarWHjT2/tlC6CVk0d8P+aKO7\n/7ihKcr91QubfSOt2cRnNocBDlpTG2EFM/A3CQ9ivRVJU+nDAzKCO4bc+sqwOeJS\nLHin2Pq90V0GKNUEZ6z5YdBh/9itn5xvSguTsclQScl+8bBgrNdE/UQhFf9ifWWd\nrKLJtK2xenN7VrkXTqoOFWnv+RywcPQk2Ur2DtpqhrxENAHB5ldwVYJW9S8anvSd\n3+EkhhjqoAp+gsv44i05blOmcdF18+Bpb4plTx+MzqsglnaWA/uPwUiYF6mORBeG\nEAb7vDVR88zQDzAceAE7YrAhGmr3AVQBZoDCExnqN/xriL7Uzt4NBz/ErPsAyMS8\niKfEdyomsa9nlMt5ZsGNmoXT9TzliSvtNtkTvECchpgn9dt/0xd2Ja2A+ehE72ON\n9+AV5fFN3ZLRmIyfjb3PyPKcyxVY2RSOXpluPHdapnk1wH5baD3pcCATlrse8cuV\n02tpP1GjNgadCMuS21p8+dqzZMMwZ9OofKzOYg==\n-----END CERTIFICATE-----\n",
            "certificate_fingerprint": "a4c24221d1d887f6faf475223a8a28834da57fe8cff1f9a8119155ddcfacca19",
            "driver": "lxc",
            "driver_version": "3.0.1",
            "kernel": "Linux",
            "kernel_architecture": "x86_64",
            "kernel_version": "4.17.4-1-ARCH",
            "server": "lxd",
            "server_pid": 26602,
            "server_version": "3.1",
            "storage": "btrfs",
            "storage_version": "4.16.1",
            "server_clustered": false,
            "server_name": "leonardo"
        }
    } 
Creating tilery
DBUG[07-11|12:58:26] Connecting to a remote simplestreams server 
DBUG[07-11|12:58:26] Connected to the websocket 
DBUG[07-11|12:58:26] Sending request to LXD                   etag= method=POST url=http://unix.socket/1.0/containers
DBUG[07-11|12:58:26] 
    {
        "architecture": "",
        "config": {
            "security.privileged": "true"
        },
        "devices": {},
        "ephemeral": false,
        "profiles": null,
        "stateful": false,
        "description": "",
        "name": "tilery",
        "source": {
            "type": "image",
            "certificate": "",
            "alias": "18.04",
            "server": "https://cloud-images.ubuntu.com/releases",
            "protocol": "simplestreams",
            "mode": "pull"
        },
        "instance_type": ""
    } 
DBUG[07-11|12:58:26] Got operation from LXD 
DBUG[07-11|12:58:26] 
    {
        "id": "52d5c819-afc2-4c19-96df-711a71330415",
        "class": "task",
        "description": "Creating container",
        "created_at": "2018-07-11T12:58:26.561230738+02:00",
        "updated_at": "2018-07-11T12:58:26.561230738+02:00",
        "status": "Running",
        "status_code": 103,
        "resources": {
            "containers": [
                "/1.0/containers/tilery"
            ]
        },
        "metadata": null,
        "may_cancel": false,
        "err": ""
    } 
DBUG[07-11|12:58:26] Sending request to LXD                   etag= method=GET url=http://unix.socket/1.0/operations/52d5c819-afc2-4c19-96df-711a71330415
DBUG[07-11|12:58:26] Got response struct from LXD 
DBUG[07-11|12:58:26] 
    {
        "id": "52d5c819-afc2-4c19-96df-711a71330415",
        "class": "task",
        "description": "Creating container",
        "created_at": "2018-07-11T12:58:26.561230738+02:00",
        "updated_at": "2018-07-11T12:58:26.561230738+02:00",
        "status": "Running",
        "status_code": 103,
        "resources": {
            "containers": [
                "/1.0/containers/tilery"
            ]
        },
        "metadata": null,
        "may_cancel": false,
        "err": ""
    } 
Error: Failed container creation: failed to prepare loop device: bad file descriptor
  • [x] Output of the daemon with --debug (alternatively output of lxc monitor while reproducing the issue)

metadata:
  context: {}
  level: dbug
  message: 'New event listener: 57cd6f6f-03e4-4f1a-8b7d-7ce8437428f1'
timestamp: "2018-07-11T12:49:24.351003198+02:00"
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0
  level: dbug
  message: handling
timestamp: "2018-07-11T12:49:28.226690005+02:00"
type: logging


metadata:
  context: {}
  level: dbug
  message: 'New event listener: 9464e218-ef6d-4520-ba0d-e2f26eb7662d'
timestamp: "2018-07-11T12:49:28.229373004+02:00"
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/events
  level: dbug
  message: handling
timestamp: "2018-07-11T12:49:28.229325567+02:00"
type: logging


metadata:
  context: {}
  level: dbug
  message: Responding to container create
timestamp: "2018-07-11T12:49:28.230190272+02:00"
type: logging


metadata:
  context:
    ip: '@'
    method: POST
    url: /1.0/containers
  level: dbug
  message: handling
timestamp: "2018-07-11T12:49:28.230164875+02:00"
type: logging


metadata:
  class: task
  created_at: "2018-07-11T12:49:28.230966145+02:00"
  description: Creating container
  err: ""
  id: af8c1233-c9b4-49fe-931a-11ffb64827e6
  may_cancel: false
  metadata: null
  resources:
    containers:
    - /1.0/containers/tilery
  status: Running
  status_code: 103
  updated_at: "2018-07-11T12:49:28.230966145+02:00"
timestamp: "2018-07-11T12:49:28.241400363+02:00"
type: operation


metadata:
  context: {}
  level: dbug
  message: 'New task operation: af8c1233-c9b4-49fe-931a-11ffb64827e6'
timestamp: "2018-07-11T12:49:28.241290202+02:00"
type: logging


metadata:
  class: task
  created_at: "2018-07-11T12:49:28.230966145+02:00"
  description: Creating container
  err: ""
  id: af8c1233-c9b4-49fe-931a-11ffb64827e6
  may_cancel: false
  metadata: null
  resources:
    containers:
    - /1.0/containers/tilery
  status: Pending
  status_code: 105
  updated_at: "2018-07-11T12:49:28.230966145+02:00"
timestamp: "2018-07-11T12:49:28.241347625+02:00"
type: operation


metadata:
  context: {}
  level: dbug
  message: 'Started task operation: af8c1233-c9b4-49fe-931a-11ffb64827e6'
timestamp: "2018-07-11T12:49:28.241376408+02:00"
type: logging


metadata:
  context:
    expiry: 2018-07-11 13:38:46.401095687 +0200 CEST m=+3600.880225240
    server: https://cloud-images.ubuntu.com/releases
  level: dbug
  message: Using SimpleStreams cache entry
timestamp: "2018-07-11T12:49:28.241798397+02:00"
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/operations/af8c1233-c9b4-49fe-931a-11ffb64827e6
  level: dbug
  message: handling
timestamp: "2018-07-11T12:49:28.243008186+02:00"
type: logging


metadata:
  context:
    image: b190d5ec0c537468465e7bd122fe127d9f3509e3a09fb699ac33b0c5d4fe050f
  level: dbug
  message: Image already exists in the db
timestamp: "2018-07-11T12:49:28.244052441+02:00"
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Database error: &errors.errorString{s:"sql: no rows in result set"}'
timestamp: "2018-07-11T12:49:28.247520673+02:00"
type: logging


metadata:
  context:
    ephemeral: "false"
    name: tilery
  level: info
  message: Creating container
timestamp: "2018-07-11T12:49:28.26065312+02:00"
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Database error: sqlite3.Error{Code:19, ExtendedCode:2067, err:"UNIQUE
    constraint failed: storage_volumes.storage_pool_id, storage_volumes.node_id, storage_volumes.name,
    storage_volumes.type"}'
timestamp: "2018-07-11T12:49:28.263903547+02:00"
type: logging


metadata:
  context:
    created: 2018-07-11 12:49:28 +0200 CEST
    ephemeral: "false"
    name: tilery
    used: 1970-01-01 01:00:00 +0100 CET
  level: info
  message: Deleting container
timestamp: "2018-07-11T12:49:28.263938373+02:00"
type: logging


metadata:
  context:
    container: tilery
  level: dbug
  message: containerDeleteSnapshots
timestamp: "2018-07-11T12:49:28.270818133+02:00"
type: logging


metadata:
  context: {}
  level: dbug
  message: Initializing a BTRFS driver.
timestamp: "2018-07-11T12:49:28.270719304+02:00"
type: logging


metadata:
  action: container-deleted
  source: /1.0/containers/tilery
timestamp: "2018-07-11T12:49:28.309510532+02:00"
type: lifecycle


metadata:
  context:
    created: 2018-07-11 12:49:28 +0200 CEST
    ephemeral: "false"
    name: tilery
    used: 1970-01-01 01:00:00 +0100 CET
  level: info
  message: Deleted container
timestamp: "2018-07-11T12:49:28.309430528+02:00"
type: logging


metadata:
  class: task
  created_at: "2018-07-11T12:49:28.230966145+02:00"
  description: Creating container
  err: 'UNIQUE constraint failed: storage_volumes.storage_pool_id, storage_volumes.node_id,
    storage_volumes.name, storage_volumes.type'
  id: af8c1233-c9b4-49fe-931a-11ffb64827e6
  may_cancel: false
  metadata: null
  resources:
    containers:
    - /1.0/containers/tilery
  status: Failure
  status_code: 400
  updated_at: "2018-07-11T12:49:28.230966145+02:00"
timestamp: "2018-07-11T12:49:28.310491045+02:00"
type: operation


metadata:
  context: {}
  level: dbug
  message: 'Database error: &errors.errorString{s:"sql: no rows in result set"}'
timestamp: "2018-07-11T12:49:28.310354011+02:00"
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Failure for task operation: af8c1233-c9b4-49fe-931a-11ffb64827e6: UNIQUE
    constraint failed: storage_volumes.storage_pool_id, storage_volumes.node_id, storage_volumes.name,
    storage_volumes.type'
timestamp: "2018-07-11T12:49:28.310464267+02:00"
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Disconnected event listener: 9464e218-ef6d-4520-ba0d-e2f26eb7662d'
timestamp: "2018-07-11T12:49:39.407678044+02:00"
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0
  level: dbug
  message: handling
timestamp: "2018-07-11T12:49:39.407619514+02:00"
type: logging


metadata:
  context: {}
  level: dbug
  message: Responding to container create
timestamp: "2018-07-11T12:49:39.411147236+02:00"
type: logging


metadata:
  context: {}
  level: dbug
  message: 'New event listener: d8de96c2-6ea3-4d8c-badb-d552729a41cc'
timestamp: "2018-07-11T12:49:39.410395826+02:00"
type: logging


metadata:
  context:
    ip: '@'
    method: POST
    url: /1.0/containers
  level: dbug
  message: handling
timestamp: "2018-07-11T12:49:39.411130923+02:00"
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/events
  level: dbug
  message: handling
timestamp: "2018-07-11T12:49:39.410329352+02:00"
type: logging


metadata:
  class: task
  created_at: "2018-07-11T12:49:39.411866081+02:00"
  description: Creating container
  err: ""
  id: 5aeb3fb1-d45c-4c18-b660-d34ca6bf0906
  may_cancel: false
  metadata: null
  resources:
    containers:
    - /1.0/containers/tilery
  status: Running
  status_code: 103
  updated_at: "2018-07-11T12:49:39.411866081+02:00"
timestamp: "2018-07-11T12:49:39.424474975+02:00"
type: operation


metadata:
  context: {}
  level: dbug
  message: 'New task operation: 5aeb3fb1-d45c-4c18-b660-d34ca6bf0906'
timestamp: "2018-07-11T12:49:39.424196346+02:00"
type: logging


metadata:
  class: task
  created_at: "2018-07-11T12:49:39.411866081+02:00"
  description: Creating container
  err: ""
  id: 5aeb3fb1-d45c-4c18-b660-d34ca6bf0906
  may_cancel: false
  metadata: null
  resources:
    containers:
    - /1.0/containers/tilery
  status: Pending
  status_code: 105
  updated_at: "2018-07-11T12:49:39.411866081+02:00"
timestamp: "2018-07-11T12:49:39.42429048+02:00"
type: operation


metadata:
  context: {}
  level: dbug
  message: 'Started task operation: 5aeb3fb1-d45c-4c18-b660-d34ca6bf0906'
timestamp: "2018-07-11T12:49:39.424420012+02:00"
type: logging


metadata:
  context:
    expiry: 2018-07-11 13:38:46.401095687 +0200 CEST m=+3600.880225240
    server: https://cloud-images.ubuntu.com/releases
  level: dbug
  message: Using SimpleStreams cache entry
timestamp: "2018-07-11T12:49:39.426925474+02:00"
type: logging


metadata:
  context:
    ip: '@'
    method: GET
    url: /1.0/operations/5aeb3fb1-d45c-4c18-b660-d34ca6bf0906
  level: dbug
  message: handling
timestamp: "2018-07-11T12:49:39.428890275+02:00"
type: logging


metadata:
  context:
    image: b190d5ec0c537468465e7bd122fe127d9f3509e3a09fb699ac33b0c5d4fe050f
  level: dbug
  message: Image already exists in the db
timestamp: "2018-07-11T12:49:39.432416568+02:00"
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Database error: &errors.errorString{s:"sql: no rows in result set"}'
timestamp: "2018-07-11T12:49:39.437983454+02:00"
type: logging


metadata:
  context:
    ephemeral: "false"
    name: tilery
  level: info
  message: Creating container
timestamp: "2018-07-11T12:49:39.452158007+02:00"
type: logging


metadata:
  context: {}
  level: dbug
  message: Initializing a BTRFS driver.
timestamp: "2018-07-11T12:49:39.481163843+02:00"
type: logging


metadata:
  action: container-updated
  source: /1.0/containers/tilery
timestamp: "2018-07-11T12:49:39.515031488+02:00"
type: lifecycle


metadata:
  action: container-updated
  source: /1.0/containers/tilery
timestamp: "2018-07-11T12:49:39.535631205+02:00"
type: lifecycle


metadata:
  action: container-updated
  source: /1.0/containers/tilery
timestamp: "2018-07-11T12:49:39.560836191+02:00"
type: lifecycle


metadata:
  action: container-created
  source: /1.0/containers/tilery
timestamp: "2018-07-11T12:49:39.573347303+02:00"
type: lifecycle


metadata:
  context:
    ephemeral: "false"
    name: tilery
  level: info
  message: Created container
timestamp: "2018-07-11T12:49:39.573191991+02:00"
type: logging


metadata:
  context: {}
  level: dbug
  message: Creating BTRFS storage volume for container "tilery" on storage pool "default".
timestamp: "2018-07-11T12:49:39.601236797+02:00"
type: logging


metadata:
  context: {}
  level: dbug
  message: Mounting BTRFS storage pool "default".
timestamp: "2018-07-11T12:49:39.601477232+02:00"
type: logging


metadata:
  context: {}
  level: dbug
  message: 'Failure for task operation: 5aeb3fb1-d45c-4c18-b660-d34ca6bf0906: failed
    to prepare loop device: bad file descriptor'
timestamp: "2018-07-11T12:49:39.625432475+02:00"
type: logging


metadata:
  class: task
  created_at: "2018-07-11T12:49:39.411866081+02:00"
  description: Creating container
  err: 'failed to prepare loop device: bad file descriptor'
  id: 5aeb3fb1-d45c-4c18-b660-d34ca6bf0906
  may_cancel: false
  metadata: null
  resources:
    containers:
    - /1.0/containers/tilery
  status: Failure
  status_code: 400
  updated_at: "2018-07-11T12:49:39.411866081+02:00"
timestamp: "2018-07-11T12:49:39.625494104+02:00"
type: operation
  • [x] lxc-checkconfig

--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled

--- Control groups ---
Cgroups: enabled

Cgroup v1 mount points: 
/sys/fs/cgroup/systemd
/sys/fs/cgroup/memory
/sys/fs/cgroup/rdma
/sys/fs/cgroup/hugetlb
/sys/fs/cgroup/net_cls,net_prio
/sys/fs/cgroup/freezer
/sys/fs/cgroup/cpuset
/sys/fs/cgroup/devices
/sys/fs/cgroup/cpu,cpuacct
/sys/fs/cgroup/pids
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/blkio

Cgroup v2 mount points: 
/sys/fs/cgroup/unified

Cgroup v1 clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled, not loaded
Macvlan: enabled, not loaded
Vlan: enabled, not loaded
Bridges: enabled, not loaded
Advanced netfilter: enabled, not loaded
CONFIG_NF_NAT_IPV4: enabled, not loaded
CONFIG_NF_NAT_IPV6: enabled, not loaded
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, not loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, not loaded
FUSE (for use with lxcfs): enabled, loaded

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: 

Thanks for your insights on this :)

This question comes from the open source project: lxc/lxd


6 replies

  • weixin_39688875 4 months ago

    Can you show lxc storage list and lxc storage show default? The content of ls -lh /var/lib/lxd/disks, losetup -a and cat /proc/self/mountinfo would be useful too.

    So far it sounds like your kernel is refusing to mount the btrfs loop file that LXD created, but maybe we can figure out why that is.
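
    For convenience, those diagnostics can be gathered in one pass; a sketch, assuming a non-snap LXD install with its data under /var/lib/lxd (as on Arch):

    # Run as root; collects the storage information requested above.
    lxc storage list
    lxc storage show default
    ls -lh /var/lib/lxd/disks     # loop-backed pool image(s)
    losetup -a                    # loop devices currently attached
    cat /proc/self/mountinfo      # full mount table of the current process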

  • weixin_39949506 4 months ago

    Thanks! :)

    Here you go:

    
    leonardo:~ lxc storage list                                                                                                                                                               11ms
    +---------+-------------+--------+--------------------------------+---------+
    |  NAME   | DESCRIPTION | DRIVER |             SOURCE             | USED BY |
    +---------+-------------+--------+--------------------------------+---------+
    | default |             | btrfs  | /var/lib/lxd/disks/default.img | 4       |
    +---------+-------------+--------+--------------------------------+---------+
    
    
    leonardo:~ lxc storage show default                                                                                                                                                       17ms
    config:
      size: 10GB
      source: /var/lib/lxd/disks/default.img
    description: ""
    name: default
    driver: btrfs
    used_by:
    - /1.0/containers/tilery
    - /1.0/containers/usine
    - /1.0/images/b190d5ec0c537468465e7bd122fe127d9f3509e3a09fb699ac33b0c5d4fe050f
    - /1.0/profiles/default
    status: Created
    locations:
    - none
    
    
    leonardo:~ sudo ls -lh /var/lib/lxd/disks                                                                                                                                             3s 434ms
    total 5,3G
    -rw------- 1 root root 10G 29 juin  18:18 default.img
    
    
    leonardo:~ cat /proc/self/mountinfo                                                                                                                                                        1ms
    20 25 0:4 / /proc rw,nosuid,nodev,noexec,relatime shared:5 - proc proc rw
    21 25 0:20 / /sys rw,nosuid,nodev,noexec,relatime shared:6 - sysfs sys rw
    22 25 0:6 / /dev rw,nosuid,relatime shared:2 - devtmpfs dev rw,size=8137120k,nr_inodes=2034280,mode=755
    23 25 0:21 / /run rw,nosuid,nodev,relatime shared:14 - tmpfs run rw,mode=755
    24 21 0:22 / /sys/firmware/efi/efivars rw,nosuid,nodev,noexec,relatime shared:7 - efivarfs efivarfs rw
    25 0 259:3 / / rw,relatime shared:1 - ext4 /dev/nvme0n1p3 rw
    26 21 0:7 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:8 - securityfs securityfs rw
    27 22 0:23 / /dev/shm rw,nosuid,nodev shared:3 - tmpfs tmpfs rw
    28 22 0:24 / /dev/pts rw,nosuid,noexec,relatime shared:4 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
    29 21 0:25 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:9 - tmpfs tmpfs ro,mode=755
    30 29 0:26 / /sys/fs/cgroup/unified rw,nosuid,nodev,noexec,relatime shared:10 - cgroup2 cgroup2 rw,nsdelegate
    31 29 0:27 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:11 - cgroup cgroup rw,xattr,name=systemd
    32 21 0:28 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:12 - pstore pstore rw
    33 21 0:29 / /sys/fs/bpf rw,nosuid,nodev,noexec,relatime shared:13 - bpf bpf rw,mode=700
    34 29 0:30 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:15 - cgroup cgroup rw,memory
    35 29 0:31 / /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:16 - cgroup cgroup rw,rdma
    36 29 0:32 / /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:17 - cgroup cgroup rw,hugetlb
    37 29 0:33 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:18 - cgroup cgroup rw,net_cls,net_prio
    38 29 0:34 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:19 - cgroup cgroup rw,freezer
    39 29 0:35 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:20 - cgroup cgroup rw,cpuset
    40 29 0:36 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:21 - cgroup cgroup rw,devices
    41 29 0:37 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:22 - cgroup cgroup rw,cpu,cpuacct
    42 29 0:38 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:23 - cgroup cgroup rw,pids
    43 29 0:39 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:24 - cgroup cgroup rw,perf_event
    44 29 0:40 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:25 - cgroup cgroup rw,blkio
    45 20 0:41 / /proc/sys/fs/binfmt_misc rw,relatime shared:26 - autofs systemd-1 rw,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=12078
    46 21 0:8 / /sys/kernel/debug rw,relatime shared:27 - debugfs debugfs rw
    48 22 0:18 / /dev/mqueue rw,relatime shared:28 - mqueue mqueue rw
    49 45 0:43 / /proc/sys/fs/binfmt_misc rw,relatime shared:29 - binfmt_misc binfmt_misc rw
    47 22 0:42 / /dev/hugepages rw,relatime shared:30 - hugetlbfs hugetlbfs rw,pagesize=2M
    51 21 0:19 / /sys/kernel/config rw,relatime shared:31 - configfs configfs rw
    115 25 0:44 / /tmp rw,nosuid,nodev shared:63 - tmpfs tmpfs rw
    118 25 259:1 / /boot rw,relatime shared:65 - vfat /dev/nvme0n1p1 rw,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro
    374 23 0:50 / /run/user/120 rw,nosuid,nodev,relatime shared:216 - tmpfs tmpfs rw,size=1628736k,mode=700,uid=120,gid=120
    545 23 0:52 / /run/user/1000 rw,nosuid,nodev,relatime shared:313 - tmpfs tmpfs rw,size=1628736k,mode=700,uid=1000,gid=1000
    559 21 0:53 / /sys/fs/fuse/connections rw,relatime shared:321 - fusectl fusectl rw
    572 545 0:54 / /run/user/1000/gvfs rw,nosuid,nodev,relatime shared:328 - fuse.gvfsd-fuse gvfsd-fuse rw,user_id=1000,group_id=1000
    680 25 0:55 / /keybase ro,nosuid,nodev,relatime shared:392 - fuse keybase-redirector ro,user_id=0,group_id=1000,allow_other
    386 25 0:51 / /var/lib/lxd/shmounts rw,relatime shared:210 - tmpfs tmpfs rw,size=100k,mode=711
    397 25 0:56 / /var/lib/lxd/devlxd rw,relatime shared:217 - tmpfs tmpfs rw,size=100k,mode=755
    285 545 0:49 / /run/user/1000/keybase/kbfs rw,nosuid,nodev,relatime shared:161 - fuse /dev/fuse rw,user_id=1000,group_id=1000
    

    losetup -a does not give any output.

  • weixin_39688875 4 months ago

    Can you try manually running (as root) mount -o loop /var/lib/lxd/disks/default.img /var/lib/lxd/storage-pools/default?
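
    A sketch of that manual test, with a follow-up look at the kernel log (the dmesg step is an extra suggestion to capture why the kernel refuses the mount):

    # Run as root: try to loop-mount the pool image by hand.
    mount -o loop /var/lib/lxd/disks/default.img /var/lib/lxd/storage-pools/default
    # If it fails, the kernel log usually explains why (loop or btrfs errors):
    dmesg | tail -n 20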

  • weixin_39949506 4 months ago
    
    leonardo:~ sudo mount -o loop /var/lib/lxd/disks/default.img /var/lib/lxd/storage-pools/default                                                                                                                20ms
    mount: /var/lib/lxd/storage-pools/default: mount failed: Operation not permitted.
    

    Thanks again :)
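
    (A generic way to narrow down an "Operation not permitted" on a loop mount, sketched here as a suggestion rather than something taken from the replies above: attach the loop device and mount it as two separate steps, so it is clear which step the kernel rejects.)

    # Run as root.
    losetup -f --show /var/lib/lxd/disks/default.img    # prints e.g. /dev/loop0 if attaching succeeds
    # then: mount /dev/loopN /var/lib/lxd/storage-pools/default   (use the device printed above)
    dmesg | tail -n 20                                   # kernel's explanation if either step fails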

  • weixin_39688875 4 months ago

    Ok, so it looks like your system is in a pretty bad state; if mount itself can't do it, there's no chance LXD can.

    Did you try rebooting your system already?

  • weixin_39949506 4 months ago

    Did you try rebooting your system already?

    Sigh… I thought I did, but now it works!

    Thanks, and sorry for the noise :(

