weixin_39632467
weixin_39632467
2020-11-22 20:51

Multus IPAM dhcp: containernetworking DHCP plugin unable to set up interface.

What happened: Multus is unable to correctly invoke the containernetworking dhcp plugin; log output below. With a standard NetworkAttachmentDefinition configured for host-local IPAM, pods are created correctly.


k8s-admin-worker-1:/opt/cni/bin$ sudo ./dhcp daemon
2019/03/26 15:24:28 9b53df47ab29fefe626183afed80d9bdf7acc3f4c439a928a7d4a1c759d8a2ed/macvlan-conf-dhcp: acquiring lease
2019/03/26 15:24:28 Link "net1" down. Attempting to set up
2019/03/26 15:24:28 network is down
2019/03/26 15:24:36 resource temporarily unavailable
2019/03/26 15:24:49 resource temporarily unavailable
2019/03/26 15:25:08 

What you expected to happen: the pod acquires a correct IP address from DHCP and the pause container starts properly.

How to reproduce it (as minimally and precisely as possible): pod manifest:


apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf-dhcp
spec:
  containers:
  - name: samplepod
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: dougbtv/centos-network

CRD manifest:


apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf-dhcp
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "ens192",
      "mode": "bridge",
      "ipam": {
          "type": "dhcp"
      }
    }'
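
One thing worth noting about the configuration above: the dhcp IPAM type does not talk to the DHCP server directly from the CNI invocation; it proxies lease requests through a long-running daemon that must be started on every node. A minimal sketch (the socket path /run/cni/dhcp.sock is the plugin's documented default):

```shell
# The dhcp IPAM plugin forwards lease requests to this daemon over a
# unix socket; it must be running on each node before pods are created.
sudo rm -f /run/cni/dhcp.sock   # clear a stale socket from a previous run
sudo /opt/cni/bin/dhcp daemon
```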

Anything else we need to know?: The cluster consists of two nodes; the IP configuration of the master interface is the same on both nodes. Worker IP configuration:


3: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:99:1d:03 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.26/24 brd 172.16.1.255 scope global dynamic ens192
       valid_lft 499sec preferred_lft 499sec
    inet6 fe80::250:56ff:fe99:1d03/64 scope link
       valid_lft forever preferred_lft forever

Master IP configuration:


17: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:99:83:da brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.27/24 brd 172.16.1.255 scope global dynamic ens192
       valid_lft 445sec preferred_lft 445sec
    inet6 fe80::250:56ff:fe99:83da/64 scope link
       valid_lft forever preferred_lft forever

Environment:

  • Multus version: v3.1

REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
alpine                   latest              5cb3aa00f899        2 weeks ago         5.53MB
k8s.gcr.io/kube-proxy    v1.13.4             fadcc5d2b066        3 weeks ago         80.3MB
weaveworks/weaveexec     2.5.1               4cccd7ef6421        2 months ago        166MB
weaveworks/weave         2.5.1               a57b99d67ee7        2 months ago        111MB
weaveworks/weavedb       latest              4ac51c93545a        4 months ago        698B
nfvpe/multus             v3.1                d9e7bffad290        7 months ago        477MB
k8s.gcr.io/pause         3.1                 da86e6ba6ca1        15 months ago       742kB
dougbtv/centos-network   latest              54a5a35df449        23 months ago       285MB
  • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
  • Primary CNI for Kubernetes cluster: Multus

k8s-admin-master:/etc/cni/net.d$ ls
70-multus.conf  multus.d
  • OS (e.g. from /etc/os-release):

NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
  • NetworkAttachment info (use kubectl get net-attach-def -o yaml)

apiVersion: v1
items:
- apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"k8s.cni.cncf.io/v1","kind":"NetworkAttachmentDefinition","metadata":{"annotations":{},"name":"macvlan-conf","namespace":"default"},"spec":{"config":"{ \"cniVersion\": \"0.3.0\", \"type\": \"macvlan\", \"master\": \"ens160\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"host-local\", \"ranges\": [ [ { \"subnet\": \"172.64.8.0/24\", \"gateway\": \"172.64.8.1\" } ] ] } }"}}
    creationTimestamp: "2019-03-22T11:42:12Z"
    generation: 2
    name: macvlan-conf
    namespace: default
    resourceVersion: "12366"
    selfLink: /apis/k8s.cni.cncf.io/v1/namespaces/default/network-attachment-definitions/macvlan-conf
    uid: 8837c010-4c97-11e9-828c-00505699f6bf
  spec:
    config: '{ "cniVersion": "0.3.0", "type": "macvlan", "master": "ens160", "mode":
      "bridge", "ipam": { "type": "host-local", "ranges": [ [ { "subnet": "172.64.8.0/24",
      "gateway": "172.64.8.1" } ] ] } }'
- apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"k8s.cni.cncf.io/v1","kind":"NetworkAttachmentDefinition","metadata":{"annotations":{},"name":"macvlan-conf-dhcp","namespace":"default"},"spec":{"config":"{ \"cniVersion\": \"0.3.0\", \"type\": \"macvlan\", \"master\": \"ens192\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"dhcp\", \"routes\": [ { \"dst\": \"172.16.1.0/24\", \"gw\": \"172.16.1.1\" } ] } }"}}
    creationTimestamp: "2019-03-26T10:18:58Z"
    generation: 6
    name: macvlan-conf-dhcp
    namespace: default
    resourceVersion: "79005"
    selfLink: /apis/k8s.cni.cncf.io/v1/namespaces/default/network-attachment-definitions/macvlan-conf-dhcp
    uid: 91a83953-4fb0-11e9-839b-00505699f6bf
  spec:
    config: '{ "cniVersion": "0.3.0", "type": "macvlan", "master": "ens192", "mode":
      "bridge", "ipam": { "type": "dhcp", "routes": [ { "dst": "172.16.1.0/24", "gw":
      "172.16.1.1" } ] } }'
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
  • Target pod yaml info (with annotation, use kubectl get pod <podname> -o yaml)

k8s-admin-master:~$ kubectl get pod samplepod -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf-dhcp
    k8s.v1.cni.cncf.io/networks-status: ""
  creationTimestamp: "2019-03-26T13:28:14Z"
  name: samplepod
  namespace: default
  resourceVersion: "82464"
  selfLink: /api/v1/namespaces/default/pods/samplepod
  uid: 021b5643-4fcb-11e9-839b-00505699f6bf
spec:
  containers:
  - command:
    - /bin/bash
    - -c
    - sleep 2000000000000
    image: dougbtv/centos-network
    imagePullPolicy: Always
    name: samplepod
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-sw47n
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: k8s-worker-1
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-sw47n
    secret:
      defaultMode: 420
      secretName: default-token-sw47n
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-03-26T13:28:14Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-03-26T13:28:14Z"
    message: 'containers with unready status: [samplepod]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2019-03-26T13:28:14Z"
    message: 'containers with unready status: [samplepod]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2019-03-26T13:28:14Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: dougbtv/centos-network
    imageID: ""
    lastState: {}
    name: samplepod
    ready: false
    restartCount: 0
    state:
      waiting:
        reason: ContainerCreating
  hostIP: 192.168.41.255
  phase: Pending
  qosClass: BestEffort
  startTime: "2019-03-26T13:28:14Z"

This question originates from the open-source project intel/multus-cni.


10 replies

  • weixin_39632467 · 5 months ago

    Thanks for the link. This is probably a macvlan plugin issue.

    Here is my DHCP configuration on isc-dhcp-server:

    [Screenshot: isc-dhcp-server configuration, attached in the original thread]

    The host (Ubuntu Server 18.04) interface gets a correct IP address, and when I use macvlan without DHCP everything works fine:

    
    3: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 00:50:56:99:83:da brd ff:ff:ff:ff:ff:ff
        inet 172.16.1.27/24 brd 172.16.1.255 scope global dynamic ens192
           valid_lft 429sec preferred_lft 429sec
        inet6 fe80::250:56ff:fe99:83da/64 scope link
           valid_lft forever preferred_lft forever
  • weixin_39632467 · 5 months ago

    I tried another DHCP server in our VMware setup, the standard DHCP for our VM network; it works on ens160 inside the VM. Here is the output of inspecting the pause container:

    
    $ docker inspect 1ed555fcb815
    [
        {
            "Id": "1ed555fcb8153ed3638dcea4739e28aac2cdcf287d00cdaed2d14003b3d71758",
            "Created": "2019-03-27T09:34:09.803151448Z",
            "Path": "/pause",
            "Args": [],
            "State": {
                "Status": "running",
                "Running": true,
                "Paused": false,
                "Restarting": false,
                "OOMKilled": false,
                "Dead": false,
                "Pid": 9472,
                "ExitCode": 0,
                "Error": "",
                "StartedAt": "2019-03-27T09:34:10.625936084Z",
                "FinishedAt": "0001-01-01T00:00:00Z"
            },
            "Image": "sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e",
            "ResolvConfPath": "/var/lib/docker/containers/1ed555fcb8153ed3638dcea4739e28aac2cdcf287d00cdaed2d14003b3d71758/resolv.conf",
            "HostnamePath": "/var/lib/docker/containers/1ed555fcb8153ed3638dcea4739e28aac2cdcf287d00cdaed2d14003b3d71758/hostname",
            "HostsPath": "/var/lib/docker/containers/1ed555fcb8153ed3638dcea4739e28aac2cdcf287d00cdaed2d14003b3d71758/hosts",
            "LogPath": "/var/lib/docker/containers/1ed555fcb8153ed3638dcea4739e28aac2cdcf287d00cdaed2d14003b3d71758/1ed555fcb8153ed3638dcea4739e28aac2cdcf287d00cdaed2d14003b3d71758-json.log",
            "Name": "/k8s_POD_samplepod_default_59f868c9-5072-11e9-ab58-00505699f6bf_2",
            "RestartCount": 0,
            "Driver": "overlay2",
            "Platform": "linux",
            "MountLabel": "",
            "ProcessLabel": "",
            "AppArmorProfile": "docker-default",
            "ExecIDs": null,
            "HostConfig": {
                "Binds": null,
                "ContainerIDFile": "",
                "LogConfig": {
                    "Type": "json-file",
                    "Config": {}
                },
                "NetworkMode": "none",
                "PortBindings": {},
                "RestartPolicy": {
                    "Name": "",
                    "MaximumRetryCount": 0
                },
                "AutoRemove": false,
                "VolumeDriver": "",
                "VolumesFrom": null,
                "CapAdd": null,
                "CapDrop": null,
                "Dns": null,
                "DnsOptions": null,
                "DnsSearch": null,
                "ExtraHosts": null,
                "GroupAdd": null,
                "IpcMode": "shareable",
                "Cgroup": "",
                "Links": null,
                "OomScoreAdj": -998,
                "PidMode": "",
                "Privileged": false,
                "PublishAllPorts": false,
                "ReadonlyRootfs": false,
                "SecurityOpt": [
                    "seccomp=unconfined"
                ],
                "UTSMode": "",
                "UsernsMode": "",
                "ShmSize": 67108864,
                "Runtime": "runc",
                "ConsoleSize": [
                    0,
                    0
                ],
                "Isolation": "",
                "CpuShares": 2,
                "Memory": 0,
                "NanoCpus": 0,
                "CgroupParent": "/kubepods/besteffort/pod59f868c9-5072-11e9-ab58-00505699f6bf",
                "BlkioWeight": 0,
                "BlkioWeightDevice": null,
                "BlkioDeviceReadBps": null,
                "BlkioDeviceWriteBps": null,
                "BlkioDeviceReadIOps": null,
                "BlkioDeviceWriteIOps": null,
                "CpuPeriod": 0,
                "CpuQuota": 0,
                "CpuRealtimePeriod": 0,
                "CpuRealtimeRuntime": 0,
                "CpusetCpus": "",
                "CpusetMems": "",
                "Devices": null,
                "DeviceCgroupRules": null,
                "DiskQuota": 0,
                "KernelMemory": 0,
                "MemoryReservation": 0,
                "MemorySwap": 0,
                "MemorySwappiness": null,
                "OomKillDisable": false,
                "PidsLimit": 0,
                "Ulimits": null,
                "CpuCount": 0,
                "CpuPercent": 0,
                "IOMaximumIOps": 0,
                "IOMaximumBandwidth": 0,
                "MaskedPaths": [
                    "/proc/asound",
                    "/proc/acpi",
                    "/proc/kcore",
                    "/proc/keys",
                    "/proc/latency_stats",
                    "/proc/timer_list",
                    "/proc/timer_stats",
                    "/proc/sched_debug",
                    "/proc/scsi",
                    "/sys/firmware"
                ],
                "ReadonlyPaths": [
                    "/proc/bus",
                    "/proc/fs",
                    "/proc/irq",
                    "/proc/sys",
                    "/proc/sysrq-trigger"
                ]
            },
            "GraphDriver": {
                "Data": {
                    "LowerDir": "/var/lib/docker/overlay2/f7dea5d0e4e5bc8c7d05d062e103e6a3e1abd75c134423011b80fd6214d61352-init/diff:/var/lib/docker/overlay2/030a6c80688a925b2f636e8eccc9ed49499d4113cd617adb53b7428dc8469cb5/diff",
                    "MergedDir": "/var/lib/docker/overlay2/f7dea5d0e4e5bc8c7d05d062e103e6a3e1abd75c134423011b80fd6214d61352/merged",
                    "UpperDir": "/var/lib/docker/overlay2/f7dea5d0e4e5bc8c7d05d062e103e6a3e1abd75c134423011b80fd6214d61352/diff",
                    "WorkDir": "/var/lib/docker/overlay2/f7dea5d0e4e5bc8c7d05d062e103e6a3e1abd75c134423011b80fd6214d61352/work"
                },
                "Name": "overlay2"
            },
            "Mounts": [],
            "Config": {
                "Hostname": "samplepod",
                "Domainname": "",
                "User": "",
                "AttachStdin": false,
                "AttachStdout": false,
                "AttachStderr": false,
                "Tty": false,
                "OpenStdin": false,
                "StdinOnce": false,
                "Env": [
                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
                ],
                "Cmd": null,
                "Image": "k8s.gcr.io/pause:3.1",
                "Volumes": null,
                "WorkingDir": "",
                "Entrypoint": [
                    "/pause"
                ],
                "OnBuild": null,
                "Labels": {
                    "annotation.k8s.v1.cni.cncf.io/networks": "macvlan-conf-dhcp",
                    "annotation.kubernetes.io/config.seen": "2019-03-27T12:26:07.792393867+03:00",
                    "annotation.kubernetes.io/config.source": "api",
                    "io.kubernetes.container.name": "POD",
                    "io.kubernetes.docker.type": "podsandbox",
                    "io.kubernetes.pod.name": "samplepod",
                    "io.kubernetes.pod.namespace": "default",
                    "io.kubernetes.pod.uid": "59f868c9-5072-11e9-ab58-00505699f6bf"
                }
            },
            "NetworkSettings": {
                "Bridge": "",
                "SandboxID": "36b8b557b0a338ddee11c6c8d4cf1a061160eb84cff401ea3a8cf7972188cc73",
                "HairpinMode": false,
                "LinkLocalIPv6Address": "",
                "LinkLocalIPv6PrefixLen": 0,
                "Ports": {},
                "SandboxKey": "/var/run/docker/netns/36b8b557b0a3",
                "SecondaryIPAddresses": null,
                "SecondaryIPv6Addresses": null,
                "EndpointID": "",
                "Gateway": "",
                "GlobalIPv6Address": "",
                "GlobalIPv6PrefixLen": 0,
                "IPAddress": "",
                "IPPrefixLen": 0,
                "IPv6Gateway": "",
                "MacAddress": "",
                "Networks": {
                    "none": {
                        "IPAMConfig": null,
                        "Links": null,
                        "Aliases": null,
                        "NetworkID": "8dc35a56e0eed3814f2309f12384c6d9ab3db8111036bcbed7b24a554bd0b202",
                        "EndpointID": "9f7379432d7075a207a918506e07aca9ca939263e11de8b0f8011f319928d798",
                        "Gateway": "",
                        "IPAddress": "",
                        "IPPrefixLen": 0,
                        "IPv6Gateway": "",
                        "GlobalIPv6Address": "",
                        "GlobalIPv6PrefixLen": 0,
                        "MacAddress": "",
                        "DriverOpts": null
                    }
                }
            }
        }
    ]
    

    P.S.: Same behavior with a different DHCP server that works on our network:

    
    2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 00:50:56:99:f6:bf brd ff:ff:ff:ff:ff:ff
        inet 192.168.41.254/22 brd 192.168.43.255 scope global dynamic ens160
           valid_lft 257039sec preferred_lft 257039sec
        inet6 fe80::250:56ff:fe99:f6bf/64 scope link
           valid_lft forever preferred_lft forever

    CRD config:

    
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: macvlan-conf-dhcp
    spec:
      config: '{
          "cniVersion": "0.3.0",
          "type": "macvlan",
          "master": "ens160",
          "mode": "bridge",
          "ipam": {
              "type": "dhcp"
          }
        }'
    

    BTW, it does not seem like a DHCP server issue. But maybe I'm wrong. :)

  • weixin_39868248 · 5 months ago

    Similar issue here. When I use host-local, I can get on the same network as the DHCP server and ping it. But when I use DHCP, the pods freeze and don't initialize because of this error: Multus: error in invoke Delegate add - "macvlan": error calling DHCP.Allocate: too many open files

    I ran a test and the DHCP request isn't even reaching the DHCP server, so the issue is somehow related to the dhcp plugin. I went into all nodes running Docker, set all of the limits to unlimited, and restarted those Docker instances. It didn't matter; same error.

    Not sure if this is exactly the same issue or a related one.
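
    For the "too many open files" error specifically, the limit that matters is the one on the dhcp daemon process itself rather than on Docker; a hedged sketch of raising it in the shell that launches the daemon (note that sudo/PAM configuration may reset limits on some systems):

```shell
# Raise the soft file-descriptor limit for this shell; the daemon
# started afterwards inherits it.
ulimit -n 65536
sudo /opt/cni/bin/dhcp daemon
```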

  • weixin_39838798 · 5 months ago

    Hi, I hit the same problem; when I use 'host-local' mode, it runs fine.

  • weixin_39868248 · 5 months ago

    I just got an email notification about currycan's reply to this thread. It has been a while since I ran into this issue, but I did wind up solving it. What was happening was that the upstream DHCP server was sending routes back, which seemed to confuse the DHCP plugin. When I changed the DHCP server to NOT send those routes back, everything worked great and the pods came up fine with DHCP.

    NOTE: I did not do the additional analysis to look into precisely what it was about those routes that was breaking multus, but there are any number of ways to set routes without relying on your DHCP server to set them for you. And in fact, it is actually better to set your routes on your Linux host because if the routes have issues, facilities like iproute2 won't even allow you to set them.
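
    Setting the routes on the host instead of via DHCP can be sketched with iproute2 (the destination and gateway below are placeholders, not values from this thread):

```shell
# Add a static route on the host rather than accepting it from DHCP;
# iproute2 validates the route before installing it.
sudo ip route add 10.10.0.0/16 via 172.16.1.1 dev ens192
ip route show dev ens192    # verify the route was accepted
```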

  • weixin_39811166 · 5 months ago

    I'm getting a similar issue when running the dhcp daemon.

    I'm running CoreOS stable, with pfSense as my router.

    
    sudo ./dhcp daemon
    2020/01/15 01:06:24 1943bec66d91cebd9734e45678883c2d2329b32648439433089a24927935442f/macvlan-conf/net1: acquiring lease
    2020/01/15 01:06:24 Link "net1" down. Attempting to set up
    2020/01/15 01:06:24 network is down
    2020/01/15 01:06:32 no DHCP packet received within 5s
    2020/01/15 01:06:45 no DHCP packet received within 5s
    2020/01/15 01:07:08 2a097ab3ee84e359ac65e72c8c4500c72581b4c9b7a71c300059e5500692a822/macvlan-conf/net1: acquiring lease
    2020/01/15 01:07:08 Link "net1" down. Attempting to set up
    2020/01/15 01:07:08 network is down
    2020/01/15 01:07:17 no DHCP packet received within 5s
    2020/01/15 01:07:30 no DHCP packet received within 5s
    2020/01/15 01:07:52 0a771ae626aba1a80c0f0b8a93b368cf11143b2681e0efc640599daf7ca3a51a/macvlan-conf/net1: acquiring lease
    2020/01/15 01:07:52 Link "net1" down. Attempting to set up
    2020/01/15 01:07:52 network is down
    2020/01/15 01:08:01 no DHCP packet received within 5s
    2020/01/15 01:08:14 no DHCP packet received within 5s
    

    I can see in pfSense's dhcpd logs that it is offering addresses, but the dhcp CNI daemon doesn't seem to pick them up.

    
    Jan 14 20:06:27 | dhcpd |   | DHCPDISCOVER from 00:0c:29:73:9c:d8 via em1
    Jan 14 20:06:28 | dhcpd |   | DHCPOFFER on 192.168.1.58 to 00:0c:29:73:9c:d8 via em1
    Jan 14 20:06:40 | dhcpd |   | DHCPDISCOVER from 00:0c:29:73:9c:d8 via em1
    Jan 14 20:06:40 | dhcpd |   | DHCPOFFER on 192.168.1.58 to 00:0c:29:73:9c:d8 via em1
    Jan 14 20:07:12 | dhcpd |   | DHCPDISCOVER from 00:0c:29:73:9c:d8 via em1
    Jan 14 20:07:13 | dhcpd |   | DHCPOFFER on 192.168.1.59 to 00:0c:29:73:9c:d8 via em1
    Jan 14 20:07:25 | dhcpd |   | DHCPDISCOVER from 00:0c:29:73:9c:d8 via em1
    Jan 14 20:07:25 | dhcpd |   | DHCPOFFER on 192.168.1.59 to 00:0c:29:73:9c:d8 via em1
    Jan 14 20:07:56 | dhcpd |   | DHCPDISCOVER from 00:0c:29:73:9c:d8 via em1
    Jan 14 20:07:57 | dhcpd |   | DHCPOFFER on 192.168.1.62 to 00:0c:29:73:9c:d8 via em1
    Jan 14 20:08:09 | dhcpd |   | DHCPDISCOVER from 00:0c:29:73:9c:d8 via em1
    Jan 14 20:08:09 | dhcpd |   | DHCPOFFER on 192.168.1.62 to 00:0c:29:73:9c:d8 via em1
    
    
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: ipvlan-conf
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "type": "ipvlan",
          "master": "ens190",
          "ipam": {
            "type": "dhcp",
            "routes": [
              { "dst": "192.168.0.0/16"}
            ]
          }
        }
    
    
    apiVersion: v1
    kind: Pod
    metadata:
      name: samplepod
      annotations:
        k8s.v1.cni.cncf.io/networks: ipvlan-conf
    spec:
      containers:
      - name: samplepod
        image: nginx:latest
    
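    One way to narrow this down is to capture DHCP traffic on the master interface while the daemon retries; if the DHCPOFFERs show up in the capture but the daemon still times out, the problem is on the plugin side rather than the server (ens190 is taken from the config above; a diagnostic sketch):

```shell
# Watch DHCP client/server traffic on the ipvlan master interface.
sudo tcpdump -envi ens190 'port 67 or port 68'
```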
  • weixin_39822728 · 5 months ago

    From the Multus CNI point of view, Multus just provides the CNI interface to CNI plugins, so this DHCP issue is out of our scope; it should be a DHCP plugin or DHCP server problem. Hence it is closed.

  • weixin_39632467 · 5 months ago

    I know that /opt/cni/bin/dhcp is a containernetworking plugin; according to the documentation at https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp, its daemon should be running. And yes, without this daemon Multus fails to execute the CNI plugin. The DHCP server is available; the existing addresses on ens192 are assigned dynamically by that DHCP server (isc-dhcp-server).

  • weixin_39822728 · 5 months ago

    Looking at the logs, Multus seems to create the interface (net1), but the dhcp plugin fails to get an IP address while the DHCP daemon tries to send requests:

    
    2019/03/26 15:24:28 9b53df47ab29fefe626183afed80d9bdf7acc3f4c439a928a7d4a1c759d8a2ed/macvlan-conf-dhcp: acquiring lease
    2019/03/26 15:24:28 Link "net1" down. Attempting to set up
    2019/03/26 15:24:28 network is down
    2019/03/26 15:24:36 resource temporarily unavailable
    2019/03/26 15:24:49 resource temporarily unavailable
    

    Could you please check the DHCP server configuration?

    P.S. Similar logs are shown in this case: https://github.com/containernetworking/cni/issues/398

  • weixin_39822728 · 5 months ago

    Did you check DHCP server availability without Multus? The sudo ./dhcp daemon process is not a DHCP server; it is just the "DHCP CNI Plugin" daemon, so you need a DHCP server in addition to ./dhcp daemon.
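
    A way to verify this end to end without Multus is to create a macvlan link on the same master by hand and request a lease with dhclient (a sketch; the link name testmv is an arbitrary assumption, and ens192 is the master from the config above):

```shell
# Manually reproduce what the macvlan + dhcp plugins do, outside CNI.
sudo ip link add testmv link ens192 type macvlan mode bridge
sudo ip link set testmv up
sudo dhclient -v testmv     # should log DISCOVER/OFFER/REQUEST/ACK
ip addr show testmv         # check the leased address
sudo dhclient -r testmv && sudo ip link del testmv   # clean up
```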
