weixin_39632891
2021-01-05 12:25

Issue with k8s.io/docs/tasks/access-application-cluster/connecting-frontend-backend/

This is a Bug Report

Problem: I followed the steps outlined in the task but hit a dead end after creating the frontend: the frontend service is never assigned an external IP, even after waiting a long time. Here is a copy of my terminal session:


$ kubectl apply -f https://k8s.io/examples/service/access/hello.yaml
deployment.apps/hello created
$ kubectl describe deployment hello
Name:                   hello
Namespace:              default
CreationTimestamp:      Wed, 29 May 2019 11:14:13 +1000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"hello","namespace":"default"},"spec":{"replicas":7,"selec...
Selector:               app=hello,tier=backend,track=stable
Replicas:               7 desired | 7 updated | 7 total | 7 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=hello
           tier=backend
           track=stable
  Containers:
   hello:
    Image:        gcr.io/google-samples/hello-go-gke:1.0
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   hello-6c9c9df6cd (7/7 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  12s   deployment-controller  Scaled up replica set hello-6c9c9df6cd to 7
$ kubectl apply -f https://k8s.io/examples/service/access/hello-service.yaml
service/hello created
$ kubectl get services
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
hello        ClusterIP   10.98.191.2   <none>        80/TCP    34s
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   118s
$ kubectl apply -f https://k8s.io/examples/service/access/frontend.yaml
service/frontend created
deployment.apps/frontend created
$ kubectl get services
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
frontend     LoadBalancer   10.96.195.34   <pending>     80:32608/TCP   52s
hello        ClusterIP      10.98.191.2    <none>        80/TCP         110s
kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        3m14s
$ kubectl get service frontend --watch
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
frontend   LoadBalancer   10.96.195.34   <pending>     80:32608/TCP   91s

Proposed Solution: For beginners such as me, it would be helpful if the page provided some troubleshooting guidance for when the external IP is not assigned.
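For example, on a local cluster such as minikube (which, as the node name in the output below shows, is what I am running), there is no cloud controller to allocate an external IP, so a LoadBalancer service stays <pending> indefinitely. A sketch of the kind of workaround the docs could mention (commands assume minikube):

```shell
# Option 1: run a tunnel so minikube can assign the Service an external IP.
# This keeps running in the foreground; re-check with `kubectl get services`
# in another terminal.
minikube tunnel

# Option 2: ask minikube for a URL that reaches the Service via its node port.
minikube service frontend --url
```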

Page to Update: https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:14:56Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

$ uname -a
Linux linix-cl 4.15.0-50-generic #54-Ubuntu SMP Mon May 6 18:46:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

This question originates from the open-source project: kubernetes/website


7 replies

  • weixin_39915721 4 months ago

    Can you please run kubectl get pods and check the name of the frontend pod, then run kubectl describe pod <pod name>? That should give an idea of where the pod is stuck. The external IP is assigned to the frontend service only once the pod is in the Running state.

  • weixin_39632891 4 months ago

    I managed to trash my Kubernetes environment on my Ubuntu machine, so I have moved over to my Windows machine. I'm still seeing the same problem on Windows: the external IP is not assigned to the frontend service. Here's the pod description:

    
    PS > kubectl get pods
    NAME                       READY   STATUS    RESTARTS   AGE
    frontend-895c8799c-2dhdp   1/1     Running   0          3m48s
    hello-6c9c9df6cd-4svts     1/1     Running   0          4m56s
    hello-6c9c9df6cd-5hffn     1/1     Running   0          4m56s
    hello-6c9c9df6cd-6thhh     1/1     Running   0          4m56s
    hello-6c9c9df6cd-hf68s     1/1     Running   0          4m56s
    hello-6c9c9df6cd-lm955     1/1     Running   0          4m56s
    hello-6c9c9df6cd-nhfs9     1/1     Running   0          4m56s
    hello-6c9c9df6cd-p4fm7     1/1     Running   0          4m56s
    PS > kubectl describe pod frontend-895c8799c-2dhdp
    Name:               frontend-895c8799c-2dhdp
    Namespace:          default
    Priority:           0
    PriorityClassName:  <none>
    Node:               minikube/172.24.249.6
    Start Time:         Thu, 30 May 2019 16:33:25 +1000
    Labels:             app=hello
                        pod-template-hash=895c8799c
                        tier=frontend
                        track=stable
    Annotations:        <none>
    Status:             Running
    IP:                 172.17.0.12
    Controlled By:      ReplicaSet/frontend-895c8799c
    Containers:
      nginx:
        Container ID:   docker://b9cfad7909061259796280d1d94b3201f106950451ee0dfb789981e4fdf46eef
        Image:          gcr.io/google-samples/hello-frontend:1.0
        Image ID:       docker-pullable://gcr.io/google-samples/hello-frontend:3857a9dbcbd72cdd52c9bea46cf45136c9be46a1adccd0d993b9ca989cdb5c22
        Port:           <none>
        Host Port:      <none>
        State:          Running
          Started:      Thu, 30 May 2019 16:33:39 +1000
        Ready:          True
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-lkp6b (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             True
      ContainersReady   True
      PodScheduled      True
    Volumes:
      default-token-lkp6b:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-lkp6b
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type    Reason     Age    From               Message
      ----    ------     ----   ----               -------
      Normal  Scheduled  4m58s  default-scheduler  Successfully assigned default/frontend-895c8799c-2dhdp to minikube
      Normal  Pulling    4m57s  kubelet, minikube  Pulling image "gcr.io/google-samples/hello-frontend:1.0"
      Normal  Pulled     4m44s  kubelet, minikube  Successfully pulled image "gcr.io/google-samples/hello-frontend:1.0"
      Normal  Created    4m44s  kubelet, minikube  Created container nginx
      Normal  Started    4m44s  kubelet, minikube  Started container nginx
    PS > kubectl get service
    NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
    frontend     LoadBalancer   10.97.80.59    <pending>     80:30573/TCP   6m29s
    hello        ClusterIP      10.103.91.88   <none>        80/TCP         6m43s
    kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        8m27s
  • weixin_39738251 4 months ago

    Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

    If this issue is safe to close now please do so with /close.

    Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

  • weixin_39680208 4 months ago

    This task uses Services with external load balancers, which require a supported environment.

    Maybe that's relevant to why it didn't work? /triage support
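    For clusters without a load-balancer integration (such as minikube in this report), one workaround the page could mention is exposing the frontend as a NodePort service instead. A minimal sketch, assuming the selector labels shown in the pod description above (not the task's verbatim frontend.yaml):

    ```yaml
    # Hypothetical variant of the frontend Service for clusters with no
    # cloud load balancer: NodePort instead of LoadBalancer.
    apiVersion: v1
    kind: Service
    metadata:
      name: frontend
    spec:
      selector:
        app: hello
        tier: frontend
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
      type: NodePort
    ```

    The service is then reachable at any node's IP on the allocated node port shown by kubectl get service frontend, with no external IP required.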

  • weixin_39738251 4 months ago

    Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

    If this issue is safe to close now please do so with /close.

    Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

  • weixin_39680208 4 months ago

    /close

  • weixin_39878401 4 months ago

    Closing this issue.

    In response to [this](https://github.com/kubernetes/website/issues/14598#issuecomment-541124113):

    > /close

    Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
